Prosecution Insights
Last updated: April 19, 2026
Application No. 18/333,063

ENHANCING THE QUALITY OF SIMULATED NETWORK DATA USING GENERATIVE ADVERSARIAL NETWORKS

Non-Final OA (§103, §112)
Filed: Jun 12, 2023
Examiner: HWA, SHYUE JIUNN
Art Unit: 2156
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Juniper Networks Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (703 granted / 852 resolved; +27.5% vs TC avg), above average
Interview Lift: +39.0% for resolved cases with an interview vs without
Typical Timeline: 3y 2m average prosecution; 28 applications currently pending
Career History: 880 total applications across all art units

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 42.1% (+2.1% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates; based on career data from 852 resolved cases.

Office Action

Rejections: §103 and §112
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1-20 are pending in this office action. This action is responsive to Applicant's application filed 06/12/2023.

Information Disclosure Statement

3. The references listed in the IDS filed 06/21/2023 and 02/14/2024 have been considered. A copy of the signed or initialed IDS is hereby attached.

Claim Rejections - 35 USC § 112

The following is a quotation of the second paragraph of 35 U.S.C. 112:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

4. Claims 6 and 18 are rejected under 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention. Regarding claims 6 and 18, each claim recites "that is masked", and it is unclear to what "that is" refers. There is also insufficient antecedent basis for "masked".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims under 35 U.S.C.
103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of 35 U.S.C. 103(c) and potential 35 U.S.C. 102(e), (f) or (g) prior art under 35 U.S.C. 103(a).

5. Claims 1 and 4-6 are rejected under 35 U.S.C. 103(a) as being unpatentable over Soni et al. (US Patent Publication No. 2020/0311482 A1, hereinafter "Soni") in view of Van Aert et al. (US Patent Publication No. 2025/0078216 A1, hereinafter "Van Aert").

As to Claim 1, Soni teaches the claimed limitations:

"A method, comprising:" as a method comprises generating synthetic multi-channel data associated with a synthetic version of imaging data (paragraph 0005).

"receiving, by a device, real network data associated with a network" as the novel generative adversarial network can include, during a training phase, a multi-channel generator that produces multi-channel data and a discriminator that receives the multi-channel data to classify real data or synthetic data (paragraph 0021).

"receiving, by the device, a random latent vector and a random process sample" as the generative modeling component (e.g., the multi-channel generator component of the generative modeling component) can receive one or more latent random variables. The one or more latent random variables can be sampled from a data distribution of random variables, and the one or more latent random variables can be a vector of random variables (paragraph 0025).

"utilizing, by the device, the random latent vector with a generative adversarial network (GAN) model to generate synthetic network data" as the multi-channel generator component can employ a deep neural network such as a convolutional neural network to generate the synthetic multi-channel data, and the convolutional neural network can be a spring network of convolutional layers (paragraph 0027). The training component can employ the first predicted label or the second predicted label for the imaging data to train a generative artificial intelligence model; alternatively, the training component can employ the first predicted label or the second predicted label for the synthetic multi-channel data to train the generative artificial intelligence model (paragraph 0030).

"training, by the device, the GAN model with the real network data and the synthetic network data to generate a trained GAN model" as the segmentation component can generate first segmentation data indicative of a segmentation for a real image included in the imaging data (e.g., a real image included in a first data channel of the imaging data). The first segmentation data can be included in a data channel of the imaging data. Additionally, the segmentation component can generate second segmentation data indicative of a remaining segmentation for the real image included in the imaging data (paragraph 0034).
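For orientation on the generator/discriminator arrangement discussed above, the toy sketch below illustrates the standard GAN data flow the claim language describes: a random latent vector is mapped to synthetic samples, and a discriminator scores real versus synthetic data. This is a hedged, numpy-only illustration with hypothetical shapes and linear "networks"; it is not the applicant's or Soni's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W, b):
    # Map a random latent vector z to a synthetic sample (linear toy generator).
    return z @ W + b

def discriminator(x, v, c):
    # Logistic score in (0, 1): probability that x is real rather than synthetic.
    return 1.0 / (1.0 + np.exp(-(x @ v + c)))

latent_dim, data_dim = 4, 3
W = rng.normal(size=(latent_dim, data_dim))
b = np.zeros(data_dim)
v = rng.normal(size=data_dim)
c = 0.0

real = rng.normal(loc=1.0, size=(8, data_dim))   # stand-in for "real network data"
z = rng.normal(size=(8, latent_dim))             # random latent vectors
synthetic = generator(z, W, b)                   # "synthetic network data"

# Standard GAN objectives: the discriminator is trained to push real samples
# toward 1 and synthetic samples toward 0; the generator is trained to make
# the discriminator score its output as real.
d_loss = -np.mean(np.log(discriminator(real, v, c)) +
                  np.log(1.0 - discriminator(synthetic, v, c)))
g_loss = -np.mean(np.log(discriminator(synthetic, v, c)))
print(synthetic.shape)
```

In a training loop, both losses would be minimized alternately by gradient steps on the respective parameters; here only the forward pass and losses are shown.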
Soni does not explicitly teach the claimed limitation "utilizing, by the device, the random process sample with a random process to generate simulated network data; applying, by the device, weights to the real network data, the synthetic network data, and the simulated network data to generate weighted real network data, weighted synthetic network data, and weighted simulated network data; combining, by the device, the weighted real network data, the weighted synthetic network data, and the weighted simulated network data to generate interpolated network data; and performing, by the device, one or more actions based on the interpolated network data".

Van Aert teaches that a method of generating a plurality of training image pairs may comprise creating distorted images that include combinations of simulated noise and/or artefacts representative of different noise and/or artefact sources of varying severity and relative weights (paragraph 0044; claim 11). Since experimentally only data can be collected that are at least to some extent distorted and/or noisy, the synthetic generation of undistorted and distorted images offers many aspects. It is an aspect that the ANN can be efficiently trained with a large set of realistic simulated data that can cover a large space of different types and relative weights of noise and artefacts, different imaged specimens, and/or different acquisition settings and/or environments; the ANN can be easily trained with a large set of simulated data images per trained network that thus may cover a very wide range of possible use cases and scenarios (paragraphs 0100-0104). The numerical parameter ranges that are applied for the data generation may be fine-tuned based on analyzing a large number of high-quality simulations of TEM images for different specimens and microscope settings.

This has the aspect that, once suitable parameter ranges (or sampling distributions) are determined for each atomic number Z from detailed simulations of electron microscopy image formation and electron-specimen interaction processes, sufficiently realistic images can be generated at a very low computational cost by random sampling of the parameters (paragraphs 0121, 0123).

The (intermediate) distorted image may then be created by using bicubic interpolation and evaluating on the non-regular grid, which is built by adding the positions of the regular grid and the generated displacements. Thus, the distorted image under construction may be updated by the (X- and Y-) jitter processes: x←SJ(y) (paragraph 0155).

Interpolation distortions can occur as the result of a user interaction, e.g. by applying a transformation function to the image as received by the user/system before restoration, i.e. the distorted image. Such a transformation might be needed to obtain suitable input for a further postprocessing step, for a better visualization of an area of interest, and/or for other case-specific goals of the user or the postprocessing flow. Such an interpolation distortion may be modelled by applying a random transformation, e.g. a random linear transformation matrix, to the training image pair (paragraphs 0167-0168).

In summary, training data may be provided by generating undistorted synthetic images and creating distorted images, e.g. as a realization of stochastic variables. Post-processing distortions may be added (to one or more training samples) by applying one or more simulated post-processing distortions to the undistorted and/or distorted image of the pair, as discussed hereinabove. For example, an interpolation distortion may be simulated by applying a random transformation, e.g. a random linear transformation matrix, to the training image pair. Gaussian blurring may be simulated by convolution of the distorted image with a 2D Gaussian function (paragraph 0172).
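The "random linear transformation matrix" distortion Van Aert describes can be sketched as follows. This is a hedged numpy-only illustration with hypothetical values, using nearest-neighbour resampling in place of the bicubic interpolation the reference recites; it is not the reference's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_linear_distortion(img, jitter=0.05):
    # Simulate an "interpolation distortion": apply a random linear transform
    # (identity plus a small random perturbation) to the pixel coordinates of
    # a regular grid, then resample the image on the transformed grid with a
    # nearest-neighbour lookup.
    h, w = img.shape
    A = np.eye(2) + rng.normal(scale=jitter, size=(2, 2))  # random transform matrix
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    coords = A @ np.stack([ys.ravel(), xs.ravel()])        # transformed coordinates
    yy = np.clip(np.rint(coords[0]).astype(int), 0, h - 1)
    xx = np.clip(np.rint(coords[1]).astype(int), 0, w - 1)
    return img[yy, xx].reshape(h, w)

clean = rng.random((16, 16))          # stand-in for an undistorted synthetic image
distorted = random_linear_distortion(clean)
print(distorted.shape)
```

A training pair would then consist of `clean` (target) and `distorted` (input), optionally with further simulated post-processing distortions applied.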
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Soni and Van Aert before him/her, to modify Soni to utilize the random process sample with a random process to generate simulated network data, because that would provide a good, versatile and/or efficient image restoration of electron microscopy images, i.e. to reduce and/or remove image artefacts due to various sources of noise and image distortion, e.g. in an efficient and widely applicable manner, as taught by Van Aert (paragraph 0015).

As to Claim 4, Soni teaches the claimed limitation "wherein training the GAN model with the real network data and the synthetic network data to generate the trained GAN model comprises: training the GAN model with the real network data and the synthetic network data to generate the weight" (paragraphs 0021, 0025-0027, 0030, 0034-0035, 0038). Van Aert teaches (paragraphs 0002, 0100-0104, 0121, 0123).

As to Claim 5, Soni teaches the claimed limitation "wherein applying the weights to the real network data, the synthetic network data, and the simulated network data to generate the weighted real network data, the weighted synthetic network data, and the weighted simulated network data comprises: applying a first weight to the real network data to generate the weighted real network data; applying a second weight to the synthetic network data to generate the weighted synthetic network data; and applying a third weight to the simulated network data to generate the weighted simulated network data, wherein a sum of the first weight, the second weight, and the third weight is equal to one" (paragraphs 0021, 0025, 0027, 0030, 0034-0035, 0037-0038, 0050). Van Aert teaches (paragraphs 0044, 0100-0104, 0115, 0121, 0123, 0155, 0164-0165, 0167-0168, 0171-0173, 0175, 0201, 0209).
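The weighted combination recited in claim 5 is a convex combination: three weights summing to one are applied to the real, synthetic, and simulated datasets, and the weighted datasets are combined into interpolated data. A minimal numpy sketch, with hypothetical weights and stand-in data not drawn from the application:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in datasets of equal shape (hypothetical values, for illustration only).
real = rng.normal(size=(5, 3))        # real network data
synthetic = rng.normal(size=(5, 3))   # GAN output
simulated = rng.normal(size=(5, 3))   # random-process output

w1, w2, w3 = 0.5, 0.3, 0.2            # first, second, third weights; sum to one
assert abs((w1 + w2 + w3) - 1.0) < 1e-12

# Apply each weight, then combine the weighted datasets into interpolated data.
interpolated = w1 * real + w2 * synthetic + w3 * simulated
print(interpolated.shape)
```

Because the weights sum to one, each interpolated value stays on the same scale as its three inputs, which is presumably why the claim constrains the sum.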
As to Claim 6, Soni teaches the claimed limitation "wherein a value of the first weight determines a quantity of the real network data that is masked" (paragraphs 0025-0026, 0035, 0037-0038, 0042, 0049).

6. Claims 2, 8, and 10-14 are rejected under 35 U.S.C. 103(a) as being unpatentable over Soni et al. (US Patent Publication No. 2020/0311482 A1) as applied to claims 1 and 8 above, and further in view of Van Aert et al. (US Patent Publication No. 2025/0078216 A1) and Itu et al. (US Patent Publication No. 2019/0139641 A1, hereinafter "Itu").

As to Claim 2, Soni does not explicitly teach the claimed limitation "wherein the real network data includes a multivariate dataset". Itu teaches that the generator network is seeded with a randomized input that is sampled from a predefined latent space (e.g. a multivariate normal distribution) (paragraph 0061). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Soni, Van Aert and Itu before him/her, to modify Soni such that the real network data includes a multivariate dataset, because that would provide a good, versatile and/or efficient image restoration of electron microscopy images, i.e. to reduce and/or remove image artefacts due to various sources of noise and image distortion, e.g. in an efficient and widely applicable manner, as taught by Van Aert (paragraph 0015), or would provide this in an automated manner, since machine learning algorithms have superior predictive capabilities in complex tasks, showing expert-level performance, as taught by Itu (paragraph 0020).

As to Claim 8, Soni teaches the claimed limitations:

"A device, comprising: one or more memories; and one or more processors to:" as a computer readable storage device is provided. The computer readable storage device comprises instructions that, in response to execution, cause a system comprising a processor to perform operations (paragraph 0006).
"receive real network data associated with a network" as the novel generative adversarial network can include, during a training phase, a multi-channel generator that produces multi-channel data and a discriminator that receives the multi-channel data to classify real data or synthetic data (paragraph 0021).

"receive a random latent vector and a random process sample" as the generative modeling component (e.g., the multi-channel generator component of the generative modeling component) can receive one or more latent random variables. The one or more latent random variables can be sampled from a data distribution of random variables, and the one or more latent random variables can be a vector of random variables (paragraph 0025).

"utilizing the random latent vector with a generative adversarial network (GAN) model to generate synthetic network data" as the multi-channel generator component can employ a deep neural network such as a convolutional neural network to generate the synthetic multi-channel data, and the convolutional neural network can be a spring network of convolutional layers (paragraph 0027). The training component can employ the first predicted label or the second predicted label for the imaging data to train a generative artificial intelligence model; alternatively, the training component can employ the first predicted label or the second predicted label for the synthetic multi-channel data to train the generative artificial intelligence model (paragraph 0030).

"training the GAN model with the real network data and the synthetic network data to generate a trained GAN model" as the segmentation component can generate first segmentation data indicative of a segmentation for a real image included in the imaging data (e.g., a real image included in a first data channel of the imaging data). The first segmentation data can be included in a data channel of the imaging data. Additionally, the segmentation component can generate second segmentation data indicative of a remaining segmentation for the real image included in the imaging data (paragraph 0034).

Soni does not explicitly teach the claimed limitation "utilizing the random process sample with a random process to generate simulated network data; applying weights to the real network data, the synthetic network data, and the simulated network data to generate weighted real network data, weighted synthetic network data, and weighted simulated network data; combine the weighted real network data, the weighted synthetic network data, and the weighted simulated network data to generate interpolated network data; and perform one or more actions based on the interpolated network data".

Van Aert teaches that a method of generating a plurality of training image pairs may comprise creating distorted images that include combinations of simulated noise and/or artefacts representative of different noise and/or artefact sources of varying severity and relative weights (paragraph 0044; claim 11). Since experimentally only data can be collected that are at least to some extent distorted and/or noisy, the synthetic generation of undistorted and distorted images offers many aspects. It is an aspect that the ANN can be efficiently trained with a large set of realistic simulated data that can cover a large space of different types and relative weights of noise and artefacts, different imaged specimens, and/or different acquisition settings and/or environments; the ANN can be easily trained with a large set of simulated data images per trained network that thus may cover a very wide range of possible use cases and scenarios (paragraphs 0100-0104). The numerical parameter ranges that are applied for the data generation may be fine-tuned based on analyzing a large number of high-quality simulations of TEM images for different specimens and microscope settings.

This has the aspect that, once suitable parameter ranges (or sampling distributions) are determined for each atomic number Z from detailed simulations of electron microscopy image formation and electron-specimen interaction processes, sufficiently realistic images can be generated at a very low computational cost by random sampling of the parameters (paragraphs 0121, 0123).

The (intermediate) distorted image may then be created by using bicubic interpolation and evaluating on the non-regular grid, which is built by adding the positions of the regular grid and the generated displacements. Thus, the distorted image under construction may be updated by the (X- and Y-) jitter processes: x←SJ(y) (paragraph 0155).

Interpolation distortions can occur as the result of a user interaction, e.g. by applying a transformation function to the image as received by the user/system before restoration, i.e. the distorted image. Such a transformation might be needed to obtain suitable input for a further postprocessing step, for a better visualization of an area of interest, and/or for other case-specific goals of the user or the postprocessing flow. Such an interpolation distortion may be modelled by applying a random transformation, e.g. a random linear transformation matrix, to the training image pair (paragraphs 0167-0168).

In summary, training data may be provided by generating undistorted synthetic images and creating distorted images, e.g. as a realization of stochastic variables. Post-processing distortions may be added (to one or more training samples) by applying one or more simulated post-processing distortions to the undistorted and/or distorted image of the pair, as discussed hereinabove. For example, an interpolation distortion may be simulated by applying a random transformation, e.g. a random linear transformation matrix, to the training image pair. Gaussian blurring may be simulated by convolution of the distorted image with a 2D Gaussian function (paragraph 0172).
Soni does not explicitly teach the claimed limitation "wherein the real network data includes a multivariate dataset". Itu teaches that the generator network is seeded with a randomized input that is sampled from a predefined latent space (e.g. a multivariate normal distribution) (paragraph 0061). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Soni, Van Aert and Itu before him/her, to modify Soni such that the real network data includes a multivariate dataset, because that would provide a good, versatile and/or efficient image restoration of electron microscopy images, i.e. to reduce and/or remove image artefacts due to various sources of noise and image distortion, e.g. in an efficient and widely applicable manner, as taught by Van Aert (paragraph 0015), or would provide this in an automated manner, since machine learning algorithms have superior predictive capabilities in complex tasks, showing expert-level performance, as taught by Itu (paragraph 0020).

As to Claim 10, Soni teaches the claimed limitation "wherein the GAN model includes a generator component and a discriminator component" (paragraphs 0004, 0021, 0023, 0030, 0041, 0048-0049). Van Aert teaches (paragraphs 0180, 0195, 0198, 0232).

As to Claim 11, Soni does not explicitly teach the claimed limitation "wherein the one or more processors, to perform the one or more actions, are to one or more of: provide the interpolated network data for display; or retrain the GAN model based on the interpolated network data". Van Aert teaches (paragraphs 0155, 0167, 0172). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Soni and Van Aert before him/her, to modify Soni to use interpolated network data, because that would provide a good, versatile and/or efficient image restoration of electron microscopy images, i.e. to reduce and/or remove image artefacts due to various sources of noise and image distortion, e.g. in an efficient and widely applicable manner, as taught by Van Aert (paragraph 0015).

As to Claim 12, Soni does not explicitly teach the claimed limitation "wherein the one or more processors, to perform the one or more actions, are to one or more of: train a network anomaly detection model with the interpolated network data; or train a network forecasting model with the interpolated network data". Van Aert teaches (paragraphs 0149-0157, 0160-0163, 0167-0168, 0172). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Soni and Van Aert before him/her, to modify Soni to train a network anomaly detection model, because that would provide a good, versatile and/or efficient image restoration of electron microscopy images, i.e. to reduce and/or remove image artefacts due to various sources of noise and image distortion, e.g. in an efficient and widely applicable manner, as taught by Van Aert (paragraph 0015).

As to Claim 13, Soni does not explicitly teach the claimed limitation "wherein the one or more processors, to perform the one or more actions, are to: perform initial training of a network anomaly detection model with the interpolated network data; and perform fine tune training of the network anomaly detection model with the real network data". Van Aert teaches (paragraphs 0071, 0121, 0133, 0155, 0167-0168, 0172-0179). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Soni and Van Aert before him/her, to modify Soni to perform fine tune training of the network anomaly detection model, because that would provide a good, versatile and/or efficient image restoration of electron microscopy images, i.e. to reduce and/or remove image artefacts due to various sources of noise and image distortion, e.g. in an efficient and widely applicable manner, as taught by Van Aert (paragraph 0015).

As to Claim 14, Soni does not explicitly teach the claimed limitation "wherein the one or more processors, to perform the one or more actions, are to: deploy a network forecasting model or a network anomaly detection model, trained with the interpolated network data, in the network". Van Aert teaches (paragraphs 0149-0157, 0160-0163, 0167-0168, 0172). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Soni and Van Aert before him/her, to modify Soni to deploy a network forecasting model or a network anomaly detection model trained with the interpolated network data, because that would provide a good, versatile and/or efficient image restoration of electron microscopy images, i.e. to reduce and/or remove image artefacts due to various sources of noise and image distortion, e.g. in an efficient and widely applicable manner, as taught by Van Aert (paragraph 0015).

7. Claims 3, 7, 9, and 15-20 are rejected under 35 U.S.C. 103(a) as being unpatentable over Soni et al. (US Patent Publication No. 2020/0311482 A1) as applied to claims 1, 8, and 15 above, and further in view of Van Aert et al. (US Patent Publication No. 2025/0078216 A1) and Dalli et al. (US Patent Publication No. 2022/0172050 A1, hereinafter "Dalli").

As to Claim 3, Soni does not explicitly teach the claimed limitation "wherein the GAN model is a Wasserstein recurrent GAN model". Dalli teaches that the method includes two architectures: Causal Controller and CausalGAN. The Causal Controller architecture is a variation of the Wasserstein GAN, and it is used to control the distribution of the images to be sampled from when intervened or conditioned on a set of image labels (paragraph 0013).
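For context on the Wasserstein GAN variant discussed for claim 3: a Wasserstein GAN replaces the standard discriminator with an unbounded "critic" whose objective estimates the Wasserstein-1 distance between real and generated distributions. A hedged, numpy-only sketch of the critic objective with hypothetical data (no recurrence, weight clipping, or gradient penalty shown):

```python
import numpy as np

rng = np.random.default_rng(3)

def critic(x, v):
    # A Wasserstein "critic" outputs an unbounded real-valued score;
    # unlike a standard GAN discriminator, there is no sigmoid.
    return x @ v

v = rng.normal(size=3)                      # toy linear critic parameters
real = rng.normal(loc=1.0, size=(64, 3))    # stand-in real samples
fake = rng.normal(loc=-1.0, size=(64, 3))   # stand-in generated samples

# The critic maximizes E[critic(real)] - E[critic(fake)], an estimate of the
# (scaled) Wasserstein-1 distance; its training loss is the negative of that.
w_estimate = critic(real, v).mean() - critic(fake, v).mean()
critic_loss = -w_estimate
print(np.isfinite(critic_loss))
```

A "Wasserstein recurrent GAN" as claimed would presumably use recurrent generator/critic networks over sequential network data, which this scalar sketch does not attempt.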
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Soni, Van Aert and Dalli before him/her, to modify Soni to use a Wasserstein recurrent GAN model, because that would provide a good, versatile and/or efficient image restoration of electron microscopy images, i.e. to reduce and/or remove image artefacts due to various sources of noise and image distortion, e.g. in an efficient and widely applicable manner, as taught by Van Aert (paragraph 0015), or because the optimal generator architecture of the CausalGAN is able to sample with the characteristics as defined in the causal controller, as taught by Dalli (paragraph 0013).

As to Claim 7, Soni does not explicitly teach the claimed limitation "wherein utilizing the random process sample with the random process to generate the simulated network data comprises: utilizing the random process sample and the real network data to generate two Poisson distributions; and superposing the two Poisson distributions to generate the simulated network data". Dalli teaches that XAEDs and XGANs may be utilized in the encoding, decoding, modelling, reproduction and generation of various different arbitrary numeric and non-numeric data distributions, including Normal, Binomial, Bernoulli, Hypergeometric, Beta-Binomial, Discrete Uniform, Poisson, Negative Binomial, Geometric, Lognormal, Beta, Gamma, Uniform, Exponential, Weibull, Double Exponential, Chi-Squared, Cauchy, Fisher-Snedecor, and Student T distributions, using the appropriate parameters applicable for the distribution or set of distributions chosen to be used in a specific implementation (paragraphs 0091-0100).
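The superposition recited in claim 7 has a clean probabilistic reading: the superposition of two independent Poisson processes with rates λ1 and λ2 is itself a Poisson process with rate λ1 + λ2, so per-interval counts can simply be summed. A hedged numpy sketch with hypothetical rates (not values from the application):

```python
import numpy as np

rng = np.random.default_rng(4)

lam1, lam2, n = 3.0, 5.0, 200_000   # hypothetical rates for the two distributions

# Draw per-interval counts from two independent Poisson distributions
# (e.g. two simulated traffic sources) and superpose them by summing.
counts1 = rng.poisson(lam1, size=n)
counts2 = rng.poisson(lam2, size=n)
superposed = counts1 + counts2      # distributed as Poisson(lam1 + lam2)

print(superposed.mean())            # empirical mean, close to lam1 + lam2
```

The sample mean of the superposed counts converges to λ1 + λ2, which is one way such simulated counts could stand in for network event data.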
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Soni, Van Aert and Dalli before him/her, to modify Soni to generate two Poisson distributions, because that would provide a good, versatile and/or efficient image restoration of electron microscopy images, i.e. to reduce and/or remove image artefacts due to various sources of noise and image distortion, e.g. in an efficient and widely applicable manner, as taught by Van Aert (paragraph 0015), or because the optimal generator architecture of the CausalGAN is able to sample with the characteristics as defined in the causal controller, as taught by Dalli (paragraph 0013).

As to claim 9, rejected under 35 U.S.C. 103(a), the limitations therein have substantially the same scope as claim 7. In addition, Soni teaches that a computer readable storage device is provided; the computer readable storage device comprises instructions that, in response to execution, cause a system comprising a processor to perform operations (paragraph 0006). Therefore, this claim is rejected for at least the same reasons as claim 7.

As to Claim 15, Soni teaches the claimed limitations:

"A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to:" as a computer readable storage device is provided. The computer readable storage device comprises instructions that, in response to execution, cause a system comprising a processor to perform operations (paragraph 0006).

"receive real network data associated with a network" as the novel generative adversarial network can include, during a training phase, a multi-channel generator that produces multi-channel data and a discriminator that receives the multi-channel data to classify real data or synthetic data (paragraph 0021).

"receive a random latent vector and a random process sample" as the generative modeling component (e.g., the multi-channel generator component of the generative modeling component) can receive one or more latent random variables. The one or more latent random variables can be sampled from a data distribution of random variables, and the one or more latent random variables can be a vector of random variables (paragraph 0025).

"utilizing the random latent vector with a generative adversarial network (GAN) model to generate synthetic network data" as the multi-channel generator component can employ a deep neural network such as a convolutional neural network to generate the synthetic multi-channel data, and the convolutional neural network can be a spring network of convolutional layers (paragraph 0027). The training component can employ the first predicted label or the second predicted label for the imaging data to train a generative artificial intelligence model; alternatively, the training component can employ the first predicted label or the second predicted label for the synthetic multi-channel data to train the generative artificial intelligence model (paragraph 0030).

"training the GAN model with the real network data and the synthetic network data to generate a trained GAN model" as the segmentation component can generate first segmentation data indicative of a segmentation for a real image included in the imaging data (e.g., a real image included in a first data channel of the imaging data). The first segmentation data can be included in a data channel of the imaging data. Additionally, the segmentation component can generate second segmentation data indicative of a remaining segmentation for the real image included in the imaging data (paragraph 0034).
Soni does not explicitly teach the claimed limitation “utilizing the random process sample with a random process to generate simulated network data; applying weights to the real network data, the synthetic network data, and the simulated network data to generate weighted real network data, weighted synthetic network data, and weighted simulated network data; combine weights to the real network data, the weighted synthetic network data, and the weighted simulated network data to generate interpolated network data; and perform one or more actions based on the interpolated network data”. Van Aert teaches a method generating of a plurality of training image pairs may comprise creating distorted images that include combinations of simulated noise and/or artefacts representative of different noise and/or artefact sources of varying severity and relative weights (paragraph 0044; claim 11). Since experimentally only data can be collected that are at least to some extent distorted and/or noisy, the synthetic generation of undistorted and distorted images offers many aspects. It is an aspect that the ANN can be efficiently trained with a large set of realistic simulated data, that can cover a large space of different types and relative weights of noise and artefacts, different imaged specimens, and/or different acquisition settings and/or environments, the ANN can be easily trained with a large set of simulated data images per trained network that thus may cover a very wide range of possible use cases and scenarios (paragraphs 0100-0104). The numerical parameter ranges that are applied for the data generation may be fine-tuned based on analyzing a large number of high quality simulations of TEM images for different specimens and microscope settings. 
This has the aspect that, once suitable parameter ranges (or sampling distributions) are determined for each atomic number Z from detailed simulations of electron microscopy image formation and electron-specimen interaction processes, sufficiently realistic images can be generated at a very low computational cost by random sampling of the parameters (paragraphs 0121, 0123). The (intermediate) distorted image may then be created by using bicubic interpolation and evaluating on the non-regular grid, which is built by adding the positions of the regular grid and the generated displacements. Thus, the distorted image under construction may be updated by the (X- and Y-) jitter processes: x ← SJ(y) (paragraph 0155).

Interpolation distortions can occur as the result of a user interaction, e.g. by applying a transformation function to the image as received by the user/system before restoration, i.e. the distorted image. Such a transformation might be needed to obtain suitable input for a further post-processing step, for a better visualization of an area of interest, and/or for other case-specific goals of the user or the post-processing flow. Such an interpolation distortion may be modelled by applying a random transformation, e.g. a random linear transformation matrix, to the training image pair (paragraphs 0167-0168).

In summary, training data may be provided by generating undistorted synthetic images and creating distorted images, e.g. as a realization of stochastic variables. Post-processing distortions may be added (to one or more training samples) by applying one or more simulated post-processing distortions to the undistorted and/or distorted image of the pair, as discussed hereinabove. For example, an interpolation distortion may be simulated by applying a random transformation, e.g. a random linear transformation matrix, to the training image pair. Gaussian blurring may be simulated by convolution of the distorted image with a 2D-Gaussian function (paragraph 0172).
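The last point above, simulating Gaussian blurring by convolving an image with a 2D Gaussian function, can be illustrated with a minimal sketch. This is a toy with a naive "valid"-mode convolution and arbitrary kernel parameters, not code from Van Aert:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(image, kernel):
    """Naive 'valid'-mode 2D convolution (the kernel is symmetric,
    so no flip is needed)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((9, 9))
img[4, 4] = 1.0                       # a single bright pixel
blurred = blur(img, gaussian_kernel())
print(blurred.shape)                  # (5, 5)
```

A production pipeline would typically use scipy.ndimage.gaussian_filter or an FFT-based convolution rather than this quadratic loop.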
Soni does not explicitly teach the claimed limitation “wherein the GAN model is a Wasserstein recurrent GAN model”. Dalli teaches that the method includes two architectures: Causal Controller and CausalGAN. The Causal Controller architecture is a variation of the Wasserstein GAN, and it is used to control the distribution of the images to be sampled from when intervened or conditioned on a set of image labels (paragraph 0013).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Soni, Van Aert and Dalli before him/her, to modify Soni to use a Wasserstein recurrent GAN model, because that would provide good, versatile and/or efficient image restoration of electron microscopy images, i.e. reduce and/or remove image artefacts due to various sources of noise and image distortion, e.g. in an efficient and widely applicable manner, as taught by Van Aert (paragraph 0015), or would provide the optimal generator architecture of the CausalGAN, which is able to sample with the characteristics as defined in the Causal Controller, as taught by Dalli (paragraph 0013).

Claims 16-18 are rejected under 35 U.S.C. 103(a); the limitations therein have substantially the same scope as claims 4-6. In addition, Soni teaches that the systems, apparatuses or processes explained in this disclosure can constitute machine-executable component(s) embodied within a machine, e.g., embodied in one or more computer readable mediums (or media) associated with one or more machines (paragraph 0023). Therefore, these claims are rejected for at least the same reasons as claims 4-6.

Claim 19 is rejected under 35 U.S.C. 103(a); the limitations therein have substantially the same scope as claim 7. In addition, Soni teaches that a computer readable storage device is provided.
The computer readable storage device comprises instructions that, in response to execution, cause a system comprising a processor to perform operations (paragraph 0006). Therefore, this claim is rejected for at least the same reasons as claim 7.

Claim 20 is rejected under 35 U.S.C. 103(a); the limitations therein have substantially the same scope as claim 10. In addition, Soni teaches that the systems, apparatuses or processes explained in this disclosure can constitute machine-executable component(s) embodied within a machine, e.g., embodied in one or more computer readable mediums (or media) associated with one or more machines (paragraph 0023). Therefore, this claim is rejected for at least the same reasons as claim 10.

Examiner’s Note

The Examiner has cited particular column/paragraph and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, the applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. In the case of amending the claimed invention, the applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution.

MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive.
See MPEP § 714.” Amendments not pointing to specific support in the disclosure may be deemed not to comply with the provisions of 37 C.F.R. 1.121(b), (c), (d), and (h), and may therefore be held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to James Hwa, whose telephone number is 571-270-1285 and whose email address is james.hwa@uspto.gov. The examiner can normally be reached between 9:00 am and 5:30 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia, can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

01/24/2026
/SHYUE JIUNN HWA/
Primary Examiner, Art Unit 2156

Prosecution Timeline

Jun 12, 2023
Application Filed
Jan 27, 2026
Non-Final Rejection — §103, §112
Mar 24, 2026
Interview Requested
Apr 10, 2026
Examiner Interview Summary
Apr 10, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602571
NETWORK PARTITIONING FOR SENSOR-BASED SYSTEMS
2y 5m to grant Granted Apr 14, 2026
Patent 12596683
LOG-STRUCTURED FILE SYSTEM FOR A ZONED BLOCK MEMORY DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12596700
CONCURRENT OPTIMISTIC TRANSACTIONS FOR TABLES WITH DELETION VECTORS
2y 5m to grant Granted Apr 07, 2026
Patent 12566750
SYSTEMS AND METHODS OF FACILITATING AN INFORMED CONSENSUS-DRIVEN DISCUSSION
2y 5m to grant Granted Mar 03, 2026
Patent 12561580
GENERATING ENRICHED SCENES USING SCENE GRAPHS
2y 5m to grant Granted Feb 24, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+39.0%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 852 resolved cases by this examiner. Grant probability derived from career allow rate.
