Prosecution Insights
Last updated: April 19, 2026
Application No. 16/394,493

AUTONOMOUS MODIFICATION OF DATA

Final Rejection: §101, §103
Filed: Apr 25, 2019
Examiner: KWON, JUN
Art Unit: 2127
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 8 (Final)

Grant Probability: 38% (At Risk)
OA Rounds: 9-10
To Grant: 4y 3m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 38% (26 granted / 68 resolved; -16.8% vs TC avg)
Interview Lift: +46.2% among resolved cases with interview
Avg Prosecution: 4y 3m typical timeline; 34 applications currently pending
Total Applications: 102 across all art units

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 68 resolved cases.
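As a sanity check, the headline allow rate above follows directly from the raw counts. A minimal sketch of the dashboard arithmetic; the implied Tech Center average is an inference from the displayed -16.8% delta, not a figure stated in the report:

```python
# Reproducing the dashboard arithmetic from the raw counts shown above.
granted, resolved = 26, 68

allow_rate = granted / resolved      # career allowance rate
tc_average = allow_rate + 0.168      # implied by the "-16.8% vs TC avg" delta

print(f"{allow_rate:.1%}")           # 38.2%, displayed as 38%
print(f"{tc_average:.1%}")           # 55.0% implied TC average
```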

Office Action

Grounds: §101, §103
Detailed Action

This Office Action is in response to the remarks entered on 10/20/2025. Claim 2 is cancelled. Claims 1 and 3-20 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 3-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, 2A Prong 1:

applying at least one random modification to the base data sample, (a mental process of evaluation: modifying a pattern by applying a random modification can be done in the human mind with the aid of pen and paper)

wherein the discriminator receives as input dataset pairs of datasets, the dataset pairs comprising each a prediction output of the generator based on a base data sample and the corresponding modified data sample, thereby optimizing a joint loss function for the generator and the discriminator, wherein the joint loss function is a Wasserstein loss function; (mathematical concept: per spec [0026-0027], a joint loss function measures a content loss between a base data sample and a modified data sample, which is directed to a mathematical concept)

predicting an output dataset for unknown data samples as input for the generator without the discriminator (a mental process of judgment, as it merely discloses forecasting the output dataset based on unknown data, which can be performed in the human mind); generating,

2A Prong 2: This judicial exception is not integrated into a practical application.
A computer-implemented method for modifying patterns in datasets, the method using a generative adversarial network comprising a generator and a discriminator, (this recites a method using a GAN, which is merely applying it, MPEP 2106.05(f)) the method comprising:

providing pairs of data samples, the pairs comprising each a base data sample and a modified data sample; (insignificant extra-solution activity, MPEP 2106.05(g)(iii), of receiving data/data gathering)

training, without the use of labeled or annotated data, the generator for building a model of the generator using an adversarial training method that inputs both samples of the pairs of data samples into the generator, (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

generating, using the trained generator (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

A computer-implemented method for modifying patterns in datasets, the method using a generative adversarial network comprising a generator and a discriminator, (this recites a method using a GAN, which is merely applying it, MPEP 2106.05(f)) the method comprising:

providing pairs of data samples, the pairs comprising each a base data sample and a modified data sample; (was indicated as an insignificant extra-solution activity, MPEP 2106.05(g)(iii), thus re-evaluated as a well-understood, routine, and conventional activity, MPEP 2106.05(d)(II)(iv), of gathering statistics)

training, without the use of labeled or annotated data, the generator for building a model of the generator using an adversarial training method that inputs both samples of the pairs of data samples into the generator, (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

generating, using the trained generator (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

Regarding claim 3, 2A Prong 1: Incorporates the rejection of claim 1.

2A Prong 2: further comprising training of different models for the generator network using the adversarial training method and using the pairs of data samples as input, (mere instructions to apply an exception, MPEP 2106.05(f), as it discloses using a generator network) wherein the modified data sample are modified according to a different aspect. (a field of use and technological environment, MPEP 2106.05(h))

2B: further comprising training of different models for the generator network using the adversarial training method and using the pairs of data samples as input, (mere instructions to apply an exception, MPEP 2106.05(f), as it discloses using a generator network) wherein the modified data sample are modified according to a different aspect. (a field of use and technological environment, MPEP 2106.05(h))

Regarding claim 4, 2A Prong 1: Incorporates the rejection of claim 1.

2A Prong 2: wherein the generator is a neural network having as many output nodes as input nodes, and having less hidden layer nodes than the number of input nodes. (a field of use and technological environment, MPEP 2106.05(h), as it merely defines the number of nodes of the generator)

2B: wherein the generator is a neural network having as many output nodes as input nodes, and having less hidden layer nodes than the number of input nodes. (a field of use and technological environment, MPEP 2106.05(h), as it merely defines the number of nodes of the generator)
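For orientation only: the generator shape recited in claim 4 (as many output nodes as input nodes, fewer hidden-layer nodes) is an autoencoder-style bottleneck. A minimal NumPy sketch; the layer sizes, activation, and initialization are illustrative assumptions, not values from the application:

```python
import numpy as np

# Autoencoder-style generator per claim 4: output width equals input
# width, and the hidden layer is narrower than the input layer.
rng = np.random.default_rng(0)

n_in = 64                      # input nodes
n_hidden = 16                  # fewer hidden nodes than input nodes
n_out = n_in                   # as many output nodes as input nodes

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))

def generator(x):
    h = np.tanh(x @ W1)        # compress through the bottleneck
    return h @ W2              # expand back to the input width

x = rng.normal(size=(1, n_in))
y = generator(x)
print(y.shape)                 # (1, 64)
```

The claim 5 discriminator discussed next would analogously take n_out inputs and produce two output nodes.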
Regarding claim 5, 2A Prong 1: Incorporates the rejection of claim 1.

2A Prong 2: wherein the discriminator is a neural network having as many input nodes as the generator has output nodes and having two output nodes. (a field of use and technological environment, MPEP 2106.05(h), as it merely defines the number of nodes of the discriminator)

2B: wherein the discriminator is a neural network having as many input nodes as the generator has output nodes and having two output nodes. (a field of use and technological environment, MPEP 2106.05(h), as it merely defines the number of nodes of the discriminator)

Regarding claim 6, 2A Prong 1: Incorporates the rejection of claim 1.

2A Prong 2: wherein the discriminator is a PatchGAN. (a field of use and technological environment, MPEP 2106.05(h))

2B: wherein the discriminator is a PatchGAN. (a field of use and technological environment, MPEP 2106.05(h))

Regarding claim 7, 2A Prong 1: Incorporates the rejection of claim 1.

2A Prong 2: wherein the joint loss function is a weighted combination of loss functions. (a field of use and technological environment, MPEP 2106.05(h))

2B: wherein the joint loss function is a weighted combination of loss functions. (a field of use and technological environment, MPEP 2106.05(h))

Regarding claim 8, 2A Prong 1: Incorporates the rejection of claim 1.

2A Prong 2: wherein the loss function is related to content loss of the base data sample. (a field of use and technological environment, MPEP 2106.05(h)) and wherein the content loss is determined using a feature map of a pre-trained neural network. (mere instructions to apply an exception, MPEP 2106.05(f), as it discloses using a neural network)

2B: wherein the loss function is related to content loss of the base data sample. (a field of use and technological environment, MPEP 2106.05(h)) and wherein the content loss is determined using a feature map of a pre-trained neural network. (mere instructions to apply an exception, MPEP 2106.05(f), as it discloses using a neural network)

Regarding claim 9, 2A Prong 1: Incorporates the rejection of claim 1.

2A Prong 2: wherein the randomly modified data sample comprises one or more of a randomly modified dashed line of the first data pattern to a continuous line, a randomly modified color of the first pattern, a randomly removed text pattern from the first pattern, and a randomly removed line pattern from the first pattern, wherein the random modification leaves at least one pattern of the plurality of patterns of the modified data sample unmodified. (a field of use and technological environment, MPEP 2106.05(h), as it merely defines the way of modifying the data sample)

2B: wherein the randomly modified data sample comprises one or more of a randomly modified dashed line of the first data pattern to a continuous line, a randomly modified color of the first pattern, a randomly removed text pattern from the first pattern, and a randomly removed line pattern from the first pattern, wherein the random modification leaves at least one pattern of the plurality of patterns of the modified data sample unmodified. (a field of use and technological environment, MPEP 2106.05(h), as it merely defines the way of modifying the data sample)

Regarding claim 10, 2A Prong 1: determining at least one pattern of the images to be modified (a mental process of evaluation, as it merely discloses determining which image to modify, which can be performed in the human mind) and relating one out of the set of images and a related image with the at least one pattern of the images defining one of the pairs comprising the base data sample and the modified data sample. (a mental process of evaluation, as it merely discloses figuring out the relationship between the set of images, which can be performed in the human mind)
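The pair construction described in claims 9-10 can be sketched in a few lines. A hedged illustration only: representing a data sample as a list of named patterns, and removing exactly one pattern, are assumptions made for clarity, not the application's implementation:

```python
import random

# Build a (base, modified) training pair by randomly removing one
# pattern with a random number generator, leaving the rest unmodified.
def make_pair(base_patterns, rng):
    idx = rng.randrange(len(base_patterns))        # pattern to remove
    modified = [p for i, p in enumerate(base_patterns) if i != idx]
    return list(base_patterns), modified           # (base, modified) pair

rng = random.Random(42)
base, modified = make_pair(["dashed_line", "text_label", "color_fill"], rng)
print(len(base), len(modified))                    # 3 2
```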
2A Prong 2: wherein the providing pairs of data samples comprises: providing a set of images with patterns, (an insignificant extra-solution activity of mere data gathering, MPEP 2106.05(g)(iii)) randomly modifying the at least one pattern of the images using a random number generator, (an insignificant extra-solution activity of mere data gathering, MPEP 2106.05(g)(iii))

2B: wherein the providing pairs of data samples comprises: providing a set of images with patterns, (indicated as an insignificant extra-solution activity of mere data gathering, MPEP 2106.05(g), thus re-evaluated as a well-understood, routine, and conventional activity, MPEP 2106.05(d)(II)(iv), of gathering statistics) randomly modifying the at least one pattern of the images using a random number generator, (indicated as an insignificant extra-solution activity of mere data gathering, MPEP 2106.05(g)(iii), thus re-evaluated as a well-understood, routine, and conventional activity, MPEP 2106.05(d), according to Berkheimer evidence: [Hsu, US-20190252073-A1, 0110] "The network architectures of the generator 560 and discriminator 570 are illustrated in FIG. 6 as generator 610 and discriminator 620. A traditional GAN is used to randomly generate arbitrary realism images from input noise data.")

Regarding claim 11, 2A Prong 1: Incorporates the rejection of claim 1.

2A Prong 2: wherein the training of the generative adversarial network is terminated if a result of the joint loss function is smaller than a relative threshold value when comparing the result of the current iteration with a previous iteration. (mere instructions to apply an exception using a computer, MPEP 2106.05(f), as it merely discloses stopping the training of the network based on the loss function result)

2B: wherein the training of the generative adversarial network is terminated if a result of the joint loss function is smaller than a relative threshold value when comparing the result of the current iteration with a previous iteration. (mere instructions to apply an exception using a computer, MPEP 2106.05(f), as it merely discloses stopping the training of the network based on the loss function result)

Regarding claim 12, 2A Prong 1: Incorporates the rejection of claim 1.

2A Prong 2: wherein the training operation further comprises training the generator for building the model of the generator based on at least one of the base data samples, wherein the at least one of the base data samples lacks a corresponding modified data sample. (mere instructions to apply an exception using a computer, MPEP 2106.05(f), as it merely recites training a neural network using the data sample)

2B: wherein the training operation further comprises training the generator for building the model of the generator based on at least one of the base data samples, wherein the at least one of the base data samples lacks a corresponding modified data sample. (mere instructions to apply an exception using a computer, MPEP 2106.05(f), as it merely recites training a neural network using the data sample)

Regarding claim 13, 2A Prong 1: predicting an output dataset for unknown data samples as input for the generator without the discriminator (a mental process of judgment, as it merely discloses forecasting the output dataset based on unknown data, which can be performed in the human mind); generating,

2A Prong 2: This judicial exception is not integrated into a practical application.

A machine-learning system for modifying patterns in datasets using a generative adversarial network, comprising a generator network system and a discriminator network system, the machine-learning system comprising a memory and at least one processor, coupled to said memory, and operative to perform operations comprising: (this recites a system using a GAN, which is merely applying it, MPEP 2106.05(f))
the operations comprising: providing pairs of data samples, the pairs comprising each a base data sample and a modified data sample; (insignificant extra-solution activity, MPEP 2106.05(g)(iii), of receiving data/data gathering)

controlling a training of the generator network system for building a model of the generator network system using an adversarial training method that inputs both samples of the pairs of data samples into the generator, wherein the base data sample has a plurality of patterns, wherein the modified data sample has a first pattern of the plurality of patterns that is randomly modified in comparison to the base data sample, wherein the modified data sample has at least one pattern of the plurality of patterns that is unmodified in comparison to the base data sample, wherein the modified pattern is determined by applying at least one random modification to the base data sample, wherein the discriminator network system receives as input dataset pairs of datasets, the dataset pairs comprising each a prediction output of the generator based on a base data sample and the corresponding modified data sample, thereby optimizing a joint loss function for the generator and the discriminator, wherein the joint loss function is a Wasserstein loss function; (directed to an insignificant extra-solution activity, MPEP 2106.05(g), of training a generative adversarial neural network, as the limitation merely provides generic/standard GAN techniques for training the generator)

generating, using the trained generator (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

A machine-learning system for modifying patterns in datasets using a generative adversarial network, comprising a generator network system and a discriminator network system, the machine-learning system comprising a memory and at least one processor, coupled to said memory, and operative to perform operations comprising: (this recites a system using a GAN, which is merely applying it, MPEP 2106.05(f))

providing pairs of data samples, the pairs comprising each a base data sample and a modified data sample; (was indicated as an insignificant extra-solution activity, MPEP 2106.05(g), thus re-evaluated as a well-understood, routine, and conventional activity, MPEP 2106.05(d)(II)(iv), of gathering statistics)

controlling a training of the generator network system for building a model of the generator network system using an adversarial training method that inputs both samples of the pairs of data samples into the generator, wherein the base data sample has a plurality of patterns, wherein the modified data sample has a first pattern of the plurality of patterns that is randomly modified in comparison to the base data sample, wherein the modified data sample has at least one pattern of the plurality of patterns that is unmodified in comparison to the base data sample, wherein the modified pattern is determined by applying at least one random modification to the base data sample, wherein the discriminator network system receives as input dataset pairs of datasets, the dataset pairs comprising each a prediction output of the generator based on a base data sample and the corresponding modified data sample, thereby optimizing a joint loss function for the generator and the discriminator, wherein the joint loss function is a Wasserstein loss function; (directed to an insignificant extra-solution activity, MPEP 2106.05(g), thus re-evaluated as a well-understood, routine, and conventional activity, MPEP 2106.05(d), according to Berkheimer evidence: [Hsu, US-20190252073-A1, 0110] "The network architectures of the generator 560 and discriminator 570 are illustrated in FIG. 6 as generator 610 and discriminator 620. A traditional GAN is used to randomly generate arbitrary realism images from input noise data.")

generating, using the trained generator (mere instructions to apply an exception using a computer, MPEP 2106.05(f))

Regarding claim 14, 2A Prong 1: Incorporates the rejection of claim 13.

2A Prong 2: wherein the system trains different models for the generator network using the adversarial training method and using the pairs of data samples as input, (mere instructions to apply an exception, MPEP 2106.05(f), as it discloses using a generator network) wherein the modified data sample are modified according to a different aspect. (a field of use and technological environment, MPEP 2106.05(h))

2B: wherein the system trains different models for the generator network using the adversarial training method and using the pairs of data samples as input, (mere instructions to apply an exception, MPEP 2106.05(f), as it discloses using a generator network) wherein the modified data sample are modified according to a different aspect. (a field of use and technological environment, MPEP 2106.05(h))

Regarding claim 15, 2A Prong 1: Incorporates the rejection of claim 13.

2A Prong 2: wherein the generator network system is a neural network having as many output nodes as input nodes, and having less hidden layer nodes than the number of input nodes, or wherein the discriminator network system is a neural network having as many input nodes as the generator has output nodes and having two output nodes. (a field of use and technological environment, MPEP 2106.05(h), as it merely defines the number of nodes of the generator)
2B: wherein the generator network system is a neural network having as many output nodes as input nodes, and having less hidden layer nodes than the number of input nodes, or wherein the discriminator network system is a neural network having as many input nodes as the generator has output nodes and having two output nodes. (a field of use and technological environment, MPEP 2106.05(h), as it merely defines the number of nodes of the generator)

Regarding claim 16, 2A Prong 1: Incorporates the rejection of claim 13.

2A Prong 2: wherein the discriminator is a PatchGAN system. (a field of use and technological environment, MPEP 2106.05(h))

2B: wherein the discriminator is a PatchGAN system. (a field of use and technological environment, MPEP 2106.05(h))

Regarding claim 17, 2A Prong 1: Incorporates the rejection of claim 13.

2A Prong 2: wherein the loss function is related to content loss of the base data sample. (a field of use and technological environment, MPEP 2106.05(h)) and wherein the content loss is determined using a feature map of a pre-trained neural network. (mere instructions to apply an exception, MPEP 2106.05(f), as it discloses using a neural network)

2B: wherein the loss function is related to content loss of the base data sample. (a field of use and technological environment, MPEP 2106.05(h)) and wherein the content loss is determined using a feature map of a pre-trained neural network. (mere instructions to apply an exception, MPEP 2106.05(f), as it discloses using a neural network)

Regarding claim 18, 2A Prong 1: Incorporates the rejection of claim 13.

2A Prong 2: wherein the providing pairs of data samples comprises providing a set of images with patterns, determining at least one pattern of the images to be modified, randomly modifying the at least one pattern of the images using a random number generator, and relating one out of the set of images and a related image with the at least one pattern of the images defining one of the pairs comprising the base data sample and the modified data sample. (a field of use and technological environment, MPEP 2106.05(h), as it merely defines the way of modifying the data sample)

2B: wherein the providing pairs of data samples comprises providing a set of images with patterns, determining at least one pattern of the images to be modified, randomly modifying the at least one pattern of the images using a random number generator, and relating one out of the set of images and a related image with the at least one pattern of the images defining one of the pairs comprising the base data sample and the modified data sample. (a field of use and technological environment, MPEP 2106.05(h), as it merely defines the way of modifying the data sample)

Regarding claim 19, 2A Prong 1: Incorporates the rejection of claim 13.

2A Prong 2: wherein the training operation further comprises training the generator for building the model of the generator based on at least one of the base data samples, wherein the at least one of the base data samples lacks a corresponding modified data sample. (mere instructions to apply an exception using a computer, MPEP 2106.05(f), as it merely recites training a neural network using the data sample)

2B: wherein the training operation further comprises training the generator for building the model of the generator based on at least one of the base data samples, wherein the at least one of the base data samples lacks a corresponding modified data sample.
(mere instructions to apply an exception using a computer, MPEP 2106.05(f), as it merely recites training a neural network using the data sample)

Regarding claim 20, 2A Prong 1: Claim 20 is a computer program product claim having similar limitations to claim 1 above. Therefore, the claim is rejected under the same rationale as claim 1 above.

2A Prong 2: A computer program product for modifying patterns in datasets using a generative adversarial network comprising a generator network system and a discriminator network system, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, said program instructions being executable by one or more computing systems or controllers to cause said one or more computing systems to (this recites a computer program product using a GAN, which is merely applying it, MPEP 2106.05(f))

2B: A computer program product for modifying patterns in datasets using a generative adversarial network comprising a generator network system and a discriminator network system, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, said program instructions being executable by one or more computing systems or controllers to cause said one or more computing systems to (this recites a computer program product using a GAN, which is merely applying it, MPEP 2106.05(f))

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 7, 11-13, and 19-20 are rejected under 35 U.S.C. 103 over Shrivastava (US 11475276 B1) in view of Ceccaldi (US 20190046068 A1) and further in view of Jiang (US 20200042613 A1).

Regarding claim 1, Shrivastava teaches a computer-implemented method for modifying patterns in datasets, the method using a generative adversarial network comprising a generator and a discriminator, the method comprising ([Shrivastava, column 12, lines 61-66] "Therefore, to help make the refined synthetic images 430 more realistic, an adversarial cost term 450 may be added to the overall objective, according to some embodiments. For example, in one embodiments, a generative network G.sub.θ, such as generator 120, and a discriminative network D.sub.ϕ, such as discriminator 130, may both be learned." [Shrivastava, column 6, lines 20-42] "Rather than generate images from scratch (e.g., the focus of most traditional generative models) a generative model, such as generator 120, may be coupled with a simulator or synthesizer, such as synthesizer 110 … Synthesizer 110 may generate synthetic data in any of various ways, according to different embodiments. Synthesizer 110 may be configured to generate synthetic data based on a set of training data including labeled real images in which the labels of the real images may be sufficient to generate corresponding synthetic images (i.e., synthetic images that look very similar to corresponding real images in shape, pose and/or appearance) …" The real images correspond to the base data, and the synthetic image corresponds to the modified data. The synthesizer modifies the real data.
The generator generates a refined synthetic image that goes into the discriminator based on the real image and the synthetic image pair):

providing pairs of data samples, the pairs comprising each a base data sample and a modified data sample ([Shrivastava, column 6, lines 20-42] "Rather than generate images from scratch (e.g., the focus of most traditional generative models) a generative model, such as generator 120, may be coupled with a simulator or synthesizer, such as synthesizer 110, thereby allowing the refinement of synthetic data (e.g., to make them more realistic) by utilizing both a synthesizer and adversarial networks (e.g., generator 120 and discriminator 130), according to some embodiments. Synthesizer 110 may generate synthetic data in any of various ways, according to different embodiments. Synthesizer 110 may be configured to generate synthetic data based on a set of training data including labeled real images in which the labels of the real images may be sufficient to generate corresponding synthetic images (i.e., synthetic images that look very similar to corresponding real images in shape, pose and/or appearance). For example, in the case of depth images of a human hand, the position, shape and bone angles of the hand may be the same (or similar) for each pair of synthetic and real images. Additionally, in some embodiments, synthesizer 110 may be configured to generate an image from an input label vector s. Given this label vector, synthesizer 110 may generate a corresponding synthetic image, according to one embodiment." The real images correspond to the base data, and the synthetic image corresponds to the modified data. The generator generates a refined synthetic image that goes into the discriminator based on the real image and the synthetic image pair.

[Shrivastava, column 5, lines 17-20] "discriminative network configured to receive both refined synthetic data from the generative network and real data (e.g., from a set of training data) and learns to distinguish between the two" teaches that the discriminator receives both real (base) data and refined synthetic (modified) data);

training, …, wherein the joint loss function is a Wasserstein loss function; ([Shrivastava, column 6, lines 20-42] The real images are the base data, and the synthetic image is the modified data. The generator generates a refined synthetic image that goes into the discriminator based on the real image and the synthetic image pair, as further disclosed in [Shrivastava, column 12, lines 48-54]. The generator is trained based on a set of training data including real images and synthetic images, as further disclosed in [Shrivastava, column 12, lines 48-54]. [Shrivastava, column 5, lines 13-20] teaches that the discriminator receives both real (base) data and refined synthetic (modified) data. [Shrivastava, column 13, lines 31-44] "As shown in block 510, a mini-batch of images including both real and synthetic may be loaded … The total loss for a mini-batch may, in some embodiments, be computed as the average of L.sub.D.sup.i over the mini-batch. After computing the current generative function G.sub.θ as in block 520, the discriminative parameters may be updated as in block 530 …" This total loss corresponds to the joint loss function for the generator and discriminator);

generating, using the trained generator, an additional text-based data sample. ([Shrivastava, column 7, lines 30-34] The method is further evaluated on a dataset including text images with 2383 different fonts for the task of font recognition. The dataset contains both labeled synthetic data and partially labeled real-world data. [Shrivastava, column 6, lines 20-42] The real images are the base data, and the synthetic image is the modified data.
The generator generates refined synthetic image that goes into the discriminator based on the real image and the synthetic image pair which are further disclosed in [Shrivastava, col 12, line 48-54]) Shrivastava does not specifically disclose: training without the use of labeled or annotated data; a first pattern of the plurality of patterns that is randomly modified in comparison to the base data sample, wherein the random modification comprises a randomly removed pattern from the first pattern; wherein the modified pattern is determined by applying at least one random modification to the base data sample; predicting an output dataset for unknown data samples as input for the generator without the discriminator, wherein the joint loss function is a Wasserstein loss function; Ceccaldi teaches: training without the use of labeled or annotated data; ([Ceccaldi, 0045] discloses training using the loss function from both the decoder network and an adversarial network trained to discriminate between features of the encoder network 401. The training process does not require any labeled input data) predicting an output dataset for unknown data samples as input for the generator without the discriminator ([Ceccaldi, 0006] “A patient is scanned by the magnetic resonance imaging system to acquire magnetic resonance data. The magnetic resonance data is input to a machine learnt generator network trained to extract features from input magnetic resonance data and reconstruct domain independent images using the extracted features”, generative network reconstruct the original image (making prediction of original image) using extracted feature) wherein the joint loss function is a Wasserstein loss function ([Ceccaldi, 0051] “During the training process, to avoid or limit one or more of the above referenced issues, the discriminator network 411 provides a gradient calculated using a Wasserstein distance. 
The Wasserstein distance is a measure of the differences between probability distributions”). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Shrivastava and Ceccaldi, to use Ceccaldi's method of predicting an output dataset for unknown data samples as input for the generator without the discriminator to implement the autonomous data modification system of Shrivastava. The suggestion and/or motivation for doing so is to improve the performance of the generator, as testing the generator independently is necessary to improve its performance. Shrivastava in view of Ceccaldi does not specifically disclose: a first pattern of the plurality of patterns that is randomly modified in comparison to the base data sample, wherein the random modification comprises a randomly removed pattern from the first pattern; and wherein the modified pattern is determined by applying at least one random modification to the base data sample. Jiang teaches: a first pattern of the plurality of patterns that is randomly modified in comparison to the base data sample, wherein the random modification comprises a randomly removed pattern from the first pattern ([Jiang, 0081] FIG. 8 includes noisy message generator 810 that processes a message and generates a noisy message by modifying the original message. Any appropriate techniques may be used to generate a noisy message from a message. The noisy message may be created, for example, by removing one or more words or characters, … or any combination of the foregoing. The modifications to create the noisy message may be performed randomly according to a probability distribution.
[Jiang, 0084] discloses a decoder neural network component 840 receiving the message feature vector for the noisy message and the original, non-noisy message (e.g., receiving word embeddings of the non-noisy message or some other indication of the words of the non-noisy message). The noisy message corresponds to the randomly modified pattern, and the non-noisy message corresponds to the base data sample.); and wherein the modified pattern is determined by applying at least one random modification to the base data sample ([Jiang, 0081] FIG. 8 includes noisy message generator 810 that processes a message and generates a noisy message by modifying the original message. Any appropriate techniques may be used to generate a noisy message from a message. The noisy message may be created, for example, by removing one or more words or characters, … or any combination of the foregoing. The modifications to create the noisy message may be performed randomly according to a probability distribution.). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Shrivastava, Ceccaldi, and Jiang, to use Jiang's method of applying a random modification, wherein the random modification comprises a randomly removed pattern from the first pattern, to implement the autonomous data modification system of Shrivastava. The suggestion and/or motivation for doing so is to improve the performance of the discriminator by introducing more types of random deformation into the input data.
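The random-modification technique Jiang describes (creating a noisy variant of a base sample by removing elements at random according to a probability distribution) can be sketched in a few lines. This is an illustrative sketch only; `make_noisy` is a hypothetical helper, not code from any cited reference:

```python
import random

def make_noisy(tokens, drop_prob=0.15, rng=None):
    """Create a 'noisy' copy of a sequence by randomly removing elements,
    in the spirit of Jiang [0081] (noisy message generator 810)."""
    rng = rng or random.Random()
    kept = [t for t in tokens if rng.random() >= drop_prob]
    # Keep at least one element so the noisy sample is never empty.
    return kept if kept else [rng.choice(tokens)]

base = "the quick brown fox jumps over the lazy dog".split()
noisy = make_noisy(base, drop_prob=0.3, rng=random.Random(0))
# (base, noisy) then plays the role of the claimed
# (base data sample, randomly modified data sample) pair.
```

Such (base, noisy) pairs correspond to the base/modified data sample pairing the claims recite.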
Regarding claim 13, Shrivastava teaches: A machine-learning system for modifying patterns in datasets using a generative adversarial network, comprising a generator network system and a discriminator network system, the machine-learning system comprising a memory and at least one processor, coupled to said memory, and operative to perform operations ([Shrivastava, column 17, line 66 – column 18, line 12] “In at least some embodiments, a system and/or server that implements a portion or all of one or more of the methods and/or techniques described herein, including the techniques to refine synthetic images, to train and execute machine learning algorithms including neural network algorithms, and the like, may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media … computing device 9000 includes one or more processors 9010 coupled to a main memory 9020 (which may comprise both non-volatile and volatile memory modules, and may also be referred to as system memory) via an input/output (I/O) interface 9030.” This passage teaches a system comprising a memory and at least one processor operative to perform operations. [Shrivastava, column 12, line 61-66] “Therefore, to help make the refined synthetic images 430 more realistic, an adversarial cost term 450 may be added to the overall objective, according to some embodiments. For example, in one embodiments, a generative network G.sub.θ, such as generator 120, and a discriminative network D.sub.ϕ, such as discriminator 130, may both be learned.” [Shrivastava, column 6, line 20 - 42] “Rather than generate images from scratch (e.g., the focus of most traditional generative models) a generative model, such as generator 120, may be coupled with a simulator or synthesizer, such as synthesizer 110 … Synthesizer 110 may generate synthetic data in any of various ways, according to different embodiments.
Synthesizer 110 may be configured to generate synthetic data based on a set of training data including labeled real images in which the labels of the real images may be sufficient to generate corresponding synthetic images (i.e., synthetic images that look very similar to corresponding real images in shape, pose and/or appearance) …” The real images correspond to the base data, and the synthetic image corresponds to the modified data. The synthesizer modifies the real data. The generator generates a refined synthetic image that goes into the discriminator based on the real-image/synthetic-image pair. This passage teaches that the generator modifies (refines) the original synthetic image into the refined synthetic image. Together, these passages teach the claimed GAN system). Claim 13 is a system claim having limitations similar to those of method claim 1. Therefore, it is rejected with the same rationale as claim 1 above. Regarding claim 20, Shrivastava teaches: A computer program product for modifying patterns in datasets using a generative adversarial network comprising a generator network system and a discriminator network system, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, said program instructions being executable by one or more computing systems or controllers to cause said one or more computing systems ([Shrivastava, column 17, line 66 – column 18, line 12] “In at least some embodiments, a system and/or server that implements a portion or all of one or more of the methods and/or techniques described herein, including the techniques to refine synthetic images, to train and execute machine learning algorithms including neural network algorithms, and the like, may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media … computing device 9000 includes one or more processors 9010 coupled to a main memory 9020 (which may comprise both non-volatile and
volatile memory modules, and may also be referred to as system memory) via an input/output (I/O) interface 9030.” This passage teaches a system comprising a memory and a processor that executes program instructions. [Shrivastava, column 12, line 61-66] “Therefore, to help make the refined synthetic images 430 more realistic, an adversarial cost term 450 may be added to the overall objective, according to some embodiments. For example, in one embodiments, a generative network G.sub.θ, such as generator 120, and a discriminative network D.sub.ϕ, such as discriminator 130, may both be learned.” [Shrivastava, column 6, line 20 - 42] “Rather than generate images from scratch (e.g., the focus of most traditional generative models) a generative model, such as generator 120, may be coupled with a simulator or synthesizer, such as synthesizer 110 … Synthesizer 110 may generate synthetic data in any of various ways, according to different embodiments. Synthesizer 110 may be configured to generate synthetic data based on a set of training data including labeled real images in which the labels of the real images may be sufficient to generate corresponding synthetic images (i.e., synthetic images that look very similar to corresponding real images in shape, pose and/or appearance) …” The real images correspond to the base data, and the synthetic image corresponds to the modified data. The synthesizer modifies the real data. The generator generates a refined synthetic image that goes into the discriminator based on the real-image/synthetic-image pair. This passage teaches that the generator modifies (refines) the original synthetic image into the refined synthetic image). Claim 20 is a computer program product claim having limitations similar to those of method claim 1. Therefore, it is rejected with the same rationale as claim 1 above. Regarding claim 4, Shrivastava teaches the method of claim 1.
Shrivastava does not specifically disclose wherein the generator is a neural network having as many output nodes as input nodes, and having less hidden layer nodes than the number of input nodes. Ceccaldi teaches wherein the generator is a neural network having as many output nodes as input nodes, and having less hidden layer nodes than the number of input nodes ([Ceccaldi, the first figure, 400 ‘Machine Learnt Network’] shows equal data sizes at the input and the output; [Ceccaldi, 0040] “Various units or layers may be used, such as convolutional, pooling (e.g., max pooling), deconvolutional, fully connected, or other types of layers. Within a unit or layer 435, any number of nodes is provided. For example, 100 nodes are provided. Later or subsequent units may have more, fewer, or the same number of nodes. In general, for convolution, subsequent units have more abstraction. For example, the first unit provides features from the image, such as one node or feature being a line found in the image. The next unit combines lines, so that one of the nodes is a corner. The next unit may combine features (e.g., the corner and length of lines) from a previous unit so that the node provides a shape indication. For transposed-convolution to reconstruct, the level of abstraction reverses. Each unit or layer 435 in the encoder 401 reduces the level of abstraction or compression while each unit or layer 435 in the decoder increases the level of abstraction”). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Shrivastava and Ceccaldi, to use Ceccaldi's generator having as many output nodes as input nodes, and having less hidden layer nodes than the number of input nodes, to implement the autonomous data modification system of Shrivastava.
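The claim-4 topology (as many output nodes as input nodes, with fewer hidden-layer nodes) is the familiar bottleneck-autoencoder shape. A minimal untrained NumPy sketch of such a generator (illustrative only; the layer sizes and weights are arbitrary assumptions, not taken from Ceccaldi):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 3                            # fewer hidden nodes than inputs
W_enc = 0.1 * rng.normal(size=(n_in, n_hidden))
W_dec = 0.1 * rng.normal(size=(n_hidden, n_in))  # output width equals input width

def generator(x):
    """Compress to the narrow hidden layer, then reconstruct at full width."""
    h = np.tanh(x @ W_enc)   # encoding discards some information (compression)
    return h @ W_dec         # decoding restores the original dimensionality

x = rng.normal(size=(1, n_in))
y = generator(x)
assert y.shape == x.shape    # as many output nodes as input nodes
```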
The suggestion and/or motivation for doing so is to generate values that do not exist in the original dataset: compressing the input data through a smaller number of hidden nodes discards some information, and decompressing it again introduces new information that was not in the original input data. Regarding claim 7, Shrivastava teaches the method of claim 1. Shrivastava does not specifically disclose wherein the joint loss function is a weighted combination of loss functions. Ceccaldi teaches wherein the joint loss function is a weighted combination of loss functions ([Ceccaldi, Claim 19] “The system of claim 18, wherein the generator network is trained using a loss function that is calculated as a combination of a first value from a first loss function provided by the decoder and a second value from a second loss function provided by an adversarial learnt network trained to classify concatenated features from the compact representation as either from a first domain or a second domain”). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Shrivastava and Ceccaldi, to use Ceccaldi's joint loss function that is a weighted combination of loss functions to implement the autonomous data modification system of Shrivastava. The suggestion and/or motivation for doing so is to improve the accuracy of the loss function by combining two different ways of calculating probability distributions ([Ceccaldi, 0069]). Regarding claim 11, Shrivastava teaches wherein the training of the generative adversarial network is terminated ([Shrivastava, col 14, line 1-3] The method of FIG.
6 may repeat until all the discriminative network updates for the current step have been performed, as in decision block 630). Shrivastava does not specifically disclose that training is terminated if a result of the joint loss function is smaller than a relative threshold value when comparing the result of the current iteration with a previous iteration. Ceccaldi teaches training that is terminated if a result of the joint loss function is smaller than a relative threshold value when comparing the result of the current iteration with a previous iteration ([Ceccaldi, Claim 10] calculating, by the discriminator network, a second value based on a second loss function for the classification; adjusting the encoder network as a function of the first value and second value; and repeating sequentially inputting, inputting, calculating, inputting, calculating, and adjusting until a training loss converges). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Shrivastava and Ceccaldi, to terminate the training process if a result of the joint loss function is smaller than a relative threshold value when comparing the result of the current iteration with a previous iteration, as taught by Ceccaldi, to implement the autonomous data modification system of Shrivastava. The suggestion and/or motivation for doing so is to improve the efficiency of the system, as repeating the training process after the loss function has converged does not improve the performance of the trained machine learning model and wastes computational resources. Regarding claim 12, Shrivastava teaches: wherein the training operation further comprises training the generator for building the model of the generator based on at least one of the base data samples, wherein the at least one of the base data samples lacks a corresponding modified data sample ([Shrivastava, col 12, line 33-41; and Fig.
4] the real image 440 that is input to the discriminator to train both the generator and the discriminator does not have any corresponding input synthetic image. Additionally, [Shrivastava, col 16, line 54 – col 17, line 5] indicates that the adversarial real data used in an adversarial net refinement process does not have to include actual examples of the real data; instead, similar real (e.g., refined) data that can provide refinement information can be used. Such similar real data does not have any corresponding modified data sample). Claim 19 is a system claim having limitations similar to those of method claim 12. Therefore, it is rejected with the same rationale as claim 12 above. Claims 3, 6, 8-10, 14, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Shrivastava (US 11475276 B1) in view of Ceccaldi (US 20190046068 A1), further in view of Jiang (US 20200042613 A1), and further in view of Yi (Yi et al., 2018, “DualGAN: Unsupervised Dual Learning for Image-to-Image Translation”). Regarding claim 3, Shrivastava teaches: further comprising training of different models for the generator network using the adversarial training method and using the pairs of data samples as input, wherein the modified data sample are modified ([Shrivastava, column 12, line 61-66] “Therefore, to help make the refined synthetic images 430 more realistic, an adversarial cost term 450 may be added to the overall objective, according to some embodiments. For example, in one embodiments, a generative network G.sub.θ, such as generator 120, and a discriminative network D.sub.ϕ, such as discriminator 130, may both be learned.” [Shrivastava, column 12, line 48-54] “Given pairs of synthetic and real images, generator 120 may be configured to minimize the l.sub.1 or l.sub.2 norm of the image difference 460 between the original synthetic image and the refined synthetic image.
Image difference 460 may be based on a comparison (e.g., mathematically) of the original synthetic image and the refined synthetic image, in some embodiments.” This passage teaches that the generator modifies (refines) the original synthetic image into the refined synthetic image. The different models for the generator network are taught by the additional reference Yi). Shrivastava in view of Ceccaldi and further in view of Jiang does not specifically disclose: wherein the modified data sample are modified according to a different aspect. Yi teaches: wherein the modified data sample are modified according to a different aspect ([Yi, page 3, Figure 1] shows the Generator and the Discriminator receiving a pair of inputs (fake, real) and using different Generators GA and GB. [Yi, page 3, left column, second paragraph, line 8-10] “The discriminator DA is trained with v as positive samples and GA(u; z) as negative examples, whereas DB takes u as positive and GB(v; z0) as negative” shows Yi's invention receiving a pair of inputs in the generator and the discriminator. [Yi, page 6, Figure 4; Figure 6] “Figure 4: Photo→sketch translation for faces. Results of DualGAN are generally sharper than those from cGAN, even though the former was trained using unpaired data, whereas the latter makes use of image correspondence” shows the process of modifying the data sample according to various aspects, for example photo→sketch or Chinese painting→oil painting). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Shrivastava, Ceccaldi, Jiang, and Yi, to use Yi's method in which the modified data samples are modified according to a different aspect to implement the autonomous data modification system of Shrivastava, Ceccaldi, and Jiang. The suggestion and/or motivation for doing so is to enable the invention to handle a wider variety of types of data. Claim 14 is a system claim having limitations similar to those of method claim 3. Therefore, it is rejected with the same rationale as claim 3 above.
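The Wasserstein joint loss and the convergence-based stopping rule that recur throughout these rejections can be illustrated generically. The sketch below uses the standard WGAN critic/generator losses and a hypothetical `converged` helper for the claim-11 relative-threshold test; it is a generic sketch, not the exact formulation of Shrivastava or Ceccaldi:

```python
import numpy as np

def wasserstein_losses(critic_real, critic_fake):
    """Standard WGAN losses: the critic maximizes the score gap between
    real (base) and generated (modified) samples; the generator minimizes it."""
    d_loss = -(np.mean(critic_real) - np.mean(critic_fake))
    g_loss = -np.mean(critic_fake)
    return d_loss, g_loss

def converged(curr, prev, rel_tol=1e-3):
    """Claim-11-style termination: stop when the joint loss changes by less
    than a relative threshold between the current and previous iteration."""
    return abs(curr - prev) < rel_tol * max(abs(prev), 1e-12)

d, g = wasserstein_losses(np.array([0.9, 1.1]), np.array([0.1, 0.3]))
# d ≈ -0.8 (critic score gap of 0.8), g ≈ -0.2
```

In a training loop, `converged(joint_loss, previous_joint_loss)` would gate the exit from the update iterations described in Shrivastava's FIG. 5-6.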

Prosecution Timeline

Apr 25, 2019 — Application Filed
May 24, 2022 — Non-Final Rejection (§101, §103)
Aug 31, 2022 — Response Filed
Oct 27, 2022 — Final Rejection (§101, §103)
Feb 01, 2023 — Request for Continued Examination
Feb 10, 2023 — Response after Non-Final Action
Mar 01, 2023 — Applicant Interview (Telephonic)
Mar 01, 2023 — Examiner Interview Summary
Mar 10, 2023 — Non-Final Rejection (§101, §103)
Jun 15, 2023 — Response Filed
Jun 20, 2023 — Applicant Interview (Telephonic)
Jun 20, 2023 — Examiner Interview Summary
Aug 17, 2023 — Final Rejection (§101, §103)
Nov 03, 2023 — Request for Continued Examination
Nov 07, 2023 — Response after Non-Final Action
Apr 09, 2024 — Non-Final Rejection (§101, §103)
Aug 19, 2024 — Response Filed
Oct 22, 2024 — Final Rejection (§101, §103)
Jan 27, 2025 — Request for Continued Examination
Jan 29, 2025 — Response after Non-Final Action
May 13, 2025 — Non-Final Rejection (§101, §103)
Oct 16, 2025 — Examiner Interview Summary
Oct 16, 2025 — Applicant Interview (Telephonic)
Oct 20, 2025 — Response Filed
Dec 03, 2025 — Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602569
EXTRACTING ENTITY RELATIONSHIPS FROM DIGITAL DOCUMENTS UTILIZING MULTI-VIEW NEURAL NETWORKS
2y 5m to grant Granted Apr 14, 2026
Patent 12602609
UPDATING MACHINE LEARNING TRAINING DATA USING GRAPHICAL INPUTS
2y 5m to grant Granted Apr 14, 2026
Patent 12579436
Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series
2y 5m to grant Granted Mar 17, 2026
Patent 12572777
Policy-Based Control of Multimodal Machine Learning Model via Activation Analysis
2y 5m to grant Granted Mar 10, 2026
Patent 12493772
LAYERED MULTI-PROMPT ENGINEERING FOR PRE-TRAINED LARGE LANGUAGE MODELS
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
38%
Grant Probability
84%
With Interview (+46.2%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
