Prosecution Insights
Last updated: April 19, 2026
Application No. 17/311,895

DATA DENOISING BASED ON MACHINE LEARNING

Final Rejection (§103)
Filed: Jun 08, 2021
Examiner: MULLINAX, CLINT LEE
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nokia Technologies Oy
OA Round: 2 (Final)
Grant Probability: 48% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 4y 4m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 48% (59 granted / 123 resolved; -7.0% vs TC average)
Interview Lift: strong, +38.3% (allowance rate for resolved cases with interview vs. without)
Typical Timeline: 4y 4m average prosecution; 25 applications currently pending
Career History: 148 total applications across all art units
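The headline figures above are simple ratios; a quick arithmetic check (only the aggregate 59/123 count is given above, so the with/without-interview split used below is hypothetical):

```python
# Career allow rate from the raw counts shown above
granted, resolved = 59, 123
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # ~48.0%

# Interview lift = allow rate with interview minus allow rate without.
# These split rates are hypothetical; the dashboard reports only +38.3%.
with_interview = 0.863
without_interview = 0.480
lift = with_interview - without_interview  # ~38.3 percentage points
```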

Statute-Specific Performance

§101: 22.8% (-17.2% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 6.3% (-33.7% vs TC avg)
§112: 13.1% (-26.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 123 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the amendments and remarks filed on 12/03/2025. Claims 56-61, 63-74, and 76-79 are pending. Claims 56, 58-59, 61, 63-64, 66-73, and 77-79 have been amended.

Response to Arguments

Applicant’s arguments with respect to the claim rejection(s) under 35 U.S.C. 101 have been fully considered and are persuasive. The rejection(s) under 35 U.S.C. 101 have been withdrawn.

Applicant’s arguments with respect to the interpretation of claim 71 under 35 U.S.C. 112(f) have been considered but are not persuasive. The claimed “one or more memory units comprising instructions” remains interpreted as passing the three-prong test, and the interpretation is therefore maintained.

Applicant’s arguments with respect to the rejections of claims 19, 26, and 33 under 35 U.S.C. 103 have been considered but are not persuasive. Applicant argues that no reference teaches the amended limitation now reciting “receiving, from one or more sensors of a mobile phone, a first set of noisy data samples and a second set of noisy data samples captured by the one or more sensors”. This argument is moot because it does not apply to the combination of references used in the current rejection. See the 35 U.S.C. § 103 section below for the full mapping of the claim limitations necessitated by applicant’s amendments.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “one or more memory units comprising instructions” in claim 71.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Further, although the limitation “one or more memory units comprising” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant’s page 10, line 18 - page 11, line 6 recite sufficient structure, stating: “The denoising model 301, the noise model 403, and the discriminator 405 may be implemented with a single processor or circuitry, or alternatively they may have two or more separate and dedicated processors or circuitries. In a similar manner, they may have a single memory unit, or two or more separate and dedicated memory units”. Here, it is interpreted that “in a similar manner” makes the implementation of the memory unit(s) the same as the “processors or circuitries”; the memory units are thus interpreted to be hardware.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 56-58, 60-61, 63-65, 67-68, 70-72, 74, and 76-79 are rejected under 35 U.S.C.
103 as being unpatentable over Gandhi et al. (“Denoising Time Series Data Using Asymmetric Generative Adversarial Networks”, 2018), hereinafter Gandhi, in view of Senior et al. (US Pub 20170011738), hereinafter Senior, in view of Vogels et al. (US Pub 20180293713), hereinafter Vogels.

Regarding claims 56 and 71, Gandhi teaches a method; an apparatus comprising: one or more processors; and one or more memory units storing instructions that, when executed by the one or more processors, cause the apparatus to (section 1 teaches “We solve these problems by proposing an online, fully automated, end to end system for denoising time series trained using unpaired training corpora” of a GAN architecture algorithm, well known to be implemented as software on a computer, wherein a computer includes one or more memories communicatively coupled to one or more processors for executing code to perform the embodiments of the disclosure): receiving, from one or more sensors, a first set of noisy data samples and a second set of noisy data samples captured by the one or more sensors (sections 4-4.1 and Fig. 1 teach obtaining EEG electrode “noisy signal A…given n training signals”, in which multiple samples can be the first/second set. Section 5.2 teaches “we split the datasets into two parts”, each including noisy data samples (alternative first/second set), to train the GAN and use the discriminators taught in section 4.1.); generating, using a denoising machine learning model comprising a first plurality of parameters, a set of denoised data samples based on the first set of the noisy data samples (sections 4-4.1 and Fig. 1 teach G_B (denoising machine learning model) that maps (generating) noisy time series A (the first set of the noisy data samples) to clean time series B (the set of denoised data samples)); processing, using a noise machine learning model, the set of the denoised data samples to generate a third set of noisy data samples comprising additive noise added by the noise machine learning model (sections 4-4.1 and Fig. 1 teach G_N (noise machine learning model) to get a noisy time series (third set of noisy data samples) from clean data B using a neural network (additive noise added by the noise machine learning model)); determining, using a discriminator machine learning model and based on the second set of the noisy data samples and the third set of the noisy data samples, a discrimination value (sections 4-4.1, 5.2, and Fig. 1 teach discriminators (discriminator machine learning model) processing B+G_N data (third set) and the “original” data to determine whether the signals are “real or generated” (discrimination value)); and adjusting, based on the discrimination value, the first plurality of parameters to enable near real-time training of the denoising machine learning model (sections 4-4.1 and Fig. 1 teach loss functions based on the discriminator output to train the GAN models. Further, sections 1 and 5.2 teach “We solve these problems by proposing an online, fully automated, end-to-end system for denoising time series trained using unpaired training corpora. An online and fully automated system makes it useful in real-time applications.”).

However, Gandhi does not explicitly teach receiving, from one or more sensors of a mobile phone, a first set of noisy data samples and a second set of noisy data samples captured by the one or more sensors. Senior teaches receiving, from one or more sensors of a mobile phone, a first set of noisy data samples and a second set of noisy data samples captured by the one or more sensors (paragraphs 0024-0025, 0087-0088, and Fig. 1 teach the user’s mobile phone device having sensors and sending/obtaining collected noisy user data to be processed remotely or locally). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement Senior’s teachings of training multiple neural networks as generators and discriminators on noisy and clean data obtained from user mobile phone devices into Gandhi’s teaching of GAN training and operations on noisy vs. clean data in order to improve prediction delivery time to a user phone (Senior, paragraphs 0024-0025, 0087-0088, and Fig. 1).

Further, Gandhi at least implies a method; an apparatus comprising: one or more processors; and one or more memory units storing instructions that, when executed by the one or more processors; and determining, using a second neural network and based on the second set of the noisy data samples and the third set of the noisy data samples, a discrimination value (see mappings above); however, Vogels teaches a method; an apparatus comprising: one or more processors; and one or more memory units storing instructions that, when executed by the one or more processors… determining, using a second neural network and based on the second set of the noisy data samples and the third set of the noisy data samples, a discrimination value (paragraphs 0186-0188, 0200, and 0236-0244 teach a processor and memory in a computer system for performing the embodiments of the disclosure, including “[t]he discriminator may be configured to receive the input image, the reference image, and the output image produced by the generator, and to generate a quality metric based on a comparison of the output image or the reference image with the input image”).
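The claim 56/71 data flow mapped above (denoiser, noise model, discriminator, parameter adjustment) can be sketched schematically. This is an illustrative toy only, not code from Gandhi or Vogels: the linear denoiser, the mean-gap "discriminator", and the update rule are hypothetical stand-ins for the neural networks and gradient steps the references describe.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.array([0.5])  # "first plurality of parameters" (denoiser)

def denoiser(x, theta):
    # denoising machine learning model: noisy -> denoised (toy linear map)
    return theta[0] * x

def noise_model(x):
    # noise machine learning model: adds additive noise back to clean data
    return x + rng.normal(0.0, 0.1, size=x.shape)

def discriminator(real_noisy, generated_noisy):
    # returns a scalar "discrimination value" in (0, 1): how distinguishable
    # the generated noisy set is from the real noisy set (toy statistic)
    gap = abs(real_noisy.mean() - generated_noisy.mean())
    return 1.0 / (1.0 + np.exp(-gap))

# The steps recited in claim 56:
first_set = rng.normal(1.0, 0.2, 100)   # noisy samples from "sensors"
second_set = rng.normal(1.0, 0.2, 100)

denoised = denoiser(first_set, theta)            # generate denoised set
third_set = noise_model(denoised)                # re-noised "third set"
d_value = discriminator(second_set, third_set)   # discrimination value

# adjust the first plurality of parameters based on the discrimination value
# (stand-in for a gradient step on an adversarial loss)
theta = theta + 0.1 * (d_value - 0.5)
```

In the actual references the discriminator is itself a trained neural network and the adjustment is backpropagation through an adversarial loss; only the wiring between the three models is meant to track the claim language here.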
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Gandhi’s teaching of GAN training and operations on noisy vs. clean data, as modified by Senior’s teachings of training multiple neural networks as generators and discriminators on noisy and clean data obtained from user mobile phone devices, to include Vogels’ teachings of a GAN having multiple noisy data sample sets and multiple inputs and edited inputs to a discriminator neural network in order to improve the accuracy of results from GAN training (Gandhi, paragraph 0033).

Regarding claim 57, the combination of Gandhi, Senior, and Vogels teaches all the claim limitations of claim 56 above; and further teaches wherein the first set of the noisy data samples comprises one or more first noisy images, one or more first noisy videos, one or more first noisy 3D scans, or one or more first noisy audio signals, and wherein the second set of the noisy data samples comprises one or more second noisy images, one or more second noisy videos, one or more second noisy 3D scans, or one or more second noisy audio signals (Vogels, paragraphs 0092-0098, 0186-0188, and 0200 teach noisy image datasets). Gandhi, Senior, and Vogels are combinable for the same rationale as set forth above with respect to claims 56 and 71.

Regarding claims 58 and 72, the combination of Gandhi, Senior, and Vogels teaches all the claim limitations of claims 56 and 71 above; and further teaches training, based on additional noisy data samples and by further adjusting the first plurality of the parameters, the denoising machine learning model, such that the discrimination value approaches a predetermined value (Gandhi, sections 4-5.1 and Fig. 1 teach loss functions based on the discriminator output to implement a training process for the GAN models, including the generators (denoising machine learning model)); after the training of the denoising machine learning model, receiving a noisy data sample; denoising, using the trained denoising machine learning model, the noisy data sample to generate a denoised data sample; and presenting to a user, or sending for further processing, the denoised data sample (Gandhi, sections 4-4.1 and Fig. 1 teach that once the model is trained on the loss functions, G_B (denoising machine learning model) is used to map (denoise) noisy time series A to clean time series B (denoised data samples), and the time series are further processed by discriminators (sending for further processing, the denoised data sample)). Gandhi at least implies these limitations (see mappings above); however, Vogels teaches them explicitly (paragraphs 0184-0201 teach training the “trainable parameters” included in a “generator” (denoising machine learning model) of a GAN system on “noisy image[s]” to “reach a local minimum of their loss function”, and “such that a statistical value of the quality metric (discrimination value) generated by the discriminator approaches a predetermined value”; “Once the GAN has been trained, the generator (denoising machine learning model) may be used to denoise a new input image” that is noisy, for further processing). Gandhi, Senior, and Vogels are combinable for the same rationale as set forth above with respect to claims 56 and 71.

Regarding claims 60 and 74, the combination of Gandhi, Senior, and Vogels teaches all the claim limitations of claims 56 and 71 above; and further teaches wherein the first set of the noisy data samples and the second set of the noisy data samples are received from a same source (Gandhi, sections 1 and 5.1-5.2 teach obtained EEG signal data). Gandhi at least implies this limitation (see mappings above); however, Vogels teaches it explicitly (paragraphs 0006, 0119-0122, 0186-0188, and 0200-0202 teach multiple image sets made from MC path tracing of 3D images (source)). Gandhi, Senior, and Vogels are combinable for the same rationale as set forth above with respect to claims 56 and 71.

Regarding claim 61, the combination of Gandhi, Senior, and Vogels teaches all the claim limitations of claim 56 above; and further teaches wherein the first set of the noisy data samples, the second set of the noisy data samples, and the noisy data sample are received from a same type of sensor (Gandhi, sections 1 and 5.1-5.2 teach EEG signal data obtained from “electrodes on the scalp”).
Gandhi at least implies wherein the first set of the noisy data samples, the second set of the noisy data samples, and the noisy data sample are received from one or more similar sensors (see mappings above); however, Vogels teaches this limitation (paragraphs 0092-0098, 0186-0188, and 0200 teach multiple noisy image datasets from a camera on a remote device). Gandhi, Senior, and Vogels are combinable for the same rationale as set forth above with respect to claims 56 and 71.

Regarding claim 63, the combination of Gandhi, Senior, and Vogels teaches all the claim limitations of claim 56 above; and further teaches wherein the denoising machine learning model and the discriminator machine learning model comprise a generative adversarial network (Vogels, paragraphs 0180-0201 teach that multiple “(GANs) may be used for training a machine-learning based denoiser” and performing the steps of denoising and predicting for noisy images via the discriminator). Gandhi, Senior, and Vogels are combinable for the same rationale as set forth above with respect to claims 56 and 71.

Regarding claim 64, the combination of Gandhi, Senior, and Vogels teaches all the claim limitations of claim 56 above; and further teaches wherein the discriminator machine learning model comprises a second plurality of parameters, and wherein the adjusting the first plurality of the parameters is based on fixing the second plurality of the parameters, the method further comprising: adjusting the second plurality of the parameters based on fixing the first plurality of the parameters (Vogels, paragraphs 0184-0201 teach training the “trainable parameters” included in a “generator” (first neural network) and a “discriminator” (discriminator machine learning model), and that the models “may be alternatingly trained” based on the other model’s outputs).
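The alternating schedule recited in claim 64 (adjust one model's parameters while the other's are held fixed, then swap) is the standard GAN training pattern. A minimal sketch follows; the update functions are hypothetical stand-ins for gradient steps, not code from any cited reference:

```python
# Alternate updates: adjust the denoiser's ("first plurality" of) parameters
# while the discriminator's ("second plurality") are fixed, then swap.

def update_generator(gen_params, disc_params):
    # disc_params are read but never modified here (held fixed)
    return [p + 0.01 for p in gen_params]

def update_discriminator(disc_params, gen_params):
    # gen_params are read but never modified here (held fixed)
    return [p - 0.01 for p in disc_params]

gen_params = [0.0]   # first plurality of parameters (denoiser)
disc_params = [0.0]  # second plurality of parameters (discriminator)

for _ in range(3):
    gen_params = update_generator(gen_params, disc_params)
    disc_params = update_discriminator(disc_params, gen_params)
```

Each phase treats the other model's parameters as constants, which is what "adjusting ... based on fixing" maps to in the Vogels passage cited above.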
Gandhi, Senior, and Vogels are combinable for the same rationale as set forth above with respect to claims 56 and 71.

Regarding claims 65 and 76, the combination of Gandhi, Senior, and Vogels teaches all the claim limitations of claims 56 and 71 above; and further teaches wherein the discrimination value indicates a probability, or a scalar quality value, of a noisy data sample of the second set of the noisy data samples or of the third set of the noisy data samples belonging to a class of real noisy data samples or a class of fake noisy data samples (Vogels, paragraphs 0184-0201 teach “The quality metric may indicate a relative probability of the output image or the reference image belonging to a first class of denoised images as compared to a second class of ground truth images. The discriminator outputs the quality metric to the generator” from the different datasets). Gandhi, Senior, and Vogels are combinable for the same rationale as set forth above with respect to claims 56 and 71.

Regarding claims 67 and 77, the combination of Gandhi, Senior, and Vogels teaches all the claim limitations of claims 56 and 71 above; and further teaches wherein the noise machine learning model comprises a third plurality of parameters, the method further comprising: receiving a set of reference noise data samples; generating, using the noise machine learning model, a set of generated noise data samples; and training, based on the set of reference noise data samples and the set of generated noise data samples, the noise machine learning model (Gandhi, sections 4-5.1 and Fig. 1 teach B+G_N to get a noisy time series (generating, using the noise machine learning model, a set of generated noise data samples) from clean dataset samples (receiving a set of reference noise data samples) using a neural network (noise machine learning model) with weights and parameters, which is trained using loss function values).
Regarding claim 68, the combination of Gandhi, Senior, and Vogels teaches all the claim limitations of claim 56 above; and further teaches wherein: the noise machine learning model further comprises a modulation model configured to modulate data samples to generate noisy data samples, and the noise machine learning model outputs one or more coefficients to the modulation model; or the noise machine learning model further comprises a convolutional model configured to perform convolution functions on data samples to generate noisy data samples, and the noise machine learning model outputs one or more parameters to the convolutional model (Gandhi, sections 4-5.1 and Fig. 1 teach B+G_N to get a noisy time series from clean dataset samples using a neural network that includes a “convolution” model (the noise machine learning model further comprising a convolutional model) utilizing convolution filters on data).

Regarding claims 70 and 78, the combination of Gandhi, Senior, and Vogels teaches all the claim limitations of claims 56 and 71 above; and further teaches receiving, from the one or more sensors, a fourth set of noisy data samples and a fifth set of noisy data samples, wherein respective ones of the fourth set of the noisy data samples comprise a first portion and a second portion; denoising, using the denoising machine learning model, the first portion; processing, using the noise machine learning model, the denoised first portion; determining, using the discriminator machine learning model and based on the processed denoised first portions, the second portions, and the fifth set of the noisy data samples, a second discrimination value; and adjusting, based on the second discrimination value, the first plurality of the parameters (Gandhi, sections 4-4.1 and Fig. 1 teach “noisy signal A…given training signals”, in which multiple samples can be the first/second set. Section 5.2 teaches “we split the datasets into two parts”, each including noisy data samples (alternative first/second set), to train the GAN and use the discriminators taught in section 4.1; G_B (denoising machine learning model) maps (denoises) noisy time series A to clean time series B (denoised data samples); B+G_N gets a noisy time series (third set of noisy data samples) from clean data using a neural network (noise machine learning model); discriminators (discriminator machine learning model) process B+G_N data (third set) and the “original” data to determine whether the signals are “real or generated” (discrimination value); and loss functions based on the discriminator output train the GAN models). Further, Gandhi at least implies process iteration; however, Vogels teaches it explicitly (paragraphs 0184-0201 teach iteratively training the “trainable parameters” included in a “generator” and “discriminator” of a GAN system on “noisy image[s]” to “reach a local minimum of their loss function”, and “such that a statistical value of the quality metric (discrimination value) generated by the discriminator approaches a predetermined value”). Gandhi, Senior, and Vogels are combinable for the same rationale as set forth above with respect to claims 56 and 71.
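The convolutional alternative in claim 68 (a noise model that outputs parameters to a convolutional model, which corrupts clean samples) can be illustrated with a toy sketch. The kernel values and signal below are hypothetical, not taken from Gandhi:

```python
import numpy as np

rng = np.random.default_rng(1)

def noise_model_parameters():
    # The noise machine learning model "outputs one or more parameters to
    # the convolutional model" - here, a hypothetical 3-tap smoothing kernel.
    return np.array([0.25, 0.5, 0.25])

def convolutional_model(samples, kernel):
    # Convolutional model: performs a convolution function on the (clean)
    # data samples, plus additive noise, to generate noisy data samples.
    blurred = np.convolve(samples, kernel, mode="same")
    return blurred + rng.normal(0.0, 0.05, size=samples.shape)

clean = np.sin(np.linspace(0, 2 * np.pi, 64))  # stand-in clean samples
noisy = convolutional_model(clean, noise_model_parameters())
```

In the Gandhi mapping, the convolution filters are internal layers of the noise network G_N; the sketch just separates "parameters out" from "convolution applied" to mirror the claim's structure.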
Regarding claim 79, Vogel teaches an apparatus comprising: one or more processors; and memory comprising instructions that, when executed by the one or more processors, cause the apparatus to (paragraphs 0186-0188, 0200, and 0236-0244 teach a processor and memory in a computer system for performing the embodiments of the disclosure): receive a denoising machine learning model, wherein the denoising machine learning model is trained using a generative adversarial network (paragraphs 0006 and 0180-0201 teach “(GANs) may be used for training a machine-learning based denoiser”); receive, from a first one or more sensors , a noisy data sample, wherein the denoising machine learning model is trained using data received from a second one or more sensors that comprise a same type of sensor as the first one or more sensors (paragraphs 0006, 0180-0201, and 0241 teach training the “trainable parameters” included in a “generator” (denoising machine learning model) of a GAN system on “noisy image[s]” from device cameras (first/second sensors) to “reach a local minimum of their loss function”. “Once the GAN has been trained, the generator may be used to denoise a new input image” that is noisy for further processing); generate, using the denoising machine learning model, a denoised data sample based on the noisy data sample (paragraphs 0006 and 0180-0201 teach training the “trainable parameters” included in a “generator” (denoising machine learning model) of a GAN system on “noisy image[s]” from cameras to “reach a local minimum of their loss function” and output a “denoised image”); and . However, Vogel does not explicitly teach from a first one or more sensors of a mobile phone, and provide the denoised data sample to the mobile phone for further processing comprising at least one of image recognition, object recognition, natural language processing, voice recognition, or speech-to-text detection. 
Senior teaches from a first one or more sensors of a mobile phone, and provide the denoised data sample to the mobile phone for further processing comprising at least one of image recognition, object recognition, natural language processing, voice recognition, or speech-to-text detection (paragraphs 0024-0025, 0058, 0087-0088, and Fig. 1 teach the user mobile phone device having sensors and sending collected noisy user data to be processed remotely or locally for audio recognition, including voice). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement Senior’s teachings of training multiple neural networks as generators and discriminators on noisy and clean data obtained from user mobile phone devices into Vogel’s teachings of a GAN having multiple noisy data sample sets and multiple inputs and edited inputs to a discriminator neural network, in order to improve prediction delivery time to a user phone (Senior, paragraphs 0024-0025, 0087-0088, and Fig. 1).

Claims 59 and 73 are rejected under 35 U.S.C. 103 as being unpatentable over Gandhi et al (“Denoising Time Series Data Using Asymmetric Generative Adversarial Networks”, 2018) hereinafter Gandhi, in view of Senior et al (US Pub 20170011738) hereinafter Senior, in view of Vogels et al (US Pub 20180293713) hereinafter Vogel, in view of Yu et al (“UAV-Enabled Spatial Data Sampling in Large-Scale IoT Systems Using Denoising Autoencoder Neural Network”, 2018) hereinafter Yu.

Regarding claims 59 and 73, the combination of Gandhi, Senior, and Vogel teaches all the claim limitations of claims 56 and 71 above, and further teaches training, based on additional noisy data samples and by further adjusting the first plurality of the parameters, the denoising machine learning model, such that the discrimination value approaches a predetermined value (Gandhi, sections 4-5.1 and Fig. 1 teach loss functions based on the discriminator output to implement a training process for the GAN models, including the generators (denoising machine learning model)). However, the combination does not explicitly teach after the training of the denoising machine learning model, delivering the trained denoising machine learning model to a second computing device; receiving a noisy data sample from a sensor of the second computing device; denoising, by the second computing device and using the trained denoising machine learning model, the noisy data sample to generate a denoised data sample; and presenting to a user, or sending for further processing, the denoised data sample.

Yu teaches after the training of the denoising machine learning model, delivering the trained denoising machine learning model to a second computing device (section 4 and Fig. 3 teach “wireless sensor nodes as end devices, UAVs as mobile edge devices and IoT cloud platform”, wherein denoising autoencoder (DAE) “parameter sets are learned through the training in the cloud. The parameters of encoders in DAE models are then sent to UAV (after the training of the denoising machine learning model, delivering the trained denoising machine learning model to a second computing device) for data encoding (denoising).”); receiving a noisy data sample from a sensor of the second computing device, and denoising, by the second computing device and using the trained denoising machine learning model, the noisy data sample to generate a denoised data sample (section 4 and Fig. 3 teach “UAVs have the capability of communicating with sensor nodes” collecting data (receiving a noisy data sample from a sensor of the second computing device) for the encoding via the trained encoder of a denoising autoencoder on the UAV (denoising, by the second computing device and using the trained denoising machine learning model, the noisy data sample to generate a denoised data sample)); and providing the denoised data sample to the mobile phone for further processing (section 4 and Fig. 3 teach denoising autoencoder (DAE) “parameters of decoders are kept in the cloud for data reconstruction” (sending for further processing) of the encoded values (the denoised data sample) from the UAV encoder and passed to the mobile edge devices (phone)). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Gandhi’s teaching of GAN training and operations on noisy versus clean data, as modified by Senior’s teachings of training multiple neural networks as generators and discriminators on noisy and clean data obtained from user mobile phone devices, as modified by Vogel’s teachings of a GAN having multiple noisy data sample sets and multiple inputs and edited inputs to a discriminator neural network, to include deploying denoising autoencoder neural networks across different devices for sensor data encoding as taught by Yu, in order to “address the challenge of accurate and efficient data sampling and reconstruction in large-scale IoT systems” using a “DAE neural network” (Yu, section 7).

Claims 66 and 69 are rejected under 35 U.S.C.
103 as being unpatentable over Gandhi et al (“Denoising Time Series Data Using Asymmetric Generative Adversarial Networks”, 2018) hereinafter Gandhi, in view of Senior et al (US Pub 20170011738) hereinafter Senior, in view of Vogels et al (US Pub 20180293713) hereinafter Vogel, in view of Dangeti et al (“Denoising Techniques – A Comparison”, 2003) hereinafter Dangeti.

Regarding claim 66, the combination of Gandhi, Senior, and Vogel teaches all the claim limitations of claim 56 above; however, the combination does not explicitly teach determining, based on a type of a noise process through which the first set of noisy data samples and the second set of noisy data samples are generated, one or more noise types; and determining, based on the one or more noise types, the noise machine learning model corresponding to the noise process.

Dangeti teaches determining, based on a type of a noise process through which the first set of noisy data samples and the second set of noisy data samples are generated, one or more noise types; and determining, based on the one or more noise types, the noise machine learning model corresponding to the noise process (abstract teaches “Different noise models including additive and multiplicative types are used. They include Gaussian noise, salt and pepper noise, speckle noise and Brownian noise (noise types). Selection of the denoising algorithm (model) is application dependent. Hence, it is necessary to have knowledge about the noise present in the image so as to select the appropriate denoising algorithm (determining, based on the one or more noise types, the noise machine learning model corresponding to the noise process). The filtering approach has been proved to be the best when the image is corrupted with salt and pepper noise. The wavelet based approach finds applications in denoising images corrupted with Gaussian noise”.
Further, pages 14 and 17 teach the mean filtering model (noise machine learning model) is best for the “impulsive” noise type, and the “LMS adaptive filter (noise machine learning model) works well for images corrupted with salt and pepper type noise. But this filter does a better denoising job compared to the mean filter.”).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Gandhi’s teaching of GAN training and operations on noisy versus clean data, as modified by Senior’s teachings of training multiple neural networks as generators and discriminators on noisy and clean data obtained from user mobile phone devices, as modified by Vogel’s teachings of a GAN having multiple noisy data sample sets and multiple inputs and edited inputs to a discriminator neural network, to include denoising model selection based on noise type determination as taught by Dangeti, in order to classify and quantify the best denoising algorithm for a specific application based on its “efficiency” and “performance” (Dangeti, section 1.2, pages 14 and 17).

Regarding claim 69, the combination of Gandhi, Senior, Vogel, and Dangeti teaches all the claim limitations of claim 66 above, and further teaches training one or more machine learning models corresponding to one or more noise types; and selecting, from the one or more machine learning models, a machine learning model to be used as the noise machine learning model (Dangeti, section 3.2.2 teaches the “adaptive filter (ML model) iteratively adjusts its parameters” (training), including the LMS adaptive filter (ML model) that is taught to be selected from different filter models for use on “salt and pepper type noise” (corresponding to one or more noise types; selecting, from the one or more machine learning models, a machine learning model to be used as the noise machine learning model)).
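Dangeti's selection principle (determine the noise type first, then choose the denoising algorithm that matches it) amounts to a dispatch table in code. The sketch below is purely illustrative: the 1-D filters, the table, and the gaussian/salt-and-pepper mapping are assumptions for demonstration, not Dangeti's actual comparison or ranking:

```python
import numpy as np

def mean_filter(x, k=3):
    """Sliding-window mean; suited to additive Gaussian-type noise."""
    xp = np.pad(x, k // 2, mode="edge")
    return np.array([xp[i:i + k].mean() for i in range(len(x))])

def median_filter(x, k=3):
    """Sliding-window median; suited to impulsive (salt-and-pepper) noise."""
    xp = np.pad(x, k // 2, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

# Hypothetical selection table: knowledge of the noise process drives the
# choice of denoising algorithm, in the spirit of Dangeti's comparison.
DENOISER_BY_NOISE_TYPE = {
    "gaussian": mean_filter,
    "salt_and_pepper": median_filter,
}

def denoise(x, noise_type):
    return DENOISER_BY_NOISE_TYPE[noise_type](x)

# Isolated impulses are removed exactly by the median filter.
signal = np.ones(20)
corrupted = signal.copy()
corrupted[[5, 12]] = 10.0
restored = denoise(corrupted, "salt_and_pepper")
```

The deterministic demonstration at the end shows why the selection matters: the median filter removes the isolated impulses exactly, while the mean filter would smear them into neighboring samples, matching the general observation that the appropriate denoiser is application dependent.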
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Gandhi’s teaching of GAN training and operations on noisy versus clean data, as modified by Senior’s teachings of training multiple neural networks as generators and discriminators on noisy and clean data obtained from user mobile phone devices, as modified by Vogel’s teachings of a GAN having multiple noisy data sample sets and multiple inputs and edited inputs to a discriminator neural network, to include denoising model parameter adjusting and model selection based on noise type determination as taught by Dangeti, in order to classify and quantify the best denoising algorithm for a specific application based on its “efficiency” and “performance” (Dangeti, section 1.2, pages 14-17).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLINT MULLINAX, whose telephone number is 571-272-3241.
The examiner can normally be reached on Mon - Fri 8:00-4:30 PT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexey Shmatov, can be reached at 571-270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.M./
Examiner, Art Unit 2123

/ALEXEY SHMATOV/
Supervisory Patent Examiner, Art Unit 2123

Prosecution Timeline

Jun 08, 2021
Application Filed
Sep 04, 2025
Non-Final Rejection — §103
Dec 03, 2025
Response Filed
Dec 23, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561620
Machine Learning-Based URL Categorization System With Noise Elimination
2y 5m to grant · Granted Feb 24, 2026
Patent 12554962
CONFIGURABLE PROCESSOR ELEMENT ARRAYS FOR IMPLEMENTING CONVOLUTIONAL NEURAL NETWORKS
2y 5m to grant · Granted Feb 17, 2026
Patent 12547887
SYSTEM FOR DETECTING ELECTRIC SIGNALS
2y 5m to grant · Granted Feb 10, 2026
Patent 12518169
SYSTEMS AND METHODS FOR SAMPLE GENERATION FOR IDENTIFYING MANUFACTURING DEFECTS
2y 5m to grant · Granted Jan 06, 2026
Patent 12493771
DEEP LEARNING MODEL FOR ENERGY FORECASTING
2y 5m to grant · Granted Dec 09, 2025
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
86%
With Interview (+38.3%)
4y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 123 resolved cases by this examiner. Grant probability derived from career allow rate.
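The projected figures combine by simple addition, assuming (as the dashboard appears to) that the interview lift is additive in percentage points on top of the career allow rate:

```python
# Career allow rate and interview lift, read from the projections above.
base_grant_probability = 48.0  # percent
interview_lift = 38.3          # percentage points (assumed additive)

with_interview = base_grant_probability + interview_lift
print(round(with_interview))   # prints 86, the "With Interview" projection
```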
