Prosecution Insights
Last updated: April 19, 2026
Application No. 18/548,151

DEVICE AND METHOD FOR DENOISING AN INPUT SIGNAL

Non-Final OA: §102, §103, nonstatutory double patenting
Filed: Aug 28, 2023
Examiner: SHIFERAW, ELENI A
Art Unit: 2497
Tech Center: 2400 (Computer Networks)
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)

Grant Probability: 37% (At Risk)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 5y 1m
Grant Probability With Interview: 73%

Examiner Intelligence

Career Allow Rate: 37% (49 granted / 132 resolved; -20.9% vs Tech Center average)
Interview Lift: +35.5% (allowance rate among resolved cases with an interview vs. without)
Avg Prosecution: 5y 1m (typical timeline)
Total Applications: 142 across all art units (10 currently pending)

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 132 resolved cases.

Office Action

Grounds: §102, §103, nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-15 are canceled. Claims 16-29 are newly added and pending.

Amendment

The amendment to the specification filed 8/28/23 is not accepted because some of the pages are blurred and are not legible. Applicant is encouraged to submit a legible copy.

Claim Objections

The examiner suggests amending the claims as follows: replace "first part" with "a first machine learning model" or "generator" (claim 1 and throughout). Similarly, appropriate correction is required for the limitation "a second part" in claim 18 and throughout. Paragraph 96 of applicant's disclosure states that the first and second parts are the generator and the discriminator, respectively, while paragraph 18 discusses the first part being a machine learning model. Appropriate correction is required.

Claim 22 is objected to because the formula recited ( ) is not legible and the claim fails to specify the plurality of variables in the equation. The claim only specifies the variables x^3 and G.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 16-29 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 14-24 of copending Application No. 18/548,135 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other; see the comparison of the claims of the instant application 18/548,151 with those of co-pending application 18/548,135 below.

Instant application 18/548,151, claim 18:
18. (New) The method according to claim 16, wherein the first part is provided based on training the first part to denoise a provided input signal, wherein the training of the first part includes the following steps: providing a first input signal and a first value to the first part, wherein the first input signal characterizes a noisy signal and the first value characterizes a randomly drawn value; determining, by the first part, a first output signal for the first input signal and the first value; determining, by a second part, a second value based on the first output signal, wherein the second value characterizes a probability of the first output signal to characterize a noisy signal; determining, by the second part, a third value based on a supplied second input signal, wherein the second input signal characterizes a non-noisy signal and wherein the third value characterizes a probability of the second input signal to characterize a non-noisy signal; training the first part and the second part, wherein the training includes: adapting a plurality of parameters of the first part according to a gradient of the second value with respect to the plurality of parameters of the first part, and adapting a plurality of parameters
of the second part according to a gradient of a sum of the second value and the third value with respect to the plurality of parameters of the second part.

Co-pending application 18/548,135, claim 14:
14. (New) The computer-implemented method for training a machine learning system to denoise a provided input signal, the training of the machine learning system comprising the following steps: providing a first input signal and a first value to a first part of the machine learning system, wherein the first input signal characterizes a noisy signal and the first value characterizes a randomly drawn value; determining, by the first part, a first output signal for the first input signal and the first value; determining, by a second part of the machine learning system, a second value based on the first output signal, wherein the second value characterizes a probability of the first output signal to characterize a noisy signal; determining, by the second part, a third value based on a supplied second input signal, wherein the second input signal characterizes a non-noisy signal and wherein the third value characterizes a probability of the second input signal to characterize a non-noisy signal; and training the machine learning system, wherein training includes: adapting a plurality of parameters of the first part according to a gradient of the second value with respect to a plurality of parameters of the first part, adapting a plurality of parameters of the second part according to a gradient of a sum of the second value and the third value with respect to the plurality of parameters of the second part.

Instant application 18/548,151, claim 19:
19.
(New) The method according to claim 18, wherein the method further comprises the following steps: providing a third input signal and a fourth value to the first part, wherein the third input signal characterizes a non-noisy signal; determining, by the first part, a second output signal for the third input signal and the fourth value; and adapting a plurality of parameters of the first part according to a deviation of the second output signal from the third input signal.

Co-pending application 18/548,135, claim 15:
15. The method according to claim 14, wherein the method further comprises the following steps: providing a third input signal and a fourth value to the first part, wherein the third input signal characterizes a non-noisy signal; determining, by the first part, a second output signal for the third input signal and the fourth value; adapting a plurality of the parameters of the first part according to a deviation of the second output signal to the third input signal.

Instant application 18/548,151, claim 20:
20. (New) The method according to claim 18, wherein the method further comprises the following steps: determining, by the first part and based on the first input signal and the first value, a fifth value characterizing a classification of the type of noise characterized by the first input signal; adapting a plurality of parameters of the first part according to a deviation of a class characterized by the fifth value and a class of noise type corresponding to the first input signal.

Co-pending application 18/548,135, claim 16:
16. The method according to claim 14, wherein the method further comprises the following steps: determining, by the first part and based on the first input signal and the first value, a fifth value characterizing a classification of the type of noise characterized by the first input signal; adapting a plurality of the parameters of the first part according to a deviation of a class characterized by the fifth value and a class of noise type corresponding to the first input signal.
Instant application 18/548,151, claim 21:
21. (New) The method according to claim 20, wherein the method further comprises the following steps: determining, by the first part and based on the third input signal and the fourth value, a fifth value characterizing a classification of the type of noise characterized by the third input signal; adapting a plurality of parameters of the first part according to a deviation of a class characterized by the fifth value and a class characterizing an absence of noise.

Co-pending application 18/548,135, claim 17:
17. (New) The method according to claim 15, wherein the method further comprises the following steps: determining, by the first part and based on the third input signal and the fourth value, a fifth value characterizing a classification of the type of noise characterized by the third input signal; adapting a plurality of the parameters of the first part according to a deviation of a class characterized by the fifth value and a class characterizing an absence of noise.

Instant application 18/548,151, claim 23:
23. (New) A computer-implemented method for determining a denoised signal from an input signal, comprising the following steps: providing a first part, wherein the first part is configured to denoise an input signal based on the input signal and a randomly drawn first value; determining a denoised signal by the first part based on the input signal and a randomly drawn first value; and providing an output signal as the denoised signal.

Instant application 18/548,151, claim 24:
24.
(New) The method according to claim 23, wherein the provided first part has been trained to denoise a provided input signal, wherein the training of the first part includes the following steps: providing a first input signal and a first value to the first part, wherein the first input signal characterizes a noisy signal and the first value characterizes a randomly drawn value; determining, by the first part, a first output signal for the first input signal and the first value; determining, by a second part, a second value based on the first output signal, wherein the second value characterizes a probability of the first output signal to characterize a noisy signal; determining, by the second part, a third value based on a supplied second input signal, wherein the second input signal characterizes a non-noisy signal and wherein the third value characterizes a probability of the second input signal to characterize a non-noisy signal; training the first part and the second part, wherein the training includes: adapting a plurality of parameters of the first part according to a gradient of the second value with respect to the plurality of parameters of the first part, and adapting a plurality of parameters of the second part according to a gradient of a sum of the second value and the third value with respect to the plurality of parameters of the second part.

Co-pending application 18/548,135, claim 19:
19.
(New) A computer-implemented method for determining a denoised signal from an input signal, comprising the following steps: providing a trained first part of a machine learning system, the first part being trained by: providing a first input signal and a first value to the first part, wherein the first input signal characterizes a noisy signal and the first value characterizes a randomly drawn value; determining, by the first part, a first output signal for the first input signal and the first value; determining, by a second part of the machine learning system, a second value based on the first output signal, wherein the second value characterizes a probability of the first output signal to characterize a noisy signal; determining, by the second part, a third value based on a supplied second input signal, wherein the second input signal characterizes a non-noisy signal and wherein the third value characterizes a probability of the second input signal to characterize a non-noisy signal; and training the machine learning system, wherein training includes: adapting a plurality of parameters of the first part according to a gradient of the second value with respect to a plurality of parameters of the first part, adapting a plurality of parameters of the second part according to a gradient of a sum of the second value and the third value with respect to the plurality of parameters of the second part; determining the input signal by the trained first part based on the input signal and a randomly drawn first value; and providing the output signal as the denoised signal.

Instant application 18/548,151, claim 25:
25. (New) The method according to claim 23, wherein the denoised signal is used as input of a control system, wherein the control system is configured to determine a control signal of an actuator based on the denoised signal.
Co-pending application 18/548,135, claim 20:
20. (New) The method according to claim 19, wherein the denoised signal is used as input of a control system, wherein the control system is configured to determine a control signal of an actuator based on the denoised signal.

Instant application 18/548,151, claim 26:
26. (New) The method according to claim 23, wherein the denoised signal is used as input to a virtual sensor for determining a property of the input signal that is not measured by the input signal itself.

Co-pending application 18/548,135, claim 21:
21. (New) The method according to claim 19, wherein the denoised signal is used as input to a virtual sensor for determining a property of the input signal that is not measured by the input signal itself.

Instant application 18/548,151, claim 27:
27. (New) The method according to claim 16, wherein the input signal is a sensor signal.

Co-pending application 18/548,135, claim 22:
22. (New) The method according to claim 19, wherein the first input signal and/or the second input signal and/or the third input signal and/or the input signal are sensor signals.

Instant application 18/548,151, claim 28:
28. (New) A training system, configured to train a first part to denoise a provided input signal, wherein the training system is configured to: provide a first input signal and a first value to the first part, wherein the first input signal characterizes a noisy signal and the first value characterizes a randomly drawn value; determine, by the first part, a first output signal for the first input signal and the first value; determine, by a second part, a second value based on the first output signal, wherein the second value characterizes a probability of the first output signal to characterize a noisy signal; determine, by the second part, a third value based on a supplied second input signal, wherein the second input signal characterizes a non-noisy signal and wherein the third value characterizes a probability of the second input signal to characterize a non-noisy signal; train the first part and the second part, wherein the training includes: adapting a plurality of parameters of the first part according to a gradient of the second value with respect to the plurality of parameters of
the first part, and adapting a plurality of parameters of the second part according to a gradient of a sum of the second value and the third value with respect to the plurality of parameters of the second part.

Co-pending application 18/548,135, claim 23:
23. (New) A training system configured to train a machine learning system to denoise a provided input signal, the training system configured to: provide a first input signal and a first value to a first part of the machine learning system, wherein the first input signal characterizes a noisy signal and the first value characterizes a randomly drawn value; determine, by the first part, a first output signal for the first input signal and the first value; determine, by a second part of the machine learning system, a second value based on the first output signal, wherein the second value characterizes a probability of the first output signal to characterize a noisy signal; determine, by the second part, a third value based on a supplied second input signal, wherein the second input signal characterizes a non-noisy signal and wherein the third value characterizes a probability of the second input signal to characterize a non-noisy signal; and train the machine learning system, wherein training includes: adapting a plurality of parameters of the first part according to a gradient of the second value with respect to a plurality of parameters of the first part, adapting a plurality of parameters of the second part according to a gradient of a sum of the second value and the third value with respect to the plurality of parameters of the second part.

Instant application 18/548,151, claim 29:
29.
(New) A non-transitory machine-readable storage medium on which is stored a computer program for determining a classification and/or regression result based on a provided input signal, the computer program, when executed by a computer, causing the computer to perform the following steps: providing a first part, wherein the first part is configured to denoise the provided input signal based on the input signal and a randomly drawn first value; randomly drawing a plurality of first values; determining, by the first part, a plurality of denoised signals, wherein each denoised signal from the plurality of denoised signals is determined based on the input signal and a first value from the plurality of first values; determining, by a model, a plurality of predicted values based on the denoised signals, wherein each predicted value characterizes a classification of a denoised signal or a regression result based on a denoised signal; and providing an aggregated signal characterizing an aggregation of the predicted values, wherein the aggregated signal characterizes the classification and/or regression result.

Co-pending application 18/548,135, claim 24:
24.
(New) A non-transitory machine-readable storage medium on which is stored a computer program for training a machine learning system to denoise a provided input signal, the computer program, when executed by a processor, causing the processor to perform the following steps: providing a first input signal and a first value to a first part of the machine learning system, wherein the first input signal characterizes a noisy signal and the first value characterizes a randomly drawn value; determining, by the first part, a first output signal for the first input signal and the first value; determining, by a second part of the machine learning system, a second value based on the first output signal, wherein the second value characterizes a probability of the first output signal to characterize a noisy signal; determining, by the second part, a third value based on a supplied second input signal, wherein the second input signal characterizes a non-noisy signal and wherein the third value characterizes a probability of the second input signal to characterize a non-noisy signal; and training the machine learning system, wherein training includes: adapting a plurality of parameters of the first part according to a gradient of the second value with respect to a plurality of parameters of the first part, adapting a plurality of parameters of the second part according to a gradient of a sum of the second value and the third value with respect to the plurality of parameters of the second part.

Instant application 18/548,151, claim 16:
16.
(New) A computer-implemented method for determining a classification and/or regression result based on a provided input signal, the method comprising the following steps: providing a first part, wherein the first part is configured to denoise the provided input signal based on the input signal and a randomly drawn first value; randomly drawing a plurality of first values; determining, by the first part, a plurality of denoised signals, wherein each denoised signal from the plurality of denoised signals is determined based on the input signal and a first value from the plurality of first values; determining, by a model, a plurality of predicted values based on the denoised signals, wherein each predicted value characterizes a classification of a denoised signal or a regression result based on a denoised signal; and providing an aggregated signal characterizing an aggregation of the predicted values, wherein the aggregated signal characterizes the classification and/or regression result determined by the method.

Instant application 18/548,151, claim 17:
17. (New) The method according to claim 16, wherein a third value is provided by the method, wherein the third value characterizes a variance of the predicted values.

Co-pending application 18/548,135, claim 14:
14.
(New) The computer-implemented method for training a machine learning system to denoise a provided input signal, the training of the machine learning system comprising the following steps: providing a first input signal and a first value to a first part of the machine learning system, wherein the first input signal characterizes a noisy signal and the first value characterizes a randomly drawn value; determining, by the first part, a first output signal for the first input signal and the first value; determining, by a second part of the machine learning system, a second value based on the first output signal, wherein the second value characterizes a probability of the first output signal to characterize a noisy signal; determining, by the second part, a third value based on a supplied second input signal, wherein the second input signal characterizes a non-noisy signal and wherein the third value characterizes a probability of the second input signal to characterize a non-noisy signal; and training the machine learning system, wherein training includes: adapting a plurality of parameters of the first part according to a gradient of the second value with respect to a plurality of parameters of the first part, adapting a plurality of parameters of the second part according to a gradient of a sum of the second value and the third value with respect to the plurality of parameters of the second part.

Instant application 18/548,151, claim 23:
23. (New) A computer-implemented method for determining a denoised signal from an input signal, comprising the following steps: providing a first part, wherein the first part is configured to denoise an input signal based on the input signal and a randomly drawn first value; determining a denoised signal by the first part based on the input signal and a randomly drawn first value; and providing an output signal as the denoised signal.

Co-pending application 18/548,135, claim 14:
14.
(New) The computer-implemented method for training a machine learning system to denoise a provided input signal, the training of the machine learning system comprising the following steps: providing a first input signal and a first value to a first part of the machine learning system, wherein the first input signal characterizes a noisy signal and the first value characterizes a randomly drawn value; determining, by the first part, a first output signal for the first input signal and the first value; determining, by a second part of the machine learning system, a second value based on the first output signal, wherein the second value characterizes a probability of the first output signal to characterize a noisy signal; determining, by the second part, a third value based on a supplied second input signal, wherein the second input signal characterizes a non-noisy signal and wherein the third value characterizes a probability of the second input signal to characterize a non-noisy signal; and training the machine learning system, wherein training includes: adapting a plurality of parameters of the first part according to a gradient of the second value with respect to a plurality of parameters of the first part, adapting a plurality of parameters of the second part according to a gradient of a sum of the second value and the third value with respect to the plurality of parameters of the second part.

This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 23 and 25-26 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by WO 2020/128134 A1 (hereinafter Honkala).

Regarding claim 23, Honkala teaches a computer-implemented method for determining a denoised signal from an input signal (abstract and page 1, lines 16-26: "the computing device may denoise, using a first neural network comprising a first plurality of parameters, the first set of noisy data samples to generate a set of denoised data samples. The computing device may process, using a noise model, the set of denoised data samples to generate a third set of noisy data samples. ... After training, the computing device may denoise, using the trained first neural network, the noisy data sample to generate a denoised data sample"), comprising the following steps: providing a first part, wherein the first part is configured to denoise an input signal based on the input signal and a randomly drawn first value (figs. 3B-4 and page 8, lines 25-30: "the random vector z 317 may also be input into the decoder layer 315N ... The random vector z 317 may allow the denoising model 301 to generate one or more possible output data samples corresponding to an input data sample" ... Honkala teaches a denoising model (denoising model 301) that is configured to accept a random latent vector z together with an input data sample and to produce denoised outputs.
This corresponds to the claimed "first part" configured to denoise based on the input signal and a randomly drawn first value); determining a denoised signal by the first part based on the input signal and a randomly drawn first value (page 1, lines 16-26: "the computing device may denoise, using a first neural network comprising a first plurality of parameters, the first set of noisy data samples to generate a set of denoised data samples. The computing device may process, using a noise model, the set of denoised data samples to generate a third set of noisy data samples. ... After training, the computing device may denoise, using the trained first neural network, the noisy data sample to generate a denoised data sample" ... Honkala explicitly describes using the trained neural network to denoise a noisy input to produce a denoised sample and explains that a random vector z is input to allow generation of one or more possible outputs for a given input); and providing an output signal as the denoised signal (output layer 319 of fig. 3B outputting a denoised data sample ... description of fig. 6B: "step 635, the trained denoising model may be used to process further noisy data samples (e.g., measured by sensors) to generate denoised data samples." ... Honkala teaches that the denoised data sample produced by the denoising model may be presented or sent for further processing, i.e., the denoised sample is provided as an output signal).

Regarding claim 25, Honkala in view of Mahto teaches the method according to claim 23. Honkala teaches wherein the denoised signal is used as input of a control system, wherein the control system is configured to determine a control signal of an actuator based on the denoised signal (fig. 3B description: a denoising model that generates denoised outputs from a noisy input using a randomly drawn latent variable and expressly contemplates using the denoised output for further processing.
In particular, Honkala discloses that "the random vector z 317 may also be input into the decoder layer 315N ... The random vector z 317 may allow the denoising model 301 to generate one or more possible output data samples corresponding to an input data sample" and, after training, "the computing device may denoise, using the trained first neural network, the noisy data sample to generate a denoised data sample. The computing device may present to a user, or send for further processing, the denoised data sample").

Regarding claim 26, Honkala in view of Mahto teaches the method according to claim 23. Honkala teaches wherein the denoised signal is used as input to a virtual sensor for determining a property of the input signal that is not measured by the input signal itself (figs. 6A-6B, step 635: generating a denoised signal and explicitly using that denoised signal as input to further downstream processing that infers properties not directly measured by the raw signal. Honkala states that after training "the computing device may denoise, using the trained first neural network, the noisy data sample to generate a denoised data sample. The computing device may present to a user, or send for further processing, the denoised data sample" and that "the further processing of the denoised data samples may comprise, for example, image recognition, object recognition, natural language processing, speech recognition, speech-to-text detection, heart rate monitoring, detection of physiological attributes, monitoring of physical features, location detection, etc.").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 16-22, 24, and 27-29 are rejected under 35 U.S.C. 103 as being unpatentable over WO 2020/128134 A1 (hereinafter Honkala) in view of WO 2020/003533 A1 (hereinafter Mahto).

Regarding claims 16 and 29, Honkala teaches a computer-implemented method and non-transitory machine-readable storage medium (page 33, line 8: non-transitory machine-readable storage medium) on which is stored a computer program for determining a classification and/or regression result based on a provided input signal (see fig. 4, abstract, and page 10, lines 18-30: data denoising based on machine learning ... figs. 3-4 and 9 ... a schematic diagram showing an example process for training a denoising model with noisy data samples by using the GAN process. For example, the process may be used for training a denoising model based on only noisy data samples (e.g., clean data samples are not necessary).
… The discriminator may make determinations to classify input data samples. The denoising model and/or the discriminator may be trained based on the determinations), the method comprising the following steps: providing a first part (page 9, line 29: Generative Adversarial Networks (GAN) or denoising model 301), wherein the first part is configured to denoise the provided input signal based on the input signal (noisy input/data sample) and a randomly drawn first value (random vector z) [see fig. 4 and page 8, lines 25-30: the random vector z 317 may comprise a set of one or more random values. As one example, the random vector z 317 may comprise a vector (0.21, 0.87, 0.25, 0.67, 0.58), the values of which may be determined randomly, for example, by sampling each component independently from a uniform or Gaussian distribution. The random vector z 317 may allow the denoising model 301 to generate one or more possible output data samples corresponding to an input data sample (e.g., by configuring different value sets for the random vector z 317), and thus may allow the denoising model 301 to model the whole probability distribution]; randomly drawing a plurality of first values (page 18, first paragraph: … multi-sample generation from the random vector … the computing device may configure a ML network for training the denoising model 301. For example, the computing device may use, as the denoising model training network, the example process as discussed in connection with FIG. 4. In step 621, the computing device may determine, from the plurality of noisy data samples received in step 601, a first set of noisy data samples and a second set of noisy data samples. For example, the first set of the noisy data samples and the second set of the noisy data samples may be selected randomly (or shuffled) as subsets of the plurality of the noisy data samples: denoising model takes random vector z as a second input … Fig.
3B and text: sampling different z values to produce multiple plausible denoised outputs …); determining, by the first part, a plurality of denoised signals, wherein each denoised signal from the plurality of denoised signals is determined based on the input signal and a first value from the plurality of first values (see figs. 3B & 4, page 18, last paragraph: … the computing device may determine, from the plurality of noisy data samples received in step 601, a first set of noisy data samples and a second set of noisy data samples. For example, the first set of the noisy data samples and the second set of the noisy data samples may be selected randomly (or shuffled) as subsets of the plurality of the noisy data samples (e.g., following the stochastic gradient descent training method). Additionally or alternatively, each of the first set of noisy data samples and the second set of noisy data samples may include all of the plurality of noisy data samples received in step 601 (e.g., following the standard gradient descent training method). Each of the first set of the noisy data samples and the second set of the noisy data samples may comprise one or more noisy data samples. The first set of the noisy data samples may have same members as, or different members from, the second set of the noisy data samples. For example, the plurality of noisy data samples received in step 601 may comprise N data samples. Each of the first set of noisy data samples and the second set of noisy data samples may comprise one (1) data sample from the plurality of noisy data samples (e.g., following the stochastic gradient descent approach). … each denoised signal is determined based on the noisy input and random vector z … The random vector z 317 … may allow the denoising model 301 to generate one or more possible output data samples corresponding to an input data sample.
); and providing an aggregated signal characterizing an aggregation of the predicted values, wherein the aggregated signal characterizes the classification and/or regression result determined by the method (Fig. 9 and description: the process may be used to train a noise model for signal-dependent noise (e.g., noise in X-ray medical images). The process may use a noise generator 901, a modulation function 903, an environment and/or sensor 905, and a discriminator 907. The discriminator 907 may comprise, for example, an artificial neural network (ANN), a multilayer perceptron (e.g., the neural network 100), a convolutional neural network (e.g., the neural network 200), a recurrent neural network, a deep neural network, or any other type of neural network (e.g., similar to the discriminator 405), and may learn to classify input data as measured noisy data samples or generated noisy data samples. The modulation function 903 may be configured to introduce noise to data samples by modulating the data samples … Fig. 7 … Multiple output samples … can be evaluated and aggregated for improved classification accuracy). Honkala fails to explicitly teach, however, Mahto teaches: determining, by a model, a plurality of predicted values based on the denoised signals, wherein each predicted value characterizes a classification of a denoised signal or a regression result based on a denoised signal (Mahto par. 61: The discriminator 102 has one neural network (NN). The discriminator 102 reads the denoised feature vectors (A’) and the corresponding clean features (A) and then predicts the probability of each being the originally clean feature vector and estimates the class of each input feature. Also, PLDA similarity is computed between the denoised feature vector (A’) and the clean feature vector (A) as a measure to find class-oriented similarity between the two features.
Both generator and discriminator neural networks (NNs) are trained alternately to optimize generator and discriminator losses such that after the training the discriminator can correctly classify denoised features into their classes and cannot distinguish between original clean features (A) and denoised features (A’). … see further Figs. 2 & 11 and their description: … Mahto teaches passing generator/denoised features to a classifier (discriminator/classifier) and computing classification outputs and classification loss (see Mahto: discriminator/classification branches; generator-classification loss … In Mahto, the architecture is built to produce features used for classification/regression). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Honkala’s explicit use of random latent draws and multiple denoised realizations with Mahto’s classifier/PLDA/class-aware training losses to obtain the claimed method steps (drawing multiple latent values, computing multiple denoised signals, producing model predictions per denoised signal, aggregating predictions, and producing a variance/uncertainty measure), and to train the denoiser as recited in claims 18-22. Regarding claim 28, Honkala teaches a training system and the method, configured to train a first part to denoise a provided input signal (fig. 3B: to optimize the output of the denoising model 301 (e.g., to improve the performance of its denoising function), the denoising model 301 may be trained based on one or more pairs of noisy data samples and corresponding clean data samples (e.g., using a supervised learning method). The clean data samples may be, for example, data samples, obtained using sensor devices, with an acceptable level of quality (e.g., signal-to-noise ratio satisfying a threshold).
This may result in the system’s dependence on the ability to obtain clean data samples (e.g., using sensor devices)), wherein the training system is configured to: provide a first input signal and a first value to the first part, wherein the first input signal characterizes a noisy signal and the first value characterizes a randomly drawn value (figs. 3B-4 and page 8, lines 25-30: The random vector z 317 may also be input into the decoder layer 315N… The random vector z 317 may allow the denoising model 301 to generate one or more possible output data samples corresponding to an input data sample); determine, by the first part, a first output signal for the first input signal and the first value (fig. 4 and description: denoising model 301 determining inputs from noisy data samples and random vector z and determining and denoising … After training, the computing device may denoise, using the trained first neural network, the noisy data sample to generate a denoised data sample); determine, by a second part, a second value based on the first output signal, wherein the second value characterizes a probability of the first output signal to characterize a noisy signal (output layer 319 of fig. 3B outputting a denoised data sample … description of fig. 6B: step 635, the trained denoising model may be used to process further noisy data samples (e.g., measured by sensors) to generate denoised data samples; … page 26, last par. & fig. 7: The discrimination value may be determined based on the input data sample itself, and may indicate probabilities (and/or scalar quality values) that the input data sample belongs to measured noise or generated noise.
The discrimination value may be compared with the ground truth and/or the target of the noise generator 701 (e.g., to “fool” the discriminator 703 so that the discriminator 703 may treat generated noise data samples as measured noise), and the weights and/or other parameters of the discriminator 703 and/or the noise generator 701 may be adjusted in a similar manner as discussed in connection with training the denoising model 301 (e.g., in step 631). … fig. 4, element 405: discriminator/second part); determine, by the second part, a third value based on a supplied second input signal, wherein the second input signal characterizes a non-noisy signal and wherein the third value characterizes a probability of the second input signal to characterize a non-noisy signal (abstract and page 9, lines 20-27: An apparatus for pattern recognition includes a generator which transforms noisy feature vectors into denoised feature vectors, a discriminator which takes the denoised feature vectors and the original clean feature vectors corresponding to the denoised feature vectors as input and predicts probability for both of the input features of being an original clean feature, classifies the input feature vectors into its corresponding classes … to optimize the output of the denoising model 301 (e.g., to improve the performance of its denoising function), the denoising model 301 may be trained based on one or more pairs of noisy data samples and corresponding clean data samples (e.g., using a supervised learning method). The clean data samples may be, for example, data samples, obtained using sensor devices, with an acceptable level of quality (e.g., signal-to-noise ratio satisfying a threshold). This may result in the system’s dependence on the ability to obtain clean data samples (e.g., using sensor devices).
… discriminator 405 determining a third value (see arrows as an input to the discriminator 405)); Honkala teaches the first part (denoising model 301) and the second part (discriminator 405) as shown in fig. 4 and the description of fig. 3B. Honkala fails to explicitly teach, however, Mahto teaches: the first part and second part (Mahto par. 60: As shown in FIG. 3, the generator 101 has two neural networks (NNs): Encoder (Genc) and Decoder (Gdec). In the training stage, Genc reads noisy features as input, encodes them into class-dependent features (f), then Gdec reads the encoded features (f) and a random noise vector (N) and produces a denoised feature vector (A’) at output … Fig. 3 description (see Mahto ¶[0055]–¶[0056]): Mahto identifies a generator neural network (generator 101) that (i) accepts noisy input features, (ii) uses an encoder/decoder architecture, and (iii) accepts a random noise vector (N) at the decoder to produce a denoised output A’, teaching the claimed first AI model that denoises an input signal based on the input and a randomly drawn value … In the training phase, the generator 101 reads noisy features (y) and estimates denoised features (z). Then, the discriminator 102/second AI model reads denoised features (z) and predicts the probability of it being an originally clean feature vector (D_r(z)) and also estimates its class label (D_d(z)). Then, the discriminator 102 reads original clean features (x) and predicts the probability of it being an originally clean feature vector (D_r(x)) and also estimates its class label (D_d(x)) … The discriminator 102 model takes the denoised feature vectors and the original clean feature vectors corresponding to the denoised feature vectors as input. The discriminator predicts probability for both of the input features for being an original clean feature. The discriminator classifies the input feature vectors into its corresponding classes. … pars.
63-64: the generator 101 transforms noisy feature vectors into denoised feature vectors … par. 64: discriminator/classifier that (i) determines a probability (second value) based on the first output signal (probability the first output is noisy/clean), (ii) determines a probability (third value) based on a supplied clean (non-noisy) second input, and (iii) supplies objective values/losses used to train both models …); train the first part and the second part, wherein the training (see above fig. 3: generator and discriminator models) includes: adapting a plurality of parameters of the first part according to a gradient of the second value with respect to the plurality of parameters of the first part (FIGS. 2-4 training loop description … Objective function calculator 103 reads outputs of discriminator (D_r(x), D_r(z)) and (D_d(x), D_d(z)) and the ground truth class labels (l) of the input feature vectors and calculates discriminator loss 1032. … Then the parameter updater 104 updates the parameters of the generator to optimize the objective function … par. 72: One embodiment of the present invention includes a loss function characterized by the formula L_G,id = E_{x^(3)}[‖x^(3) − G(x^(3), z=0)‖_p] + E_{x^(3), z1, z2}[‖G(x^(3), z=z1) − G(x^(3), z=z2)‖_p] …), and adapting a plurality of parameters of the second part according to a gradient of a sum of the second value and the third value with respect to the plurality of parameters of the second part (FIGS. 2-4 training loop description: The discriminator 102 reads denoised features (z) and predicts the probability of it being an originally clean feature vector (D_r(z)) and also estimates its class label (D_d(z)). Then, the discriminator 102 reads original clean features (x) and predicts the probability of it being an originally clean feature vector (D_r(x)) and also estimates its class label (D_d(x)).
Objective function calculator 103 reads outputs of discriminator (D_r(x), D_r(z)) and (D_d(x), D_d(z)) and the ground truth class labels (l) of the input feature vectors and calculates discriminator loss 1032. The parameter updater 104 updates the parameters of the discriminator to optimize the objective function … The GAN-based loss is calculated with the output probability predicted by the discriminator for the input feature vector to be an originally clean feature vector). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the two AI model architectures to make denoised outputs indistinguishable from real clean features (adversarial/GAN loss), preserve and enhance class discriminability (classification loss + PLDA class-oriented loss), and enforce distributional and robustness properties (KL regularizer + random noise injection), with the discriminator providing the real/clean and class feedback necessary for effective generator training. Regarding method claims 18 and 24: claims 18 and 24 recite similar limitations as claim 28 and are rejected based on the same rationale as claim 28. Regarding claim 17, Honkala in view of Mahto teaches the method according to claim 16; Honkala further teaches wherein a third value is provided by the method, wherein the third value characterizes a variance of the predicted values (fig. 6, step 613: The computing device may use suitable techniques used for GAN training (e.g., backpropagation, stochastic gradient descent (SGD), etc.) to train the noise model. More details regarding training various types of noise models are further discussed in connection with FIGS. 7-9. If the noise type of the plurality of noisy data samples is not determined (step 607: N), the method may proceed to step 615.
For example, the noise type of the plurality of noisy data samples might not be determined if there is no information (e.g., no record in the database) indicating the noise type corresponding to the data sample type and/or the sensor type of the plurality of noisy data samples. In step 615, the computing device may train one or more noise models corresponding to one or more types of noise … multiple draws, reporting a variance/uncertainty across the multiple predictions using the stochastic latent z, is discussed in Honkala … probabilistic modeling via latent sampling). Regarding claim 19, Honkala in view of Mahto teaches the method according to claim 18, Honkala further …
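The claim 16/17 mapping above rests on a simple mechanism: draw several random latent values z, have the generator produce one denoised signal per draw, run a predictor on each denoised signal, then aggregate the predictions, with the spread across draws serving as the claimed variance/uncertainty value. A minimal sketch of that flow, where `denoise` and `predict` are hypothetical toy stand-ins (not Honkala's or Mahto's networks):

```python
import random
import statistics

def denoise(x, z):
    # Hypothetical stand-in for the claimed "first part" (generator):
    # a denoiser conditioned on the noisy input x and a latent scalar z.
    return [xi - 0.1 * z for xi in x]

def predict(signal):
    # Hypothetical stand-in for the downstream model: a scalar
    # regression result (here, simply the signal mean).
    return sum(signal) / len(signal)

def predict_with_uncertainty(x, n_draws=8, seed=0):
    rng = random.Random(seed)
    # Randomly draw a plurality of first values (the random vector z)...
    zs = [rng.gauss(0.0, 1.0) for _ in range(n_draws)]
    # ...determine one denoised signal per draw...
    denoised = [denoise(x, z) for z in zs]
    # ...determine one predicted value per denoised signal...
    preds = [predict(d) for d in denoised]
    # ...and aggregate: mean as the result, variance as the
    # uncertainty measure discussed for claim 17.
    return statistics.mean(preds), statistics.variance(preds)

noisy = [0.9, 1.1, 1.0, 0.8]
result, uncertainty = predict_with_uncertainty(noisy)
```

The variance across draws is only meaningful because the same noisy input is pushed through the generator with different latent values; with a deterministic denoiser every draw would collapse to one prediction.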
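The loss formula quoted from Mahto par. 72 combines an identity term (with z = 0 the generator should roughly reproduce its input) and a term over pairs of latent draws that, when minimized, pulls outputs for different z toward each other. A rough Monte Carlo rendering, with a hypothetical one-line generator `g` standing in for the network:

```python
import random

def g(x, z):
    # Hypothetical stand-in for the generator G(x, z): any function of
    # the input vector x and latent scalar z works for illustration.
    return [0.9 * xi + 0.05 * z for xi in x]

def p_norm(a, b, p=2):
    # ||a - b||_p
    return sum(abs(ai - bi) ** p for ai, bi in zip(a, b)) ** (1.0 / p)

def loss_g_id(samples, n_z=16, seed=0):
    """Monte Carlo estimate of
    L_G,id = E_x[||x - G(x, z=0)||_p]
           + E_{x,z1,z2}[||G(x, z=z1) - G(x, z=z2)||_p].
    First term: anchor the z=0 output to the input.
    Second term: penalize disagreement between latent draws."""
    rng = random.Random(seed)
    identity = sum(p_norm(x, g(x, 0.0)) for x in samples) / len(samples)
    pair_term = sum(
        p_norm(g(x, rng.gauss(0, 1)), g(x, rng.gauss(0, 1)))
        for x in samples for _ in range(n_z)
    ) / (len(samples) * n_z)
    return identity + pair_term

loss = loss_g_id([[1.0, 2.0], [0.5, -0.5]])
```

This is only a sketch of the formula's structure under the assumptions noted above; Mahto trains on neural-network feature vectors, and the exponent p and the distribution of z are whatever the reference specifies.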

Prosecution Timeline

Aug 28, 2023
Application Filed
Mar 09, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 7983414
PROTECTED CRYPTOGRAPHIC CALCULATION
2y 5m to grant Granted Jul 19, 2011
Patent 7984512
INTEGRATING SECURITY BY OBSCURITY WITH ACCESS CONTROL LISTS
2y 5m to grant Granted Jul 19, 2011
Patent 7965844
SYSTEM AND METHOD FOR PROCESSING USER DATA IN AN ENCRYPTION PIPELINE
2y 5m to grant Granted Jun 21, 2011
Patent 7954164
METHOD OF COPY DETECTION AND PROTECTION USING NON-STANDARD TOC ENTRIES
2y 5m to grant Granted May 31, 2011
Patent 7954156
METHOD TO ENHANCE PLATFORM FIRMWARE SECURITY FOR LOGICAL PARTITION DATA PROCESSING SYSTEMS BY DYNAMIC RESTRICTION OF AVAILABLE EXTERNAL INTERFACES
2y 5m to grant Granted May 31, 2011
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
37%
Grant Probability
73%
With Interview (+35.5%)
5y 1m
Median Time to Grant
Low
PTA Risk
Based on 132 resolved cases by this examiner. Grant probability derived from career allow rate.
