DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed 30 January 2025 with respect to independent claims 1, 13, and 25 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. See the application of the Ho reference below.
Furthermore, Applicant’s amendments filed 30 January 2025 necessitated new grounds of rejection. It is noted that, instead of adding the allowable limitations from claim 6 specifying the diffusion timestep values to independent claim 1, Applicant has chosen to amend claim 1 to include the broad definition of diffusion timesteps taken from the introductory portion of claim 6. Such a combination of features has not been recited in past pending claims, thus necessitating new grounds of rejection, which are also necessitated by the other amendments made to the independent claims.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 2, 5, 7, 8, 11, 13, 14, 17, 25, 26, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Ho {Ho, Jonathan, Ajay Jain, and Pieter Abbeel. "Denoising diffusion probabilistic models." Advances in Neural Information Processing Systems 33 (2020): 6840-6851} and Yoon {Yoon, Jongmin, Sung Ju Hwang, and Juho Lee. "Adversarial purification with score-based generative models." Proceedings of the 38th International Conference on Machine Learning, PMLR 139:12062-12072, 2021}.
Claim 1
Regarding claim 1, Ho discloses a processor, comprising: one or more circuits to {see Section B Experimental Details, describing implementations including neural network architecture details, training, and test results, in which the circuits for such implementations and test results are inherent}
receive a first image of an object, the first image including perturbations introduced to the first image prior to being received
{See abstract, Section 1 Introduction, Section 2 Background including input/received image data x0, and section 3.3 discussing the image data being input for denoising to consist of integer values.
Note that, at this stage of the claim, the classification is mere intended use and is not effectuated until after the second image is generated; in other words, the claim does not require (and the specification does not support) receiving the first (input) image at the classifier, but instead receiving the first (input) image at the diffusion neural network to add/remove noise and generate the second (denoised) image, where the output of this diffusion neural network is the second image that is supplied to a separate classifier. Moreover, Ho’s purpose is to remove noise by applying/using a diffusion neural network such that Ho receives a first image that contains noise (e.g., perturbations introduced to the first image prior to being received) and outputs a denoised, high-quality image, as is evident from the title, abstract, Section 3.4 Simplified Training Objective, which is to “train the network to denoise data,” and Section 6 Conclusion};
using a diffusion neural network to add noise to the first image over a number of forward iterations, the number of forward iterations being determined based on a selected diffusion timestep; and generate a second image of the object based, at least in part, upon using the diffusion neural network to remove the noise from the first image of the object,
{Ho teaches that a generative neural network may advantageously use a diffusion model (diffusion neural network as claimed). Moreover, this diffusion neural network both adds noise to the first image and removes noise to generate a second image. See Sections 2 and 3, including the forward (diffusion) process to add noise over a number of forward iterations and the reverse iterative process to remove the noise from the first image to generate the second (denoised) image.
Furthermore, “the number of forward iterations being determined based on a selected diffusion timestep” merely indicates a truism or axiom of diffusion neural networks. Indeed, Applicant defines “diffusion timestep” quite broadly as follows: “a selected diffusion timestep, which represents an amount of noise added during forward process 212” as set forth in [0060] of the instant published application. Furthermore, the number of forward iterations of the forward process 212 is related to and otherwise “based on” the amount of noise added during that forward process. In other words, to achieve a desired amount of noise the forward process must iterate N times. Note that the claim does not specify the number of iterations or the specific amount of noise selected for the “diffusion timestep” but instead recites an axiom of diffusion neural networks that iteratively add noise (an unspecified amount of times) until an amount of (unspecified) noise is achieved. Ho also clearly employs the standard diffusion timestep as discussed in Section 2 including disclosure that the timestep t may be arbitrarily chosen/selected thus further meeting the broad definitional language added to claim 1 regarding the “diffusion timestep”.},
wherein removing the noise causes the perturbations introduced to the first image prior to the receipt of the image to be at least partially removed with the noise
{the claim language merely recites the effect of applying Ho’s denoising using a diffusion model which will, at least partially, remove noise including perturbations (e.g. noise) introduced prior to the receipt of the image}; and
Yoon is an analogous reference because it is from the same field of generative neural networks and purification (noise removal/denoising). See abstract, Sections 1, 2 and cites below.
Yoon also teaches
receive a first image of an object to be classified via a classifier neural network, the first image including perturbations introduced to the first image prior to being received
{section 3.1 inputs, fig. 1. See also the mapping for the classify step below. Further, as to the perturbations introduced to the first image prior to being received, see the attacked images which are received by Yoon and purified/denoised; Yoon is robust against various attacks that purposely introduce perturbations and denoises such attacked images to make the adversarial perturbations negligible, as discussed in the abstract and Sections 1 and 2};
generate a second image of the object based, at least in part, upon using the neural network to remove the noise from the first image of the object, wherein removing the noise causes perturbations introduced to the first image prior to the receipt of the image to be at least partially removed with the noise
{Section 3, fig. 1, including purification/denoising of the image that purifies/denoises/removes noise from the first image using a generative model. Note also that the wherein clause merely recites the effect of applying denoising, which will, at least partially, remove noise including perturbations (e.g., other noise) introduced prior to the receipt of the image, such as adversarial perturbations. In addition, Yoon’s neural network denoises/purifies the image such that noise, including perturbations introduced to the first image prior to the purification stage, is at least partially removed with the injected noise}; and
classify the object in the second image via a classifier neural network
{see abstract, Section 1, in which the denoised/purified output from the neural network is sent to a classifier, in which the purification defends against adversarial attacks and improves the classification accuracy, and in which a “goal is to remove any adversarial noise from potentially attacked images into clean images so that they could be correctly classified when fed to the classifier”; see also Sections 2.1, 3.2, 3.3, 3.4, and Algorithm 1}.
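As an illustrative aid only (not part of the formal claim mapping), the purify-then-classify pipeline that Yoon describes can be sketched as follows; the denoiser and classifier below are the examiner's hypothetical stand-ins, not Yoon's actual models:

```python
import numpy as np

def purify(x, denoiser, steps=10):
    """Purification stage: iteratively denoise a possibly attacked
    input image (cf. Yoon, Section 3, fig. 1)."""
    for _ in range(steps):
        x = denoiser(x)
    return x

def classify(x, classifier):
    """Downstream classifier stage fed with the purified image
    (cf. Yoon, abstract and Section 3.3)."""
    return classifier(x)

# Hypothetical stand-ins: a shrink-toward-zero "denoiser" and a
# threshold "classifier" that returns 1 when the image looks clean.
denoiser = lambda x: 0.5 * x
classifier = lambda x: int(np.abs(x).mean() < 0.1)

attacked = np.full((4, 4), 1.0)   # image with adversarial perturbations
label = classify(purify(attacked, denoiser), classifier)  # → 1 ("clean")
```

The sketch only illustrates the ordering of the stages (purification before classification) that the combination of Ho and Yoon relies upon.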
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Ho’s diffusion neural network, which is specifically designed for denoising and which both adds noise to the first image and generates a second image of the object by removing the noise from the first image of the object, wherein removing the noise causes perturbations introduced to the first image prior to the addition of the noise to be at least partially removed with the noise, such that the denoised output is fed into a classifier neural network as taught by Yoon, because Yoon motivates (section 3.3) such denoising prior to classification to improve classification accuracy.
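As further illustrative background only for the forward-process mapping above, the standard DDPM forward process of Ho, Section 2 can be sketched as follows, in which the selected diffusion timestep t determines the number of forward iterations; the variable names and schedule values are the examiner's illustrative choices:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng=None):
    """DDPM forward process (cf. Ho, Section 2): iteratively add
    Gaussian noise to x0, where the selected timestep t sets the
    number of forward iterations."""
    rng = np.random.default_rng(rng)
    x = x0
    for step in range(t):  # number of forward iterations = selected t
        beta = betas[step]
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x

# Linear beta schedule with T = 1000 total steps (cf. Ho, Section 4)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
x0 = np.zeros((8, 8))                 # stand-in for an input image x_0
xt = forward_diffuse(x0, t=250, betas=betas, rng=0)
```

The sketch makes concrete the point in the mapping above: selecting a larger timestep t directly yields more forward noising iterations.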
Claim 2
Regarding claim 2, Ho discloses wherein the first image has a probability of including pixel data modified to include one or more adversarial perturbations
{this claim element could be interpreted to read on any image because the “probability” may be zero such that Ho reads on this claim element}.
Under an alternative interpretation, in which this limitation is viewed as a field of use in which the input (first image) is an attack image, as known in the art, having adversarial perturbation(s) intended to fool a classifier into mis-detection, see Yoon discussing purifying attack images before they are fed into a classifier. See also the Introduction discussing making classifiers robust to adversarial attacks using randomized purification, and section 3 discussing adversarial perturbations.
Under this alternative interpretation, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Ho’s diffusion neural network, which is specifically designed for denoising and which both adds noise to the first image and generates a second image of the object by removing the noise from the first image of the object, wherein removing the noise causes perturbations introduced to the first image prior to the addition of the noise to be at least partially removed with the noise, such that the denoised output is fed into a classifier neural network as taught by Yoon, and wherein the first image has a probability of including pixel data modified to include one or more adversarial perturbations as also taught by Yoon, because Yoon motivates (section 3.3) such denoising prior to classification to improve classification accuracy.
Claim 5
Regarding claim 5, Ho discloses wherein the one or more circuits are further to use the diffusion neural network to remove the noise over a number of reverse iterations performed by the diffusion neural network {see the above citations for claim 1, including section 3.2, in which the noise removal (denoising) reverse process is performed using a diffusion neural network}.
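For illustration only, the reverse (denoising) iterations cited above correspond to Algorithm 2 of Ho and can be sketched as follows; predict_noise is a hypothetical stand-in for Ho's trained epsilon-prediction network:

```python
import numpy as np

def reverse_denoise(xt, t, predict_noise, alphas, alpha_bars, betas, rng=None):
    """DDPM reverse (denoising) process over t reverse iterations
    (cf. Ho, Section 3.2 and Algorithm 2)."""
    rng = np.random.default_rng(rng)
    x = xt
    for step in reversed(range(t)):   # number of reverse iterations
        eps = predict_noise(x, step)  # predicted noise at this step
        coef = betas[step] / np.sqrt(1.0 - alpha_bars[step])
        x = (x - coef * eps) / np.sqrt(alphas[step])
        if step > 0:                  # no noise is added at the final step
            x = x + np.sqrt(betas[step]) * rng.standard_normal(x.shape)
    return x

# Hypothetical usage with a zero "network" merely to exercise the loop
betas = np.linspace(1e-4, 0.02, 10)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)
out = reverse_denoise(np.ones((4, 4)), t=10,
                      predict_noise=lambda x, s: np.zeros_like(x),
                      alphas=alphas, alpha_bars=alpha_bars, betas=betas, rng=0)
```

The sketch illustrates only that noise removal proceeds over a number of reverse iterations, as recited in claim 5.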
Claims 7, 8, 13, 14, and 25-26
The rejection of apparatus claims 1 and 2 above applies mutatis mutandis to the corresponding limitations of system claims 7 and 8; method claims 13 and 14; and system claims 25 and 26, while noting that the rejection above cites to both device and method disclosures. Note that system claims 7-8 recite one or more processors, while system claims 25-26 recite one or more processors and a memory, which are disclosed by Ho in Section B Experimental Details.
Claims 11, 17, and 29
The rejection of processor/circuit claim 5 above applies mutatis mutandis to the corresponding limitations of system claim 11, method claim 17, and system claim 29, while noting that the rejection above cites to both device and method disclosures.
Claims 4, 10, 16, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Ho and Yoon as applied to claim 1 above, and further in view of Durkan (US 2025/0174000 A1).
Claim 4
Regarding claim 4, Ho teaches wherein the noise is added by the diffusion neural network {see the citations for claim 1 above, including Sections 2 and 3 regarding the forward noising process} but does not expressly disclose adding the noise using a stochastic differential equation (SDE).
Durkan is from the same field of denoising images having perturbations (noise) introduced to the image prior to being received. See abstract, [0006], and the citations below. Durkan also teaches wherein the noise is added by the diffusion neural network using a stochastic differential equation (SDE) {Fig. 1, diffusion neural network 110; fig. 2, step 208; [0006], [0021], [0108]-[0113], teaching a diffusion neural network using an SDE to generate a denoising output}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Ho’s diffusion neural network that is specifically designed for denoising using a probabilistic model such that the diffusion neural network uses a stochastic differential equation (SDE) as taught by Durkan because such an SDE approach also effectively denoises images including perturbations (noise) introduced to the image prior to being received, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
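Purely as an illustrative sketch (not a characterization of Durkan's particular implementation), noise addition via a discretized variance-preserving SDE can be written with an Euler-Maruyama step; the beta schedule below is an assumed, commonly used choice:

```python
import numpy as np

def vp_sde_forward(x0, n_steps, beta_fn, rng=None):
    """Euler-Maruyama discretization of the variance-preserving SDE
    dx = -0.5*beta(t)*x dt + sqrt(beta(t)) dW, a continuous-time
    analogue of the DDPM forward noising process."""
    rng = np.random.default_rng(rng)
    dt = 1.0 / n_steps
    x = x0
    for i in range(n_steps):
        b = beta_fn(i * dt)  # noise scale at normalized time t = i*dt
        x = x - 0.5 * b * x * dt + np.sqrt(b * dt) * rng.standard_normal(x.shape)
    return x

beta_fn = lambda t: 0.1 + (20.0 - 0.1) * t   # assumed linear beta(t) schedule
xT = vp_sde_forward(np.zeros((8, 8)), n_steps=500, beta_fn=beta_fn, rng=0)
```

The sketch only illustrates that an SDE-based formulation performs the same noise-addition function as the discrete forward process, consistent with the rationale for combining Ho with Durkan.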
Claims 10, 16 and 28
The rejection of processor/circuit claim 4 above applies mutatis mutandis to the corresponding limitations of system claim 10, method claim 16 and system claim 28 while noting that the rejection above cites to both device and method disclosures.
Allowable Subject Matter
Claims 6, 12, 18 and 30 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Roy {Roy, Sudipta Singha, et al. "A robust system for noisy image classification combining denoising autoencoder and convolutional neural network." International Journal of Advanced Computer Science and Applications 9.1 (2018): 224-235} discloses a denoising autoencoder neural network that adds and removes noise to improve the accuracy of a downstream CNN classifier. See Fig. 1, copied below, and section II.
[media_image1.png: Fig. 1 of Roy (greyscale)]
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael R Cammarata whose telephone number is (571)272-0113. The examiner can normally be reached M-Th 7am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL ROBERT CAMMARATA/Primary Examiner, Art Unit 2667