DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgement is made of Applicant’s claim that this application is a National Stage Application of International Patent Application No. PCT/GB2022/052081, filed on August 10, 2022, and of Applicant’s claim of priority to and benefit of British Application No. GB2111654.6, filed on August 13, 2021.
Information Disclosure Statement
The information disclosure statements (“IDS”) filed on 02/07/2024 and 03/07/2024 were reviewed and the listed references were noted.
Drawings
The drawings (6 pages) have been considered and placed on record in the file.
Status of Claims
Claims 1-20 are pending.
Claim Objections
Claims 4 and 5 are objected to because of the following informalities: there should be a comma (“,”) before the term “wherein” in the first line of these claims. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 2, 3, 7, and 13 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Consider Claim 2, the claim includes the term “and/or”. This limitation recites: “…wherein the input signal is filtered by being processed by the autoencoder, feature squeezer, U-net and/or super resolution network.” Review of Applicant's specification does not reveal that at the effective filing date of the instant application the inventors were in possession of the above-recited limitations. More specifically, in order for the steps separated by the term "and/or", such as “A and/or B” to be enabled by the specification, the specification must include the following:
i) a section/embodiment that discloses written description with respect to “A”;
ii) a section/embodiment that discloses written description with respect to “B”; and
iii) a section/embodiment that discloses written description with respect to the combination of “A” and “B”, i.e., sections (i) and (ii), together.
Since Applicant’s specification does not include all three (3) sections or embodiments, the specification lacks enablement of the claimed limitation, or the limitation is not supported by the original specification. Accordingly, Claims 2, 3, 7, and 13 are rejected under 35 U.S.C. 112(a) for containing subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
If Applicant believes that its original specification includes the above-described three (3) sections/embodiments for all limitations within the above-listed claims separated by the term “and/or”, Applicant should provide the examiner with paragraph numbers and a detailed explanation as to where these steps have been disclosed in the specification. Otherwise, the term “and/or” in these claims should be replaced by “or”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 4-9, 11, 16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Xiaodong Yu (US 2021/0157911 - IDS) in view of James K. Baker (US 2020/0285939).
Consider Claim 1, Yu discloses “A method for filtering adversarial noise, comprising: receiving an input signal which comprises an unknown level of adversarial noise” (Yu, Fig. 2:304, Denoising Model and Paragraph [0018], wherein it is disclosed that the denoising model 304 may be considered as a prefiltering module to denoise inputs); “filtering the received input signal with a neural network to remove noise from the received input signal, thereby producing a filtered signal” (Yu, Paragraph [0018] discloses “The denoising encoder may reconstruct the original image 102 by 1) first trying to encode the inputs or preserve the information about the original image 102 and then 2) undo the corruption or perturbation added”); “calculating [a confidence value]” (Yu, Paragraph [0018], the calculation of an L1-norm loss function); “and outputting the filtered signal and the [confidence value]” (Yu, Paragraph [0018], wherein it is disclosed: “Therefore, the denoising model may be used to aid in recognizing a corrupt or modified image 106, such that next time the system encounters an image which has been perturbed, it is recognized it. In other words, the denoising model operate as a defense against adversarial attacks by recognizing/learning when an image is corrupt.” The L1 norm, i.e., the loss function, is part of the output of the denoising model, see Fig. 3:314). Although Yu calculates a loss function for its denoising neural network, it does not explicitly disclose calculation of a “confidence value”. However, in an analogous field of endeavor, Baker discloses a confidence estimating machine learning system that calculates a confidence score according to whether the standard output and the auxiliary output are correct; and back propagating, by the confidence-estimating machine learning system implemented by the computer system, a derivative of a loss function to the auxiliary output of the machine learning system (Baker, Paragraph [0742]).
Accordingly, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Yu with the teachings of Baker to calculate a confidence score for its image reconstruction denoising neural network system. One of ordinary skill in the art would be motivated to combine Yu and Baker according to known methods in order to establish a relationship between the loss function and a confidence score in a machine learning system. Therefore, it would have been obvious to combine Yu and Baker to obtain the invention of Claim 1.
Consider Claim 2, the combination of Yu and Baker discloses “The method of claim 1, wherein the neural network comprises one or more of an autoencoder, a feature squeezer, a U-net or a super resolution network, and wherein the input signal is filtered by being processed by the autoencoder, feature squeezer, U-net and/or super resolution network” (Yu, Paragraphs [0025]-[0026], the recitation of autoencoder as the denoising model).
Consider Claim 4, the combination of Yu and Baker discloses “The method of claim 1 wherein filtering the received input signal comprises comparing the received input signal to an expected input signal and removing any parts of the received input signal which do not correspond to the expected input signal, wherein the expected input signal was taught to, or learned by, the neural network based on neural network training data” (Yu, Paragraphs [0022] discloses: “the denoiser 308 generates the denoised image 304 which includes the modified image after having been filtered and where its loss 314 against the original image 318 is considered.” And Paragraph [0023] discloses: “wherein the denoiser attempts to reconstruct the original image 318 by filtering out the noise (or perturbation) introduced in the adversarial attack”).
Consider Claim 5, the combination of Yu and Baker discloses “The method of any preceding claim 1 wherein filtering the received input signal comprises comparing the received input signal to known adversarial noise patterns and removing any parts of the received input signal which correspond to a known adversarial noise pattern, wherein the adversarial noise patterns were taught to, or learned by, the neural network based on neural network training data” (Yu, Paragraph [0023] discloses: “The reconstructed or denoised image 304 is then input into the adversarially trained model which as previously indicated may have been previously trained using other adversarially attacked images. (emphasis added)”, interpreted as the adversarial noise patterns).
Consider Claim 6, the combination of Yu and Baker discloses “The method of any preceding claim 1, wherein the confidence value is indicative of a remaining level of adversarial noise in the filtered signal” (Yu, Paragraph [0027] discloses: “the perturbed image 404 is represented by nodes x.sub.1−x.sub.N and reconstructed image x′.sub.1−x′.sub.N, where the goal of the system is to minimize reconstruction loss such that x.sub.1−x.sub.N and reconstructed image x′.sub.1−x′.sub.N are equivalent. Thus, reconstruction loss is determined and measured against the original image 402” and “loss may be measured against a threshold, and/or iterations may continue until the perturbed image 404 is equivalent to the original image” (emphasis added), which is indicative of a remaining level of adversarial noise in the filtered image).
Consider Claim 7, the combination of Yu and Baker discloses “The method of any preceding claim 1, wherein the confidence value is indicative of how similar the filtered signal is to the input signal, how different the filtered signal is to the input signal, and/or how similar the filtered signal is to the neural network's training data” (Yu, Paragraph [0027] discloses: “Thus, reconstruction loss is determined and measured against the original image 402” and “loss may be measured against a threshold, and/or iterations may continue until the perturbed image 404 is equivalent to the original image” (emphasis added), which is indicative of the similarity of the filtered image to the original input).
Consider Claim 8, the combination of Yu and Baker discloses “The method of any preceding claim 1, wherein the confidence value is indicative of the detection of a pattern of adversarial noise in the input signal which has been previously encountered by the neural network during training” (Yu, Paragraph [0023] discloses: “The reconstructed or denoised image 304 is then input into the adversarially trained model which as previously indicated may have been previously trained using other adversarially attacked images. (emphasis added)”, interpreted as the adversarial noise patterns).
Consider Claim 9, the combination of Yu and Baker discloses “The method of any preceding claim 1, further comprising: comparing the calculated confidence value with a predetermined confidence threshold value; and only outputting the filtered signal and confidence value if the calculated confidence value is greater or equal to the predetermined confidence threshold value” (Yu, Paragraph [0027] discloses: “loss may be measured against a threshold, and/or iterations may continue until the perturbed image 404 is equivalent to the original image” (emphasis added)).
Consider Claim 11, the combination of Yu and Baker discloses “The method of any preceding claim 1, wherein the input signal comprises image data” (Yu, Abstract and Fig. 1B:152).
Consider Claim 16, the combination of Yu and Baker discloses “The method of any preceding claim 1, wherein the neural network is trained with data comprising adversarial noise” (Yu, Paragraph [0023]).
Claim 19 recites a system with elements corresponding to the steps recited in Claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Yu and Baker references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Yu and Baker references discloses a processor and a memory (for example, see Yu, Paragraph [0029]).
Claim 20 recites a non-transitory computer-readable storage medium storing instructions corresponding to the steps recited in Claim 1. Therefore, the recited instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Yu and Baker references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Yu and Baker references discloses a non-transitory medium (for example, see Yu, Paragraph [0029]).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Xiaodong Yu (US 2021/0157911 - IDS) in view of James K. Baker (US 2020/0285939), and in further view of Bhambri et al. (“A Survey of Black-Box Adversarial Attacks on Computer Vision Models” – IDS).
Consider Claim 3, the combination of Yu and Baker does not explicitly disclose “The method of claim 1, wherein the neural network is a probabilistic neural network implemented as an ensemble, and/or as a Bayesian neural network and/or implementing Monte Carlo dropout analysis or latent variable sampling on the filtered signal”. However, in an analogous field of endeavor, Bhambri discloses stochastic activation pruning for guarding pre-trained networks against adversarial attacks (Bhambri, Page 19, SAP). Furthermore, Bhambri discloses that Pixel Deflection samples pixels randomly in an input image and replaces each with another randomly sampled pixel from its square neighborhood; by Pixel Deflection, certain pixels are dropped (Bhambri, Page 21, Pixel Deflection).
Accordingly, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the combination of Yu and Baker with the teachings of Bhambri to introduce a probabilistic neural network as a signal filtering system. One of ordinary skill in the art would be motivated to combine the combination of Yu and Baker with Bhambri in order to preserve enough background pixels while mitigating the impact of adversarial attacks (Bhambri, Page 21, Pixel Deflection). Therefore, it would have been obvious to combine Yu, Baker, and Bhambri to obtain the invention of Claim 3.
Allowable Subject Matter
Claims 10, 12-15, and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: consider Claim 10, none of the cited prior art references, alone or in combination, provides a motivation to teach the ordered combination of the recited limitations of Claim 10. In addition, consider Claim 12, although Chen et al. (“Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attack” – IDS) discloses detection of the adversarial patch and curing the image through deleting the adversarial patches (Chen, Section III), none of the cited prior art references, alone or in combination, provides a motivation to teach the ordered combination of “calculating the confidence value comprises calculating uncertainty values for pixels within the image data” with the limitations of the claims from which Claim 12 depends. Moreover, dependent Claims 13-15, which depend from Claim 12, include the above-referenced allowable subject matter (please note that the rejection of dependent Claim 13 under 35 U.S.C. 112(a) must also be overcome in order for this claim to be allowable). Finally, consider Claims 17 and 18, none of the cited prior art references, alone or in combination, provides a motivation to teach the ordered combination of the recited limitations of Claims 17 and 18.
Conclusion and Contact
The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure: Wei et al. (US 2020/0265273).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Siamak HARANDI whose telephone number is (571)270-1832. The examiner can normally be reached on Monday - Friday 9:30 - 6:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SIAMAK HARANDI/Primary Examiner, Art Unit 2662