DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/23/2025 has been entered. Claims 1-20 are pending in the application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Muehlenstaedt (US 20230410469 A1) in view of Gupta (US 20210241169 A1).
Regarding claim 1, Muehlenstaedt teaches a processor, comprising: one or more circuits to (Fig. 1, Fig. 6, Fig. 7)
use one or more neural networks to predict respective labels for the one or more modified images; and ([0006]: receiving an input image, generating a label prediction corresponding to the input image using a trained neural network, generating a correlation structure based on a comparison of the input image with each of a plurality of reference images. [0052]: if the correlation between the input image and a reference image is determined to be high (i.e., cor(dr,p)→1 and/or cor(dr,p) is greater than a threshold such as greater than about 0.8, greater than about 0.9, greater than about 0.95, or the like), the label prediction corresponding to the reference image (y(r)) will be assigned to the input image.)
generate one or more labels of one or more images based, at least in part, on the respective perturbation amount by which the one or more images were modified and on the predicted respective labels. ([0006]: generating an updated label prediction corresponding to the input image using the label prediction and the correlation structure. [0008]: identifying the label prediction as the updated label prediction in response to determining that there exists a correlation between the input image and each of the plurality of reference images that is less than a threshold. [0011]: Generating the correlation structure can also include identifying a reference image of the plurality of reference images that has a highest correlation with the input image, and identifying a correct label associated with the identified reference image as the updated label prediction. [0051]-[0053].)
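As an illustrative aside, the correlation-gated label correction quoted above from Muehlenstaedt [0052] can be sketched in a few lines of Python. The function name, the feature-vector inputs, and the 0.9 threshold below are assumptions chosen for readability, not limitations drawn from the reference:

    import numpy as np

    def corrected_label(input_feat, ref_feats, ref_labels, nn_label, threshold=0.9):
        # Correlate the input image's feature vector with each reference
        # image's feature vector (cor(dr,p) in Muehlenstaedt's notation).
        cors = [np.corrcoef(input_feat, rf)[0, 1] for rf in ref_feats]
        best = int(np.argmax(cors))
        if cors[best] > threshold:
            # High correlation: the reference label y(r) overrides the
            # neural-network prediction.
            return ref_labels[best]
        # Low correlation with every reference: keep the network's prediction.
        return nn_label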
Muehlenstaedt does not explicitly disclose "modify one or more images by one or more respective perturbation amounts."
However, Gupta teaches modifying one or more images by one or more respective perturbation amounts. ([0020]: Adversarial samples are generated at each training iteration by modifying the clean samples (also referred to herein as ground truth samples) for that iteration with a small targeted perturbation.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the above limitation of Gupta into Muehlenstaedt. One would have been motivated to do so because, as taught by Gupta ([0020]), adversarial samples are generated at each training iteration by modifying the clean samples (also referred to as ground truth samples) for that iteration with a small targeted perturbation; the adversarial samples, rather than the clean images, are then used as the training data, and the network thereby learns how to classify these modified, adversarial samples.
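For illustration, the kind of modification Gupta describes in [0020] can be sketched as follows. Gupta's paragraph does not specify a particular attack, so the FGSM-style sign-of-gradient step, the eps parameter, and the grad_fn helper here are assumptions standing in for "a small targeted perturbation":

    import numpy as np

    def perturb(x_clean, grad_fn, eps=0.01):
        # grad_fn is assumed to return the gradient of the training loss
        # with respect to the input image (a hypothetical helper).
        g = grad_fn(x_clean)
        # Modify the clean sample by a small perturbation amount eps in the
        # direction that increases the loss (FGSM-style stand-in).
        x_adv = x_clean + eps * np.sign(g)
        # Keep pixel values in a valid range.
        return np.clip(x_adv, 0.0, 1.0)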
Regarding claim 2, Muehlenstaedt and Gupta teach the processor of claim 1.
Muehlenstaedt teaches wherein the one or more images were used prior to modification to train the one or more neural networks, and ([0011]: computing a distance between the input image and each of the plurality of reference images by comparing a feature map of the input image and a reference feature map of that reference image. The feature map and the reference feature map may be obtained from a layer of the trained neural network. [0041]. [0002]: An adversarial attack might entail presenting a neural network with inaccurate or misrepresentative data during training, or it may include introducing maliciously designed data to deceive an already trained neural network.)
wherein the one or more images were modified based, at least in part, on one or more adversarial attack techniques. ([0022]: an adversarial attack may include pixels purposely and intentionally perturbed to confuse and deceive a neural network during image classification and object detection.)
Regarding claim 3, Muehlenstaedt and Gupta teach the processor of claim 1.
Muehlenstaedt teaches wherein the one or more circuits are to assign a label to an image of the one or more modified images from a corresponding image in a training dataset based, at least in part, on the perturbation amount by which the image was modified being below a threshold amount. ([0008]: identifying the label prediction as the updated label prediction in response to determining that there exists a correlation between the input image and each of the plurality of reference images that is less than a threshold. [0052].)
Regarding claim 4, Muehlenstaedt and Gupta teach the processor of claim 1.
Muehlenstaedt teaches wherein the one or more circuits are to assign a label to an image of the one or more modified images, the label indicative of the perturbation amount by which the image was modified being above a threshold amount. ([0052]: if the correlation between the input image and a reference image is determined to be high (i.e., cor(dr,p)→1 and/or cor(dr,p) is greater than a threshold such as greater than about 0.8, greater than about 0.9, greater than about 0.95, or the like), the label prediction corresponding to the reference image (y(r)) will be assigned to the input image.)
Regarding claim 5, Muehlenstaedt and Gupta teach the processor of claim 1.
Muehlenstaedt teaches wherein the one or more circuits are to obtain the perturbation amount to apply to the one or more images in one or more adversarial attacks. ([0022]: an adversarial attack may include pixels purposely and intentionally perturbed to confuse and deceive a neural network during image classification and object detection. [0008] and [0052]: identifying the label prediction as the updated label prediction in response to determining that the correlation between the input image and each of the plurality of reference images is less than or greater than a threshold.)
Regarding claim 6, Muehlenstaedt and Gupta teach the processor of claim 1.
Muehlenstaedt teaches wherein the one or more images were used prior to modification to train one or more neural networks, and ([0002]. [0011]. [0040]-[0041].)
wherein the one or more circuits are to: determine which of the one or more modified images causes the one or more neural networks to produce one or more incorrect outputs and ([0002]. [0022]: an adversarial attack may include pixels purposely and intentionally perturbed to confuse and deceive a neural network during image classification and object detection, where such pixels are not easily recognizable by a human user. For example, consider a neural network trained to classify road signs. If the network is presented with a new, not previously seen road sign, then the neural network will likely make a confident and probably correct classification. However, if the neural network is presented with an image outside the distribution of images used for training, e.g., an image of a cat, then a conventional neural network is prone to still confidently predict a road sign for the cat image. In another example, a human carrying an object that fits a different object class (such as a bicycle) could be incorrectly classified by an image classification neural network.)
generate the one or more labels for the determined one or more modified images that caused the one or more neural networks to produce the one or more incorrect outputs. ([0052]: if the correlation between the input image and a reference image is determined to be high (i.e., cor(dr,p)→1 and/or cor(dr,p) is greater than a threshold such as greater than about 0.8, greater than about 0.9, greater than about 0.95, or the like), the label prediction corresponding to the reference image (y(r)) will be assigned to the input image, irrespective of the label predicted by the neural network (nnβ(xp)). However, if the correlation between the input image and a reference image is determined to be low (i.e., cor(dr,p)→0 and/or cor(dr,p) is less than a threshold such as less than about 0.1, less than about 0.2, less than about 0.05, or the like), the label predicted by the neural network (nnβ(xp)) will be used as the image label prediction. In other words, if an input image is not identical but very similar to a reference image, it will lead to a small distance between the input image and the reference image creating a high correlation, outweighing the neural network prediction. However, if an input image is not similar to any reference image, the reference dataset will not have any effect on the neural network label prediction.)
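As with claim 1, this mapping can be illustrated with a short sketch that reuses the corrected_label function above. The predict and featurize helpers are hypothetical stand-ins for the trained network and its feature-map extraction, which the reference does not name:

    def label_fooled_images(predict, featurize, modified, true_labels,
                            ref_feats, ref_labels):
        relabeled = {}
        for i, (img, y) in enumerate(zip(modified, true_labels)):
            y_hat = predict(img)
            if y_hat != y:
                # This modified image caused an incorrect output; generate a
                # label for it via the correlation structure.
                relabeled[i] = corrected_label(
                    featurize(img), ref_feats, ref_labels, y_hat)
        return relabeled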
Regarding claim 7, Muehlenstaedt and Gupta teach the processor of claim 6.
Muehlenstaedt teaches wherein the one or more circuits are to fine-tune the one or more neural networks using the one or more modified images and the one or more labels. ([0023]: retraining the neural network to recognize adversarial attacks and/or using additional training data including edge cases can help address the above issues to some extent. [0024]: utilize a combination of a neural network with a correlation structure (e.g., a correlation structure as used in a Gaussian process) corresponding to a reference dataset that enables significantly improved prediction accuracy during, for example, image classification (as discussed below). In case of image classification, the reference dataset can include images known to be associated with adversarial attacks or edge cases where the reference dataset is not used for training of the neural network but for correction of neural network predictions during inference.)
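Illustratively, the retraining described in [0023] can be sketched as a standard fine-tuning loop. The PyTorch API, the optimizer choice, and the hyperparameters below are assumptions, since Muehlenstaedt does not prescribe a particular training procedure:

    import torch

    def fine_tune(model, images, labels, epochs=3, lr=1e-4):
        # Retrain on the modified (adversarial) images paired with the
        # generated labels, rather than on the original clean data.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for x, y in zip(images, labels):
                opt.zero_grad()
                loss = loss_fn(model(x.unsqueeze(0)), torch.tensor([y]))
                loss.backward()
                opt.step()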
Same rationales apply to claim 8 (system) and claim 15 (method) because they are substantially similar to claim 1 (processor).
Same rationales apply to claim 9 (system) and claim 16 (method) because they are substantially similar to claim 2 (processor).
Same rationales apply to claim 10 (system) and claim 17 (method) because they are substantially similar to claim 3 (processor).
Same rationales apply to claim 11 (system) and claim 18 (method) because they are substantially similar to claim 4 (processor).
Same rationales apply to claim 12 (system) and claim 19 (method) because they are substantially similar to claim 5 (processor).
Same rationales apply to claim 13 (system) and claim 20 (method) because they are substantially similar to claim 6 (processor).
Same rationales apply to claim 14 (system) because it is substantially similar to claim 7 (processor).
Response to Arguments
Applicant’s arguments, see pages 6-7, filed 12/23/2025, with respect to the rejection(s) of claims 1-20 under 35 U.S.C. § 102(a)(2) have been fully considered but are moot in view of new ground(s) of rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZI YE, whose telephone number is (571) 270-1039. The examiner can normally be reached Monday through Friday, 8:00 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emmanuel Moise, can be reached at 571-272-3865. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZI YE/Primary Examiner, Art Unit 2455