DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 11/17/2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
The listing of references in the specification is not a proper information disclosure statement. 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Therefore, unless the references have been cited by the examiner on form PTO-892, they have not been considered. See pages 31-35 of Applicant’s originally filed specification titled “REFERENCES”, where 39 non-patent literature documents have been cited, but not filed in a proper information disclosure statement.
Requirements for Information under 37 CFR § 1.105
Applicant and the assignee of this application are required under 37 CFR 1.105 to provide the following information that the examiner has determined is reasonably necessary to the examination and treatment of matters in this pending application. The duty of candor and good faith under 37 CFR 1.56 requires that the applicant reply to a requirement under 37 CFR 1.105 with information reasonably and readily available.
Specific information that is reasonably required includes the information regarding the contents of the references shown on pages 31-35 of Applicant’s originally filed specification titled “REFERENCES”.
This information is properly necessary to compel disclosure of information that the examiner deems pertinent to the patentability of Applicant’s claimed invention.
The above information as pertaining to the following is requested:
(i) Commercial databases: The existence of any particularly relevant commercial database known to any of the inventors that could be searched for a particular aspect of the invention, regarding the existing adversarial attack models using data perturbation and black-box attacks.
(ii) Search: Whether a search of the prior art was made, and if so, what was searched with regard to existing black-box adversarial attack models.
(iii) Information used to draft application: A copy of any non-patent literature, published application, or patent (U.S. or foreign) that was used to draft the application, relating to existing black-box adversarial attack models.
(iv) Information used in invention process: A copy of any non-patent literature, published application, or patent (U.S. or foreign) that was used in the invention process, such as by designing around or providing a solution to accomplish an invention result, such as designing a black-box adversarial attack model, combining perturbations to accomplish the invention’s desired results, or performing statistical analysis comparing the invention’s results with existing black-box adversarial attack models.
(v) Improvements: Where the claimed invention is an improvement, identification of what is being improved, in relation to existing black-box adversarial attack models that achieve desired results by perturbing input data.
(vi) Technical information known to applicant: Technical information known to applicant concerning the related existing black-box adversarial attack models, the disclosure, the claimed subject matter, other factual information pertinent to patentability, or concerning the accuracy of the examiner’s stated interpretation of such items.
Where the applicant does not have or cannot readily obtain an item of required information, a statement that the item is unknown or cannot be readily obtained may be accepted as a complete reply to the requirement for that item.
This requirement is an attachment of the enclosed Office action. A complete reply to the enclosed Office action must include a complete reply to this requirement. The time period for reply to this requirement coincides with the time period for reply to the enclosed Office action.
Specification
The disclosure is objected to because of the following informalities:
Page 8 line 26 and page 31 line 13 contain a typo: “liar data” should read “lidar data”.
Appropriate correction is required.
Claim Objections
Claims 13 and 21 are objected to because of the following informalities:
Claim 13 contains a typo: “the method of any of claims 10,” should read “the method of claim 10,”.
Claim 21 contains a typo: “liar data” should read “lidar data”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 5 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding Claim 5:
Claim 5 recites “wherein input samples are chosen that are similar to training inputs used to train the perception component under attack”. There is no support in the disclosure regarding how the inventor intended to perform this claimed function. The algorithm or steps/procedures for this claimed function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient) so that one of ordinary skill in the art would recognize that the applicant had possession of the claimed invention. There is no description in the originally filed disclosure regarding how the inventor intended to choose input samples that “are similar to training inputs used to train the perception component under attack” other than restating the claim language and a brief mention on originally filed page 3: “Synthetic inputs (that are statistically similar to real inputs) could also be used. Such input may be generated using sensor modelling techniques and the like”. In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See MPEP §§ 2163.02 and 2181, subsection IV.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 4, and 8-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The terms “promising attack direction” in claim 2 and “promising attack condition” in claims 10, 12-13, and 15 are relative terms which render the claims indefinite. The terms “promising attack direction” and “promising attack condition” are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The originally filed disclosure does not define the terms or provide a standard for ascertaining the requisite degree, as the term “promising” is not a term of art; while pages 2-3 give a vague implication of what the term could mean and page 7 gives some examples of what the promising attack direction could/may be, these do not make the relative term “promising” within claims 2 and 10 definite.
This is in contrast to claim 4, where the “promising attack direction” is defined by the claim and is therefore definite.
Claim 8 recites the limitation “the successful attack condition” in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation “the successful attack condition” in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 10 recites the limitation “the statistical analysis” in line 2 and “the successful attack condition” in line 4. There is insufficient antecedent basis for these limitations in the claim.
Respective dependent claims fall together accordingly.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-2, 5-6, 8-9, and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang, ZM., Gu, MT. & Hou, JH. Sample Based Fast Adversarial Attack Method. Neural Process Lett 50, 2731-2744 (2019) hereinafter Wang.
Regarding Claim 1:
Wang discloses a computer-implemented method of generating black-box adversarial inputs to a perception component (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), the method comprising: computing an adversarial input by applying a perturbation to an original input, the adversarial input satisfying an attack objective when inputted to the perception component (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t” misclassification is an attack objective); wherein the perturbation is determined by selectively combining component perturbations selected from a predetermined set of component perturbations (Wang Section 3 “The inherent reason that one adversarial sample can fool a DNN is that it’s located the classification border in some feature space. However, the boundary of a DNN is built from its training samples. Then, can we generate adversarial samples just based existing data samples? From the results of our experiment, the answer is yes. To find the essential difference between different classes, we perform principal component analysis on training data, and express the difference between classes with PCA coefficients”; Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t. In PCA space we know the main difference between class a and class t is the difference vector yt − ya. So the fastest way to make the attacked network give answer t is to drive x to class t alone direction from ya to yt” the algorithm determines the set of component perturbations and combining/manipulating the vectors); and wherein said inputs correspond to respective points in an input vector space, and the component perturbations encode principal attack directions in the input vector space for satisfying said attack objective (Wang Section 3 and 3.2), the principal attack directions having been determined by analysing: (i) a set of sample attack directions, or (ii) a set of input samples (Wang Section 3 and 3.2).
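For clarity of the record, the PCA difference-vector mechanism relied upon from Wang Sections 3 and 3.2 may be illustrated with the following minimal sketch. The synthetic data, number of retained components, and step size are hypothetical stand-ins and this is not Wang's published code; the sketch only shows how principal directions computed from input samples can be combined into a perturbation that drives a sample toward a target class.

```python
# Illustrative sketch (not Wang's code) of a PCA difference-vector attack:
# project the samples with PCA, take the difference between the mean PCA
# coefficients of the source class a and the target class t, and push the
# sample along that direction in input space.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for training data: two classes, 50 samples each, 20 features.
X_a = rng.normal(loc=0.0, scale=1.0, size=(50, 20))
X_t = rng.normal(loc=3.0, scale=1.0, size=(50, 20))
X = np.vstack([X_a, X_t])

# PCA via SVD of the mean-centred data (cf. Wang Section 3.1, "PCA Model Building").
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 5                              # number of principal components retained (assumed)
P = Vt[:k]                         # principal directions, one per row

# Mean PCA coefficients of each class (y_a and y_t in Wang Section 3.2).
y_a = ((X_a - mu) @ P.T).mean(axis=0)
y_t = ((X_t - mu) @ P.T).mean(axis=0)
diff = y_t - y_a                   # difference vector y_t - y_a in PCA space

# Drive a sample x of class a toward class t along the difference direction.
x = X_a[0]
step = 0.1                         # perturbation magnitude (hypothetical)
x_adv = x + step * (diff @ P)      # map the PCA-space direction back to input space
print("perturbation norm:", np.linalg.norm(x_adv - x))
```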
Regarding Claim 2:
Wang further discloses the method of claim 1 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), wherein at least one of the sample attack directions is a direction along which a successful attack has been observed, or a promising attack direction (Wang at abstract “First, we find the key difference between different classes based on principle component analysis and calculate the difference vector. During attacking, we just drive a sample to the target class (for target adversarial) or the nearest other class (for misclassification adversarial).”; 3.3 and Fig 1 move towards the target class).
Regarding Claim 5:
Wang further discloses the method of claim 1 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), wherein the principal attack directions are chosen by analysing the set of input samples (Wang Section 3 “To find the essential difference between different classes, we perform principal component analysis on training data, and express the difference between classes with PCA coefficients.”), wherein input samples are chosen that are similar to training inputs used to train the perception component under attack (Wang Figs. 3-7 original samples are close to the misclassification adversarial examples).
Regarding Claim 6:
Wang further discloses the method of claim 1 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), wherein said analysis is a singular value decomposition of (i) or (ii) or other decomposition of (i) or (ii) (Wang Section 3 SVD is simply a PCA of the known inputs; 3.1 PCA Model Building and 3.2).
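The relationship between singular value decomposition and principal component analysis relied upon in this mapping can be checked numerically: the right singular vectors of the mean-centred sample matrix coincide, up to sign, with the principal directions obtained from the sample covariance. A minimal sketch on synthetic data (an illustrative assumption, not a reproduction of Wang's analysis):

```python
# Minimal numerical check that PCA principal directions equal the right
# singular vectors of the SVD of the mean-centred sample matrix (up to sign).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))          # synthetic input samples
Xc = X - X.mean(axis=0)

# PCA via eigendecomposition of the sample covariance matrix.
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
pca_dirs = eigvecs[:, ::-1]            # reorder to decreasing eigenvalue

# The same directions via SVD of the centred data.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
svd_dirs = Vt.T                        # already ordered by decreasing singular value

print("directions agree (up to sign):",
      np.allclose(np.abs(pca_dirs), np.abs(svd_dirs), atol=1e-8))
```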
Regarding Claim 8:
Wang further discloses the method of claim 1 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), comprising: selectively modifying the perturbation, by selectively adding or subtracting component perturbations, until the successful attack condition is satisfied (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t.”).
Regarding Claim 9:
Wang further discloses the method of claim 6 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), comprising: selectively modifying the perturbation, by selectively adding or subtracting component perturbations, until the successful attack condition is satisfied (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t.”) and wherein the component perturbations are principal direction vectors of a principal direction matrix computed via the singular value decomposition (Wang Section 3 SVD is simply a PCA of the known inputs; 3.1 PCA Model Building and 3.2).
Regarding Claim 21:
Wang further discloses the method of claim 1 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), wherein each input comprises image data, liar data or radar data (Wang Fig. 4-7).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Zhang et al. (US Publication No. US 2021/0012188 A1) hereinafter Zhang.
Regarding Claim 4:
Wang discloses the method of claim 2 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”).
Wang does not explicitly disclose wherein the promising attack direction is determined as a gradient of an attack loss function on the perception component or a proxy perception component.
Zhang teaches wherein the promising attack direction is determined as a gradient of an attack loss function on the perception component or a proxy perception component (Zhang Fig. 2 obtain a loss function based on the determined label probabilities and ground-truth labels corresponding to the clean batch for training the deep neural network; [0065] gradients of the perturbed image is calculated).
It would have been obvious to one having ordinary skill in the art before the time the invention was effectively filed to combine the method of generating black-box adversarial inputs disclosed by Wang with the attack loss function taught by Zhang. The motivation for this combination would be to improve the efficacy of and to update the adversarial inputs by providing a measure of their effectiveness.
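The use of a loss gradient as a promising attack direction, as taught by Zhang, may be illustrated with a minimal sketch. The logistic-regression proxy, its parameters, and the step size below are hypothetical stand-ins for a proxy perception component, not Zhang's implementation; the sketch only shows that the gradient of an attack loss with respect to the input defines a direction along which the loss increases.

```python
# Hedged sketch: the gradient of a cross-entropy attack loss on a hypothetical
# proxy model, taken with respect to the input, gives a promising attack direction.
import numpy as np

rng = np.random.default_rng(4)
w, b = rng.normal(size=10), 0.0            # proxy model parameters (stand-ins)

def attack_loss_gradient(x, y_true):
    """Input gradient of -[y*log(p) + (1-y)*log(1-p)] for a logistic proxy."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # proxy's predicted probability of class 1
    return (p - y_true) * w                 # dL/dx = (p - y) * w

x = rng.normal(size=10)
direction = attack_loss_gradient(x, y_true=1)          # promising attack direction
x_adv = x + 0.1 * direction / (np.linalg.norm(direction) + 1e-12)
print("loss gradient norm:", np.linalg.norm(direction))
```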
Claim(s) 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Jetley, S., Lord, N., & Torr, P. (2018). “With friends like these, who needs adversaries?”. Advances in neural information processing systems, 31 hereinafter Jetley.
Regarding Claim 10:
Wang discloses the method of claim 6 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), … and the perturbation is selectively modified by applying the following steps, starting with the first component perturbation, until the successful attack condition is satisfied (Wang Section 3 “The inherent reason that one adversarial sample can fool a DNN is that it’s located the classification border in some feature space. However, the boundary of a DNN is built from its training samples. Then, can we generate adversarial samples just based existing data samples? From the results of our experiment, the answer is yes. To find the essential difference between different classes, we perform principal component analysis on training data, and express the difference between classes with PCA coefficients”; Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t. In PCA space we know the main difference between class a and class t is the difference vector yt − ya. So the fastest way to make the attacked network give answer t is to drive x to class t alone direction from ya to yt” the algorithm determines the set of component perturbations and combining/manipulating the vectors): determining at least one modified perturbation, by adding or subtracting the component perturbation to/from the perturbation (Wang Section 3 and 3.2), determining whether the modified perturbation satisfies a promising attack condition (Wang Section 3 and 3.2 “given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t”), and if so, repeating the steps for the next principal direction with the modified perturbation, and if not, discarding the modified perturbation, and repeating the steps for the next principal direction without modifying the perturbation (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t. In PCA space we know the main difference between class a and class t is the difference vector yt − ya. So the fastest way to make the attacked network give answer t is to drive x to class t alone direction from ya to yt” the algorithm determines the set of component perturbations and combining/manipulating the vectors).
Wang does not explicitly disclose wherein the component perturbations are assigned an order in performing the statistical analysis.
Jetley teaches wherein the component perturbations are assigned an order in performing the statistical analysis (Jetley Figure 4 and Section 4.3 “Classification accuracies on image sets projected onto subspaces of the spans of their corresponding DeepFool perturbations. For each net-dataset pair, DeepFool perturbations are computed over the image set and assembled into a matrix that is decomposed into its SVD. The singular vectors are ordered as per their singular values: Shi represents the high-to-low ordering, Slo the low-to-high, and d the number of vectors retained.”).
It would have been obvious to one having ordinary skill in the art before the time the invention was effectively filed to combine the method of generating black-box adversarial inputs disclosed by Wang with the ordering of the perturbations taught by Jetley. The motivation for this combination would be to improve the analysis of the perturbations as ordering the perturbations improves the clarity gleaned by the said analysis as seen by Jetley (Jetley Section 4.3 “Here, we implement this using a collection of DeepFool perturbations to provide the required gradient information, and repeat the analysis of Sec. 4.2, using singular values to order the vectors. The results, in Fig. 4, neatly replicate the previously seen classification accuracy trends for high-to-low and low-to-high curvature traversal of image-space directions. Henceforth, we use these directions directly, simplifying analysis and allowing us to analyse ImageNet networks”).
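The ordered, greedy perturbation-building procedure addressed in claims 10-13, as mapped onto the combination of Wang and Jetley above, may be summarised with the following sketch. The confidence oracle, success threshold, and toy data are hypothetical placeholders; the sketch only illustrates trying each component perturbation, in the assigned order, by addition and then subtraction, keeping a modification when a promising attack condition (here, reduced true-class confidence) is met, and stopping once a successful attack condition is met.

```python
# Hedged sketch of an ordered, greedy combination of component perturbations.
# The component perturbations are assumed to be ordered by decreasing singular
# value (cf. Jetley Section 4.3); the confidence oracle is a placeholder for a
# black-box query of the attacked perception component.
import numpy as np

def build_perturbation(x, components, confidence, success_threshold=0.5):
    """Greedily combine ordered component perturbations until the attack succeeds."""
    delta = np.zeros_like(x)
    best = confidence(x)                       # true-class confidence on the original input
    for v in components:                       # iterate in the assigned order
        for candidate in (delta + v, delta - v):   # try adding, then subtracting
            score = confidence(x + candidate)
            if score < best:                   # "promising attack condition"
                best, delta = score, candidate
                break                          # keep it, move to the next direction
        # otherwise the modification is discarded and delta is left unchanged
        if best < success_threshold:           # "successful attack condition"
            break
    return delta

# Toy usage with a hypothetical confidence oracle (placeholder only).
rng = np.random.default_rng(2)
components = rng.normal(size=(6, 10)) * 0.1
oracle = lambda z: 1.0 / (1.0 + np.linalg.norm(z))
delta = build_perturbation(np.zeros(10), components, oracle)
print("final perturbation norm:", np.linalg.norm(delta))
```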
Regarding Claim 11:
The combination of Wang and Jetley further teaches the method of claim 10 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), wherein said analysis is a singular value decomposition of (i) or (ii), the method comprising (Wang Section 3 SVD is simply a PCA of the known inputs; 3.1 PCA Model Building and 3.2): selectively modifying the perturbation, by selectively adding or subtracting component perturbations, until the successful attack condition is satisfied (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t.”); wherein the component perturbations are principal direction vectors of a principal direction matrix computed via the singular value decomposition (Wang Section 3 SVD is simply a PCA of the known inputs; 3.1 PCA Model Building and 3.2); and wherein the principal direction vectors are ordered by decreasing singular value (Jetley Figure 4 and Section 4.3 “The singular vectors are ordered as per their singular values: Shi represents the high-to-low ordering, Slo the low-to-high, and d the number of vectors retained.”).
Regarding Claim 12:
The combination of Wang and Jetley further teaches the method of claim 10 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), wherein the perturbation is selectively modified by applying the following steps: determining a first modified perturbation, by doing one of: adding the component perturbation to the perturbation, and subtracting the component perturbation from the perturbation (Wang Section 3 “The inherent reason that one adversarial sample can fool a DNN is that it’s located the classification border in some feature space. However, the boundary of a DNN is built from its training samples. Then, can we generate adversarial samples just based existing data samples? From the results of our experiment, the answer is yes. To find the essential difference between different classes, we perform principal component analysis on training data, and express the difference between classes with PCA coefficients”; Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t. In PCA space we know the main difference between class a and class t is the difference vector yt − ya. So the fastest way to make the attacked network give answer t is to drive x to class t alone direction from ya to yt” the algorithm determines the set of component perturbations and combining/manipulating the vectors), determining whether the first modified perturbation satisfies a promising attack condition, and if so repeating the steps for the next principal direction with the first modified perturbation (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t.”), if not, discarding the first modified perturbation, and determining a second modified perturbation, by doing the other one of: adding the component perturbation to the perturbation, and subtracting the component perturbation from the perturbation (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t.”), determining whether the second modified perturbation satisfies the promising attack condition, and if so repeating the steps for the next principal direction with the second modified perturbation (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t.”), if not, discarding the second modified perturbation, and repeating the steps for the next principal direction without modifying the perturbation (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t.”).
Regarding Claim 13:
The combination of Wang and Jetley further teaches the method of any of claims 10 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), wherein the following operations are performed to determine whether a modified perturbation satisfies the promising attack condition: applying the modified perturbation to the original input vector, to compute a perturbed input vector (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t.”), providing the perturbed input vector to the perception component, to obtain at least one numerical output value (Wang Fig. 2, Fig 4, Fig. 6), in the case of the first component perturbation, comparing the numerical output value with a corresponding numerical output value obtained for the original input, in order to determine whether the promising attack condition is satisfied (Wang Fig. 2 numerical outputs compared; Section 4.1; Fig. 4 and 6 numerical outputs compared), in the case of each subsequent component perturbation, comparing the numerical output value with the numerical output value obtained for the previous component perturbation, in order to determine whether the promising attack condition is satisfied (Wang Fig. 2 numerical output compared; Section 4.1; Fig. 4 and 6 numerical outputs compared).
Claim(s) 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Wang and Jetley as applied to claims 10-13 above, and further in view of Zhang.
Regarding Claim 14:
The combination of Wang and Jetley teaches the method of claim 13 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”).
The combination of Wang and Jetley does not explicitly teach wherein the comparison is performed to determine whether an attack loss has been increased or decreased relative to the original inputs, where the attack loss encodes the attack objective.
Zhang teaches wherein the comparison is performed to determine whether an attack loss has been increased or decreased relative to the original inputs, where the attack loss encodes the attack objective (Zhang Fig. 2 obtain a loss function based on the determined label probabilities and ground-truth labels corresponding to the clean batch for training the deep neural network; [0030] goal of minimizing the loss function and [0065] gradients of the perturbed image is calculated).
It would have been obvious to one having ordinary skill in the art before the time the invention was effectively filed to further combine the attack loss determination taught by Zhang with the combination of Wang and Jetley. The motivation for this combination would be to determine, against the ground truth of the model, whether or not the attack was actually successful (Zhang [0038] “In the meantime, many efforts have been devoted to defending against adversarial examples. Recently, some showed that many existing defense methods suffer from a false sense of robustness against adversarial attacks due to gradient masking, and adversarial training is one of the effective defense methods against adversarial attacks”).
Regarding Claim 15:
The combination of Wang, Jetley, and Zhang further teaches the method of claim 14 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), wherein each numerical output value is a confidence score, and the promising attack condition is satisfied if the numerical output value indicates decreased confidence of the perception component (Wang Section 4.1 “To test the performance of proposed sample based adversarial attack algorithm, we compared 4 different attack algorithms with 3 evaluation criterions: (1) the mean absolute difference (MAD) or L1-norm distance between original samples and adversarial sample; (2) the overall fool ratio (FR) or attack success ratio, which the ratio of the number of successful attacks to the number of targets attacked; (3) speed, the average process time to generate an adversarial sample, including all failed tries.”; Tables 1-5 fool ratio measured), wherein the confidence score pertains to: a known class to which the original input belongs, and the promising attack condition is satisfied if the numerical output value indicates decreased classification confidence with respect to the known class (Wang Section 4.1; Tables 1-5 fool ratio measured), or a specific class other than a known class to which the original input belongs, and the promising attack condition is satisfied if the numerical output value indicates increased classification confidence with respect to the specific other class (Wang Section 4.1; Tables 1-5 fool ratio measured).
Claim(s) 19, 20, 23, 25, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Liu et al. (US Publication No. US 2020/0285952 A1) hereinafter Liu.
Regarding Claim 19:
Wang discloses the method of claim 1 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”).
Wang does not disclose comprising: based on the adversarial input, identifying and mitigating a problem with the perception component.
Liu teaches comprising: based on the adversarial input, identifying and mitigating a problem with the perception component (Liu Fig. 10 hardening the computer model against adversarial attacks; [0088] the original model is hardened by providing the adversarial attack perturbations in conjunction with the true label for the images).
It would have been obvious to one having ordinary skill in the art before the time the invention was effectively filed to combine the method of generating black-box adversarial inputs disclosed by Wang with the problem identification and mitigation taught by Liu. The motivation for this combination would be to improve security by ensuring that a model placed in important software/hardware is not compromised by adversarial inputs as in Liu [0088-0089] “The hardened computing model 760 may be installed in the computing system that executes the hardened computing model 760 to perform classification operations either alone or in combination with other operations that are performed by the computing system. For example, the computing system may implement a cognitive computing mechanism that utilizes the classification operations of the hardened computing model 760 as one component to the overall cognitive operations performed by the cognitive computing mechanism, e.g., patient treatment recommendation, patient medical image analysis, vehicle navigation and/or obstacle avoidance or other vehicle safety system (e.g., automatic braking, automatic steering, warning notification output via a dashboard or audible warning mechanism, etc.), or the like”.
Regarding Claim 20:
The combination of Wang and Liu further teaches the method of claim 1 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), comprising modifying the perception component so that the adversarial input no longer satisfies the attack objective (Liu Fig. 10 hardening the computer model against adversarial attacks; [0088] the original model is hardened by providing the adversarial attack perturbations in conjunction with the true label for the images).
Regarding Claim 23:
The combination of Wang and Liu further teaches the method of claim 20 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), comprising incorporating the modified perception component in a robotic system (Liu [0088] “For example, the computing system may implement a cognitive computing mechanism that utilizes the classification operations of the hardened computing model 760 as one component to the overall cognitive operations performed by the cognitive computing mechanism, e.g., patient treatment recommendation, patient medical image analysis, vehicle navigation and/or obstacle avoidance or other vehicle safety system (e.g., automatic braking, automatic steering, warning notification output via a dashboard or audible warning mechanism, etc.), or the like.”).
Regarding Claims 25 and 27:
Claim 25. Wang discloses generate black-box adversarial inputs to a perception component (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”), by: computing an adversarial input by applying a perturbation to an original input, the adversarial input satisfying an attack objective when inputted to the perception component (Wang Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t” misclassification is an attack objective); determining the perturbation by selectively combining component perturbations selected from a predetermined set of component perturbations (Wang Section 3 “The inherent reason that one adversarial sample can fool a DNN is that it’s located the classification border in some feature space. However, the boundary of a DNN is built from its training samples. Then, can we generate adversarial samples just based existing data samples? From the results of our experiment, the answer is yes. To find the essential difference between different classes, we perform principal component analysis on training data, and express the difference between classes with PCA coefficients”; Section 3.2 “Target adversarial attack means we intentionally make a network give a specific wrong answer for a given sample. For example, given a sample x with label a, we slightly modify it to x’ which the attacked network will recognized as t. In PCA space we know the main difference between class a and class t is the difference vector yt − ya. So the fastest way to make the attacked network give answer t is to drive x to class t alone direction from ya to yt” the algorithm determines the set of component perturbations and combining/manipulating the vectors), wherein said inputs correspond to respective points in an input vector space, and the component perturbations encode principal attack directions in the input vector space for satisfying said attack objective (Wang Section 3 and 3.2); and determining the principal attack directions by analysing: (i) a set of sample attack directions, or (ii) a set of input samples (Wang Section 3 and 3.2).
Wang does not explicitly disclose A computer system comprising at least one memory configured to store computer-readable instructions; at least one hardware processor coupled to the at least one memory and configured to execute the computer-readable instructions, which upon execution cause the at least one hardware processor to.
Liu teaches A computer system comprising at least one memory configured to store computer-readable instructions; at least one hardware processor coupled to the at least one memory and configured to execute the computer-readable instructions, which upon execution cause the at least one hardware processor to (Liu [0072-0074]; [0077]).
It would have been obvious to one having ordinary skill in the art before the time the invention was effectively filed to implement the method of generating black-box adversarial inputs disclosed by Wang within a computer system as taught by Liu. The motivation for this combination would be to apply the techniques of generating adversarial inputs explicitly disclosed by Wang within a computer system that is taught by Liu (Liu [0072-0074]; [0077]).
Claim 27 recites substantially the same content and is therefore rejected under the same rationale. Liu further teaches A non-transitory medium embodying computer-readable instructions configured, when executed on one or more hardware processors (Liu [0072-0074]; [0077]).
Claim(s) 24 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of N. Narodytska and S. Kasiviswanathan, "Simple Black-Box Adversarial Attacks on Deep Neural Networks," 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 2017, pp. 1310-1318 hereinafter Narodytska.
Regarding Claim 24:
Wang discloses the method of claim 1 (Wang Section 3 Sample Based Adversarial Attack “our method is black-box”, Section 3 a DNN is fooled, 3.2 “we intentionally make a network give a specific wrong answer for a given sample”).
Wang does not explicitly disclose wherein the statistical analysis assigns a relative order to the component perturbations, reflecting relative dominance of the principal attack directions, and the component perturbations are selectively combined in the assigned relative order.
Narodytska teaches wherein the statistical analysis assigns a relative order to the component perturbations, reflecting relative dominance of the principal attack directions, and the component perturbations are selectively combined in the assigned relative order (Narodytska 1314-1315 and Algorithm 2 LOCSEARCHADV (NN) sorts images in decreasing order of score, where the score indicates the effectiveness of the perturbation “Pixels whose perturbation lead to a larger decrease of f are more likely useful in constructing an adversarial candidate. From sorted(I), it records a set of pixel locations (P∗X, P∗Y )i based on the first t elements of sorted(I), where the parameter t regulates the number of pixels perturbed in each round”; “the algorithm takes an image as input, and in each round, finds some pixel locations to perturb using the above defined objective function and then applies the above defined transformation function to these selected pixels to construct a new (perturbed) image. It terminates if it succeeds to push the true label below the kth place in the confidence score vector at any round”).
It would have been obvious to one having ordinary skill in the art before the time the invention was effectively filed to combine the method of generating black-box adversarial inputs disclosed by Wang with the statistical analysis taught by Narodytska. The motivation for this combination would be to improve the performance of the perturbations evidenced by Narodytska (Narodytska 1315 “The perturbation parameter p was adaptively adjusted during the search. Though not critical, doing so helps in faster determination of the most helpful pixels in generating the adversarial image” and 1316 “Another advantage with our approach is that it modifies a very tiny fraction of pixels as compared to all the pixels perturbed by FGSM, and also in many cases with far less average perturbation. Putting these points together demonstrates that Algorithm LOCSEARCHADV is successful in generating more adversarial images than FGSM, while modifying far fewer pixels and adding less noise per image. On the other side, FGSM takes lesser time in the generation process and generally seems to produce higher confidence scores for the adversarial (misclassified) images”).
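The score-ordered local search taught by Narodytska (Algorithm LOCSEARCHADV) may be summarised with the following sketch. The probability oracle, pixel-sampling scheme, and parameter values are hypothetical placeholders rather than Narodytska's released implementation; the sketch only illustrates scoring candidate pixel perturbations, sorting them in decreasing order of effectiveness, perturbing the top-t locations each round, and terminating once the true label is pushed below the k-th place.

```python
# Hedged sketch of a greedy, score-ordered local-search attack in the spirit of
# Narodytska's LOCSEARCHADV: rank candidate pixels by how much perturbing them
# lowers the true-class probability, apply the top-t each round, and stop when
# the true label falls out of the top-k predictions.
import numpy as np

rng = np.random.default_rng(3)

def local_search_attack(img, true_label, prob, t=5, p=0.3, rounds=10, k=1, candidates=50):
    """img: HxW array in [0, 1]; prob: black-box callable returning class probabilities."""
    x = img.copy()
    h, w = x.shape
    for _ in range(rounds):
        base = prob(x)[true_label]
        scores = []
        for _ in range(candidates):                       # random candidate pixel locations
            i, j = rng.integers(h), rng.integers(w)
            x_try = x.copy()
            x_try[i, j] = np.clip(x_try[i, j] + p, 0.0, 1.0)
            scores.append((base - prob(x_try)[true_label], (i, j)))
        scores.sort(reverse=True)                         # decreasing order of score
        for _, (i, j) in scores[:t]:                      # perturb the top-t pixels this round
            x[i, j] = np.clip(x[i, j] + p, 0.0, 1.0)
        top = np.argsort(prob(x))[::-1]
        if true_label not in top[:k]:                     # true label below the k-th place
            break
    return x

# Toy usage with a hypothetical 3-class probability oracle (placeholder only).
toy_prob = lambda z: np.array([1.0 - z.mean(), z.mean() / 2, z.mean() / 2])
adv = local_search_attack(np.zeros((8, 8)), true_label=0, prob=toy_prob)
print("mean perturbation:", adv.mean())
```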
Conclusion
The prior art made of record in the submitted PTO-892 Notice of References Cited and not relied upon is considered pertinent to applicant’s disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIGUEL A LOPEZ whose telephone number is (703)756-1241. The examiner can normally be reached 8:00AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jorge Ortiz-Criado, can be reached on 571-272-7624. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.A.L./ Examiner, Art Unit 2496
/JORGE L ORTIZ CRIADO/ Supervisory Patent Examiner, Art Unit 2496