DETAILED ACTION
This first non-final action is in response to applicants’ original filing on 07/22/2024. Claims 1-16 are currently pending and have been considered as follows.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Drawings
The drawings filed on 07/22/2024 are accepted.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/12/2025 has been placed in the application file, and the information referred to therein has been considered as to the merits.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-4, 7-9, and 12-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 2 recites the limitation “the open sources” in line 6. There is insufficient antecedent basis for this limitation in the claim.
Claim 3, which depends upon Claim 2, inherits the same lack of antecedent basis and is rejected under 35 U.S.C. 112(b).
Claim 4 recites the limitation “the highest attention score” in line 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites the limitation “the open sources” in line 6. There is insufficient antecedent basis for this limitation in the claim.
Claim 8, which depends upon Claim 7, inherits the same lack of antecedent basis and is rejected under 35 U.S.C. 112(b).
Claim 9 recites the limitation “the highest attention score” in lines 4-5. There is insufficient antecedent basis for this limitation in the claim.
Claim 12 recites the limitation “the open sources” in line 6. There is insufficient antecedent basis for this limitation in the claim.
Claim 13, which depends upon Claim 12, inherits the same lack of antecedent basis and is rejected under 35 U.S.C. 112(b).
Claim 14 recites the limitation “the highest attention score” in lines 4-5. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 6, 11, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over JI et al. (US 20220244953 A1, hereinafter Ji) in view of Tian et al. (“Generating Adversarial Examples of Source Code Classification Models via Q-Learning-Based Markov Decision Process”, December 2021, IEEE 21st International Conference on Software Quality, Reliability and Security, pp. 807-818, hereinafter Tian).
As to Claim 1:
Ji discloses an adversarial attack method (e.g. Ji code vulnerabilities to attacks [0012]; process [0048]; FIG. 5; method [0065]), comprising:
selecting vulnerable positions of an original source code (e.g. Ji “determine whether the target binary code includes one of those known vulnerabilities by comparing the target binary code to comparing binaries generated from each of the source code functions” [0017]; “system 300 compares a target binary code 310 to each source code 370 in a database 372 of source code functions with known vulnerabilities (received, for example, from the National Vulnerability Database). The system 300 can then be used to determine if binary code running on a device has any of known vulnerabilities included in the database 372 by comparing the binary code 310 to each source code 370 in the database 372” [0045]);
acquiring open source codes based on an open source code set (e.g. Ji “Open-source software projects allow code segments to be copied and pasted to new locations” [0003]; “as many open-source libraries are widely used, the vulnerabilities (e.g., those in OpenSSL and FFmpeg) are also inherited by closed-source applications (in binary code format)” [0005]; system receives source code with known vulnerabilities from the National Vulnerability Database [0045]; [0051]);
selecting dissimilar codes among the open source codes based on dissimilarity of the open source codes (e.g. Ji “In the context of code similarity computation, the loss function should be able to generate loss values based on the similarity (i.e., the loss value should be small if two similar codes have similar embedding) and the learned model must be able to detect the subtle difference in codes with different code similarity types. In other words, the model should be able to learn that type-1 is more similar than type-2 and type-3, type-2 more similar than type-3, and type-3 more similar than completely different code” [0082]);
acquiring an attention score for each of the dissimilar codes (e.g. Ji “Therefore, the similarity ranking can be represented as type-1>type-2>type-3>different” [0082]; “the system 300 can compare the target binary code 310 to a library of source code function 370 stored in the source code database 372, determine a similarity score 1000 for each source code function 370, and rank the source code functions 370 by their similarity scores 1000. Therefore, in the embodiments where the source code database 372 is a library of source code functions 370 having known vulnerabilities, the system 300 can be used to detect whether the target binary code 310 has a similarity score 1000 that indicates that the target binary code 310 is similar to one or more of source code functions 370 (and is therefore likely to include the known vulnerability or vulnerabilities)” [0087]);
However, Ji does not specifically disclose:
extracting a snippet from at least one of the dissimilar codes based on the attention scores; and
generating an adversarial source code based on the at least one snippet.
However, the analogous art Tian does disclose extracting a snippet from at least one of the dissimilar codes based on the attention scores and generating an adversarial source code based on the at least one snippet (e.g. Tian extract the code structure features and obtain the executable attack action sequence for new code snippets [Page 812 left column]; based on reward value Rt [Page 813]; “the structural features of the code can be extracted” [Page 814 right column]; Fig. 4 outputs are the adversarial examples generated after extracting selected code actions from attack testing [Page 812]; “QMDP is mainly implemented to generate adversarial examples for structural features of the source code” [Page 811 left column]; the selected action is executed to perform an adversarial attack and generating a new adversarial example [Page 813 left column - D. Attack Testing]). Ji and Tian are analogous art because they are from the same field of endeavor in source code analysis.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Ji and Tian before him or her, to modify the disclosure of Ji with the teachings of Tian to include extracting a snippet from at least one of the dissimilar codes based on the attention scores and generating an adversarial source code based on the at least one snippet as claimed. The suggestion/motivation for doing so would have been to perform adversarial attacks on source code structural features effectively as well as ensure that the functionality of the code remains consistent (Tian [Page 808 left column]). Therefore, it would have been obvious to combine Ji and Tian to obtain the invention as specified in the instant claim(s).
As to Claim 6:
Ji discloses an adversarial source code generation apparatus (e.g. Ji code vulnerabilities to attacks [0012]; “binary code similarity detection system 300 may be realized using any hardware computing device” [0089]), comprising:
a processor (e.g. Ji hardware computing device executing software instructions [0089]); and
a memory storing one or more instructions executed by the processor (e.g. Ji non-transitory computer readable storage media storing software instructions [0089]), wherein the one or more instructions comprises:
selecting vulnerable positions of an original source code (e.g. Ji “determine whether the target binary code includes one of those known vulnerabilities by comparing the target binary code to comparing binaries generated from each of the source code functions” [0017]; “system 300 compares a target binary code 310 to each source code 370 in a database 372 of source code functions with known vulnerabilities (received, for example, from the National Vulnerability Database). The system 300 can then be used to determine if binary code running on a device has any of known vulnerabilities included in the database 372 by comparing the binary code 310 to each source code 370 in the database 372” [0045]);
acquiring open source codes based on an open source code set (e.g. Ji “Open-source software projects allow code segments to be copied and pasted to new locations” [0003]; “as many open-source libraries are widely used, the vulnerabilities (e.g., those in OpenSSL and FFmpeg) are also inherited by closed-source applications (in binary code format)” [0005]; system receives source code with known vulnerabilities from the National Vulnerability Database [0045]; [0051]);
selecting dissimilar codes among the open source codes based on dissimilarity of the open source codes (e.g. Ji “In the context of code similarity computation, the loss function should be able to generate loss values based on the similarity (i.e., the loss value should be small if two similar codes have similar embedding) and the learned model must be able to detect the subtle difference in codes with different code similarity types. In other words, the model should be able to learn that type-1 is more similar than type-2 and type-3, type-2 more similar than type-3, and type-3 more similar than completely different code” [0082]);
acquiring an attention score for each of the dissimilar codes (e.g. Ji “Therefore, the similarity ranking can be represented as type-1>type-2>type-3>different” [0082]; “the system 300 can compare the target binary code 310 to a library of source code function 370 stored in the source code database 372, determine a similarity score 1000 for each source code function 370, and rank the source code functions 370 by their similarity scores 1000. Therefore, in the embodiments where the source code database 372 is a library of source code functions 370 having known vulnerabilities, the system 300 can be used to detect whether the target binary code 310 has a similarity score 1000 that indicates that the target binary code 310 is similar to one or more of source code functions 370 (and is therefore likely to include the known vulnerability or vulnerabilities)” [0087]);
However, Ji does not specifically disclose:
extracting a snippet from at least one of the dissimilar codes based on the attention scores; and
generating an adversarial source code based on the at least one snippet.
However, the analogous art Tian does disclose extracting a snippet from at least one of the dissimilar codes based on the attention scores and generating an adversarial source code based on the at least one snippet (e.g. Tian extract the code structure features and obtain the executable attack action sequence for new code snippets [Page 812 left column]; based on reward value Rt [Page 813]; “the structural features of the code can be extracted” [Page 814 right column]; Fig. 4 outputs are the adversarial examples generated after extracting selected code actions from attack testing [Page 812]; “QMDP is mainly implemented to generate adversarial examples for structural features of the source code” [Page 811 left column]; the selected action is executed to perform an adversarial attack and generating a new adversarial example [Page 813 left column - D. Attack Testing]). Ji and Tian are analogous art because they are from the same field of endeavor in source code analysis.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Ji and Tian before him or her, to modify the disclosure of Ji with the teachings of Tian to include extracting a snippet from at least one of the dissimilar codes based on the attention scores and generating an adversarial source code based on the at least one snippet as claimed. The suggestion/motivation for doing so would have been to perform adversarial attacks on source code structural features effectively as well as ensure that the functionality of the code remains consistent (Tian [Page 808 left column]). Therefore, it would have been obvious to combine Ji and Tian to obtain the invention as specified in the instant claim(s).
As to Claim 11:
Ji discloses an adversarial attack performance analysis method (e.g. Ji code vulnerabilities to attacks [0012]; analysis process [0048]; FIG. 5; method [0065]), comprising:
receiving an original source code from an original source code provision apparatus (e.g. Ji comparing target binary code and a source code [0045]-[0047]);
selecting vulnerable positions of an original source code (e.g. Ji “determine whether the target binary code includes one of those known vulnerabilities by comparing the target binary code to comparing binaries generated from each of the source code functions” [0017]; “system 300 compares a target binary code 310 to each source code 370 in a database 372 of source code functions with known vulnerabilities (received, for example, from the National Vulnerability Database). The system 300 can then be used to determine if binary code running on a device has any of known vulnerabilities included in the database 372 by comparing the binary code 310 to each source code 370 in the database 372” [0045]);
acquiring open source codes based on an open source code set (e.g. Ji “Open-source software projects allow code segments to be copied and pasted to new locations” [0003]; “as many open-source libraries are widely used, the vulnerabilities (e.g., those in OpenSSL and FFmpeg) are also inherited by closed-source applications (in binary code format)” [0005]; system receives source code with known vulnerabilities from the National Vulnerability Database [0045]; [0051]);
selecting dissimilar codes among the open source codes based on dissimilarity of the open source codes (e.g. Ji “In the context of code similarity computation, the loss function should be able to generate loss values based on the similarity (i.e., the loss value should be small if two similar codes have similar embedding) and the learned model must be able to detect the subtle difference in codes with different code similarity types. In other words, the model should be able to learn that type-1 is more similar than type-2 and type-3, type-2 more similar than type-3, and type-3 more similar than completely different code” [0082]);
acquiring an attention score for each of the dissimilar codes (e.g. Ji “Therefore, the similarity ranking can be represented as type-1>type-2>type-3>different” [0082]; “the system 300 can compare the target binary code 310 to a library of source code function 370 stored in the source code database 372, determine a similarity score 1000 for each source code function 370, and rank the source code functions 370 by their similarity scores 1000. Therefore, in the embodiments where the source code database 372 is a library of source code functions 370 having known vulnerabilities, the system 300 can be used to detect whether the target binary code 310 has a similarity score 1000 that indicates that the target binary code 310 is similar to one or more of source code functions 370 (and is therefore likely to include the known vulnerability or vulnerabilities)” [0087]);
However, Ji does not specifically disclose:
receiving an adversarial source code from an adversarial source code generation apparatus;
determining whether an artificial intelligence model has been attacked by the adversarial source code based on the original source code and the adversarial source code;
wherein the adversarial source code is generated by: extracting a snippet from at least one of the dissimilar codes based on the attention scores; and generating an adversarial source code based on the at least one snippet.
However, the analogous art Tian does disclose receiving an adversarial source code from an adversarial source code generation apparatus (e.g. Tian receive source code examples and generate adversarial examples, “we extract the actions that can be performed with the adversarial transformation based on the input set of examples” [Pages 809-812]), determining whether an artificial intelligence model has been attacked by the adversarial source code based on the original source code and the adversarial source code (e.g. Tian measure effectiveness of QMDP attacks with generated adversarial code examples [Pages 813-815] on Deep Learning (DL) models with source code original inputs [Page 807 - Abstract; I. Introduction]), wherein the adversarial source code is generated by: extracting a snippet from at least one of the dissimilar codes based on the attention scores and generating an adversarial source code based on the at least one snippet (e.g. Tian extract the code structure features and obtain the executable attack action sequence for new code snippets [Page 812 left column]; based on reward value Rt [Page 813]; “the structural features of the code can be extracted” [Page 814 right column]; Fig. 4 outputs are the adversarial examples generated after extracting selected code actions from attack testing [Page 812]; “QMDP is mainly implemented to generate adversarial examples for structural features of the source code” [Page 811 left column]; the selected action is executed to perform an adversarial attack and generating a new adversarial example [Page 813 left column - D. Attack Testing]). Ji and Tian are analogous art because they are from the same field of endeavor in source code analysis.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Ji and Tian before him or her, to modify the disclosure of Ji with the teachings of Tian to include extracting a snippet from at least one of the dissimilar codes based on the attention scores and generating an adversarial source code based on the at least one snippet as claimed. The suggestion/motivation for doing so would have been to perform adversarial attacks on source code structural features effectively as well as ensure that the functionality of the code remains consistent (Tian [Page 808 left column]). Therefore, it would have been obvious to combine Ji and Tian to obtain the invention as specified in the instant claim(s).
As to Claim 16:
Ji in view of Tian discloses a non-transitory computer-readable recording medium recording a program for executing the method of claim 1 on a computer (e.g. Ji non-transitory computer readable storage media storing software instructions for realizing the method [0089]).
Allowable Subject Matter
Claims 5, 10, and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicants’ disclosure.
Canedo et al. (US 20220276628 A1)
SUZUKI (US 20220083670 A1)
GHARIBI et al. (US 20220029972 A1)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kenneth Chang whose telephone number is (571)270-7530. The examiner can normally be reached Monday - Friday 9:30am-5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Taghi Arani can be reached at 571-272-3787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENNETH W CHANG/Primary Examiner, Art Unit 2438
02.28.2026