Prosecution Insights
Last updated: April 19, 2026
Application No. 18/400,997

SYSTEM AND METHOD FOR THREAT DETECTION AND PREVENTION

Status: Final Rejection (§103)
Filed: Dec 29, 2023
Examiner: BINCZAK, BRANDON MICHAEL
Art Unit: 2437
Tech Center: 2400 (Computer Networks)
Assignee: American Express Travel Related Services Company, Inc.
OA Round: 2 (Final)
Grant Probability: 38% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability with Interview: 74%

Examiner Intelligence

Career Allow Rate: 38% (23 granted / 60 resolved; -19.7 pts vs Tech Center average)
Interview Lift: +36.1 pts allowance rate among resolved cases with an interview
Typical Timeline: 2y 11m average prosecution
Currently Pending: 34 applications
Career History: 94 total applications across all art units
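The headline figures above are simple arithmetic over the examiner's resolved cases. A minimal Python sketch, using the values shown on this page (the Tech Center average is implied by the 38% rate and the -19.7 pt delta; variable names are illustrative):

```python
# Reproducing the examiner summary statistics shown above.
# Inputs are taken from the dashboard; tc_avg_allow is the implied TC average.

granted = 23          # applications allowed by this examiner
resolved = 60         # total resolved cases (allowed + abandoned)
tc_avg_allow = 0.58   # implied: 38% allow rate minus -19.7 pts delta

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")                      # 38%
print(f"Delta vs TC avg: {(allow_rate - tc_avg_allow) * 100:+.1f} pts")

# Interview lift: the dashboard reports +36.1 points for resolved
# cases with an interview, giving the adjusted grant probability.
interview_lift = 0.361
print(f"Grant probability with interview: {allow_rate + interview_lift:.0%}")  # 74%
```

This also shows where the "74% With Interview" figure comes from: the 38% baseline plus the +36.1 pt interview lift.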

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§112: 26.0% (-14.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 60 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see pages 8-10, filed 12/17/2025, with respect to the rejection of claims 1-20 under 35 USC 101 have been fully considered and are persuasive. This rejection has been withdrawn.

Applicant's arguments, see pages 10 and 11, filed 12/17/2025, with respect to the rejection of claims 1-20 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of STUSSI et al. (Doc ID US 20240419793 A1) and CARSON (Doc ID US 20190251251 A1).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 8, 10, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over STUSSI et al. (Doc ID US 20240419793 A1), and further in view of CARSON (Doc ID US 20190251251 A1).

Regarding claim 1: Examiner notes that the methods described in the specification and claims regarding calculating the similarity of input code to "malicious code" and "protection code" are functionally identical.
That is, the claimed invention is, in essence, trained to recognize code similar to whatever code on which the machine learning (ML) models are trained. As such, when it comes to calculating similarity of input code to malicious/protection code, any prior art which can determine the similarity of input code to code which was provided as training data may read on the claims.

STUSSI teaches: A computer implemented method for software threat analysis, comprising:

receiving, by a threat management system comprising at least one processor and memory, source code of an application under test ([0112] "… the processor of malicious software detection system 305 receives a software package including software components.");

storing, in the memory, (i) training data comprising malicious-code features, threat-protection-code features, and vector embeddings derived from threat-intelligence feeds ([0098] "… to train a machine learning model ..., a large dataset of source code and community data is required. The large dataset includes both benign and malicious software packages. … and the community data may include ... online forums, mailing lists, social media platforms, marketplaces etc. ... this data is preprocessed ..., converting the data into a format that can be fed into a machine learning model ..."), and

training, by the processor, a first machine learning model using the training data to generate malicious code embeddings and to identify malicious code patterns within the application under test ([0098] & [0129] "… patterns in malicious software packages are identified and machine learning models are used to detect these patterns.
Test software is used to generate embeddings of the source code of the test software.");

training, by the processor, a second machine learning model using the training data to generate protection code embeddings and to identify threat protection code patterns within the application under test ([0098] & [0129] "… patterns in malicious software packages are identified and machine learning models are used to detect these patterns. Test software is used to generate embeddings of the source code of the test software.");

computing, by the processor, a first cosine similarity between a vector embedding of the application under test and malicious code embeddings stored in a malicious code vector database to determine a first likelihood that the application under test includes malicious code ([0113] "... malicious software detection system 305 generates a malicious probability for each of the identified one or more software components ...", [0129] "… (Test Files) are then compared against representative embeddings of these clusters to classify the test files as either malicious files or benign files.", and [0132] "… Cosine similarity was used for computing similarities.");

computing, by the processor, a second cosine similarity between a vector embedding of the application under test and protection code embeddings stored in a protection code vector database to determine a second likelihood that the application under test includes threat protection code ([0113] "...
malicious software detection system 305 generates a malicious probability for each of the identified one or more software components ...", [0129] "… (Test Files) are then compared against representative embeddings of these clusters to classify the test files as either malicious files or benign files.", and [0132] "… Cosine similarity was used for computing similarities.");

CARSON teaches the following limitation(s) not taught by STUSSI:

(ii) a plurality of promotion rules defining threshold similarity values and confidence levels for code promotion decisions ([0072] "… The policy may specify which countermeasures or actions to perform depending on the one or more scores calculated by the learning engine.");

applying, by the processor, the plurality of promotion rules to the first likelihood and the second likelihood to determine whether the application under test satisfies a promotion criterion ([0072] "Using the scores indicating the likelihood that the executable contains malware, the rule engine of the malware detection system may perform one or more countermeasures based on a policy."); and

automatically promoting, blocking, or generating a remediation alert for the application under test based on the applied promotion rules, wherein the remediation alert identifies code features recommended to mitigate detected skimming threats ([0072] "… The policy may for example specify automatically blocking an operation of the executable when the one or more scores indicate high likelihood that the executable contains malware.").

Training ML models to calculate similarities between inputted code and the code on which they were trained is a known technique in the art, as demonstrated by STUSSI. Further, applying rules to the results of code similarities, including blocking an application determined likely to contain malicious code, is a known technique in the art, as demonstrated by CARSON.
It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to modify the code similarity calculation of STUSSI with the code similarity policy assessment of CARSON with the motivation to perform actions based on a similarity detected between a given code sample and training data.

Regarding claim 3: The combination of STUSSI and CARSON teaches: The computer implemented method of claim 1, further comprising crawling one or more threat intelligence feeds to obtain the training data (STUSSI [0098] "… the community data may include ... online forums, mailing lists, social media platforms, marketplaces etc. ... this data is preprocessed ...").

Regarding claims 8 and 10: These claims are rejected with the same justification, mutatis mutandis, as their counterpart claims 1 and 3 above.

Regarding claim 15: STUSSI teaches: A non-transitory computer-readable device having instructions stored thereon that, when executed by at least one computing device, cause the at least one computing device to perform operations comprising ([0010] "... one or more processors and a memory … storing therein a set of instructions which … causes the one or more processors to detect malicious software packages …"). The remainder of this claim's limitations are rejected with the same prior art mapping and justification, mutatis mutandis, as its counterpart claims 1 and 8.

Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over STUSSI et al. (Doc ID US 20240419793 A1) and CARSON (Doc ID US 20190251251 A1) as applied to claims 1, 8, and 15 above, and further in view of HELYAR et al. (Doc ID US 20240152624 A1).
Regarding claim 2: The combination of STUSSI and CARSON teaches: The non-transitory computer-readable device of claim 15. HELYAR teaches the following limitation(s) not taught by the combination of STUSSI and CARSON: wherein the first machine learning model and the second machine learning model are large language models (LLMs) (HELYAR [0023] "… solutions described herein leverage large language models to implement an artificial intelligence … code vulnerability detection tool …").

Utilizing large language models (LLMs) in malicious code detection is a known technique in the art, as demonstrated by HELYAR. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the malicious code similarity detection of STUSSI and CARSON with the LLM of HELYAR with the motivation to make use of the flexibility of an LLM, where the tokenized inputs used by LLMs can be used to calculate similarities.

Regarding claims 9 and 16: These claims are rejected with the same justification, mutatis mutandis, as their counterpart claim 2 above.

Claims 4, 5, 11, 12, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over STUSSI et al. (Doc ID US 20240419793 A1) and CARSON (Doc ID US 20190251251 A1) as applied to claims 1, 8, and 15 above, and further in view of HINES et al. (Doc ID US 20210120013 A1).

Regarding claim 4: The combination of STUSSI and CARSON teaches: The computer implemented method of claim 1. HINES teaches the following limitation(s) not taught by the combination of STUSSI and CARSON: wherein the training data includes connections to rogue domains, digital skimmer scripts, or indicators of compromised payloads ([0004] "… examples of IPRIDs include … Internet Protocol (IP) addresses, domain names …" and [0305] "... the convolutional neural network 404 is trained using ...
at least three of the following aggregate features 308: a count 736 of distinct submissions, a count 740 of distinct final hostnames, a count 744 of submissions of a particular IPRID, ... a count 752 of redirects to a particular IPRID ..."). Examiner notes that the term "rogue domain" is interpreted as a domain taking unauthorized redirects.

Training a machine learning model to recognize DNS-based attacks such as redirects to a domain is a known technique in the art, as demonstrated by HINES. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the malicious code similarity detection of STUSSI and CARSON with the training data of HINES with the motivation to provide the model with specific training data to be able to recognize a targeted set of attacks.

Regarding claim 5: The combination of STUSSI and CARSON teaches: The computer implemented method of claim 1. HINES teaches the following limitation(s) not taught by the combination of STUSSI and CARSON: wherein the training data is stored as a vector database including rogue domain vector embeddings, digital skimmer vector embeddings, or indicators of compromise vector embeddings ([0315] "... training data 304 that is characterized in at least one of the following ways: ... the training data is organized in vector space planes 332 that correspond to data sources, the training data is organized 1410 in vector space planes 332 that correspond to data meanings ...").

Training a machine learning model to recognize DNS-based attacks such as redirects to a domain is a known technique in the art, as demonstrated by HINES. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the malicious code similarity detection of STUSSI and CARSON with the training data of HINES with the motivation to provide the model with specific training data to be able to recognize a targeted set of attacks.
Regarding claims 11, 12, 17, and 18: These claims are rejected with the same justification, mutatis mutandis, as their counterpart claims 4 and 5 above.

Claims 6, 7, 13, 14, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over STUSSI et al. (Doc ID US 20240419793 A1) and CARSON (Doc ID US 20190251251 A1) as applied to claims 1, 8, and 15 above, and further in view of DHOKIA et al. (Doc ID US 20220247790 A1).

Regarding claim 6: The combination of STUSSI and CARSON teaches: The computer implemented method of claim 1. DHOKIA teaches the following limitation(s) not taught by the combination of STUSSI and CARSON: wherein the training data includes content security policies, sub-resource integrity hashes, or HTTP security headers ([0056] "… The training data 830 may include one or more of … access policy statistics, model access control policies …").

Training a machine learning model on security policies is a known technique in the art, as demonstrated by DHOKIA. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the malicious code similarity detection of STUSSI and CARSON with the training data of DHOKIA with the motivation to provide the model with specific training data to be able to recognize security policies in a system for which it is performing code scanning.
Regarding claim 7: The combination of STUSSI and CARSON teaches: The computer implemented method of claim 1. DHOKIA teaches the following limitation(s) not taught by the combination of STUSSI and CARSON: wherein the training data is stored as a vector database including content security policy vector embeddings, sub-resource integrity hash vector embeddings, or HTTP security header vector embeddings ([0056] "… The training data 830 may include one or more of … access policy statistics, model access control policies …" and [0057] "Selector module 850 selects training vector 860 from the training data 830.").

Training a machine learning model on security policies is a known technique in the art, as demonstrated by DHOKIA. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the malicious code similarity detection of STUSSI and CARSON with the training data of DHOKIA with the motivation to provide the model with specific training data to be able to recognize security policies in a system for which it is performing code scanning.

Regarding claims 13, 14, 19, and 20: These claims are rejected with the same justification, mutatis mutandis, as their counterpart claims 6 and 7 above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON BINCZAK, whose telephone number is (703) 756-4528. The examiner can normally be reached M-F 0800-1700. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexander Lagor, can be reached at (571) 270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BB/
Examiner, Art Unit 2437

/BENJAMIN E LANIER/
Primary Examiner, Art Unit 2437
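The claim 1 mapping above describes a pipeline that computes cosine similarities between an embedding of the application under test and stored malicious/protection embeddings, then applies threshold-based promotion rules to the two likelihoods. A minimal Python sketch of that flow, where all function names, thresholds, and vectors are hypothetical (not taken from the application or the cited references):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def decide(code_vec, malicious_vecs, protection_vecs,
           block_threshold=0.8, promote_threshold=0.7):
    """Apply illustrative promotion rules to the two similarity scores."""
    # First likelihood: closest match against the malicious-code embeddings.
    p_malicious = max(cosine_similarity(code_vec, m) for m in malicious_vecs)
    # Second likelihood: closest match against the protection-code embeddings.
    p_protected = max(cosine_similarity(code_vec, p) for p in protection_vecs)
    if p_malicious >= block_threshold:
        return "block"
    if p_protected >= promote_threshold:
        return "promote"
    return "remediation_alert"

# Example: an embedding near a known-malicious vector triggers a block.
print(decide([1.0, 0.1], [[0.9, 0.2]], [[0.0, 1.0]]))  # prints "block"
```

This is only a sketch of the technique the examiner maps to STUSSI (similarity scoring) and CARSON (policy-driven countermeasures); a real system would use learned embeddings and vector databases rather than hand-set vectors and thresholds.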

Prosecution Timeline

Dec 29, 2023: Application Filed
Sep 12, 2025: Non-Final Rejection (§103)
Nov 05, 2025: Applicant Interview (Telephonic)
Nov 05, 2025: Examiner Interview Summary
Dec 17, 2025: Response Filed
Jan 07, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12470534: PARTIAL POOL CREDENTIALLING AUTHENTICATION SYSTEM (granted Nov 11, 2025; 2y 5m to grant)
Patent 12452224: IMAGE DISPLAY DEVICE AND SYSTEM, AND OPERATION METHOD FOR SAME (granted Oct 21, 2025; 2y 5m to grant)
Patent 12425867: REGISTRATION AND SECURITY ENHANCEMENTS FOR A WTRU WITH MULTIPLE USIMS (granted Sep 23, 2025; 2y 5m to grant)
Patent 12417283: IOT ADAPTIVE THREAT PREVENTION (granted Sep 16, 2025; 2y 5m to grant)
Patent 12411919: Shared Assistant Profiles Verified Via Speaker Identification (granted Sep 09, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 38%
With Interview: 74% (+36.1 pts)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 60 resolved cases by this examiner; grant probability derived from the career allow rate.
