Prosecution Insights
Last updated: April 19, 2026
Application No. 18/628,985

Threat Model Generation Systems

Status: Final Rejection (§103)
Filed: Apr 08, 2024
Examiner: WON, MICHAEL YOUNG
Art Unit: 2443
Tech Center: 2400 — Computer Networks
Assignee: Capital One Services LLC
OA Round: 2 (Final)

Grant Probability: 80% (Favorable) • Expected OA Rounds: 3-4 • Expected Time to Grant: 3y 0m • Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (above average; 666 granted / 835 resolved; +21.8% vs TC avg)
Interview Lift: +28.7% (strong), measured across resolved cases with interview
Typical Timeline: 3y 0m avg prosecution; 28 applications currently pending
Career History: 863 total applications across all art units

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 32.9% (-7.1% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)

Tech Center averages are estimates • Based on career data from 835 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the Amendment filed December 10, 2025.

3. Claims 1, 8, and 15 have been amended, and claims 4, 11, and 18 have been canceled.

4. Claims 1-3, 5-10, 12-17, and 19-20 have been examined and are pending with this action.

Response to Arguments

5. Applicant’s arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.

Applicant’s arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Murphy et al. (US 2024/0291853 A1) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Cecchetti et al. (US 2025/0258925 A1), herein referenced as Cecchetti. Cecchetti explicitly discloses, teaches, or at the very least suggests the missing limitation of the test being a penetration test (see rejections below). For these reasons and the rejections set forth below, claims 1-3, 5-10, 12-17, and 19-20 remain rejected and pending.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-3, 5-10, 12-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Murphy et al. (US 2024/0291853 A1) in view of Cecchetti et al. (US 2025/0258925 A1).

INDEPENDENT:

As per claim 1, Murphy teaches a method comprising: sending, by a computing device and to a large language model (LLM), one or more software modules associated with a computing system (see Murphy, [0006]: “The output formatter subsystem may be configured to utilize a large language model to generate the summarized human-readable report for the initial notification”; [0346]: “When establishing 1800 connectivity with a plurality of security-relevant subsystems (e.g., security-relevant subsystems 226), threat mitigation process 10 may utilize at least one application program interface (e.g., API Gateway 224) to access at least one of the plurality of security-relevant subsystems.”; [0390]: “Threat mitigation process 10 may be configured to harness the power of Generative AI and Large Language Models (LLM).
Generative AI models (e.g., AI/ML process 56), as part of the broader artificial intelligence and machine learning landscape, are beginning to play a crucial role in enhancing network threat detection systems. Unlike traditional, discriminative models that classify input data into predefined categories (e.g., malicious or benign), generative models can learn to generate new data samples that are similar to the training data.”; and [0442]: “These formatting scripts (e.g., formatting script 304) may help integrate large language models into broader applications or workflows, ensuring that the interaction between human users and the AI is as seamless and effective as possible. Formatting scripts (e.g., formatting script 304) may be implemented in various programming languages, depending on the environment in which the large language model is being deployed (e.g., Python scripts for a server-side application or JavaScript for client-side processing in a web application)”); inputting, to the LLM, a first prompt requesting information for generating a threat model of the computing system, wherein the threat model is configured to identify one or more threats to the computing system (see Murphy, [0437]: “For example, in a web application that uses a large language model to generate content based on user inputs, a formatting script might”; [0438]: “Preprocess User Inputs: Clean and structure user queries into a format that the model can more effectively understand and process. This could involve correcting typos, removing unnecessary punctuation, or structuring the input into a more coherent prompt.”; [0439]: “Format Model Prompts: Tailor prompts to fit specific use cases or to elicit more accurate responses from the model. 
This might include adding specific instructions or context to the prompt that guides the model in generating the desired output.”; and [0444]: “These LLMs can perform various natural language processing tasks, such as answering questions, generating text, translating languages, and more. LLMs work by processing input text, analyzing it, and generating appropriate responses based on learned patterns and context.”); receiving, from the LLM, a first output based on the first prompt, wherein the first output comprises: first information for a first version of the threat model (see Murphy, [0007]: “an executor subsystem configured to iteratively process the mitigation plan using a generative AI model to generate an output, wherein the executor subsystem is configured to utilize several loops and/or nested loops to generate the output; and an output formatter subsystem configured to format the output and generate a summarized human-readable report for the initial notification, wherein the summarized human-readable report defines recommended next steps and/or disclaimers.”; [0390]: “Unlike traditional, discriminative models that classify input data into predefined categories (e.g., malicious or benign), generative models can learn to generate new data samples that are similar to the training data.”; [0437]: “For certain applications, such as code generation or creating structured data from unstructured text, the script might include rules or templates to format the output in a specific syntax or schema.”; [0439]: “Format Model Prompts: Tailor prompts to fit specific use cases or to elicit more accurate responses from the model. 
This might include adding specific instructions or context to the prompt that guides the model in generating the desired output”; and [0441]: “Handle Special Formatting: For certain applications, such as code generation or creating structured data from unstructured text, the script might include rules or templates to format the output in a specific syntax or schema.”); a test script for the computing system (see Murphy, [0218]: “threat mitigation process 10 may be configured to allow for the manual generation of testing routine 272. For example, threat mitigation process 10 may define 1300 training routine 272 for a specific attack (e.g., a Denial of Services attack) of computing platform 60. Specifically, threat mitigation process 10 may generate 1302 a simulation of the specific attack (e.g., a Denial of Services attack) by executing training routine 272 within a controlled test environment, an example of which may include but is not limited to virtual machine 274 executed on a computing device (e.g., computing device 12).”; [0219]: “When generating 1302 a simulation of the specific attack (e.g., a Denial of Services attack) by executing training routine 272 within the controlled test environment (e.g., virtual machine 274), threat mitigation process 10 may render 1304 the simulation of the specific attack (e.g., a Denial of Services attack) on the controlled test environment (e.g., virtual machine 274)”; [0222]: “Referring also to FIG. 26, threat mitigation process 10 may be configured to allow for the automatic generation of testing routine 272. 
For example, threat mitigation process 10 may utilize 1350 artificial intelligence/machine learning to define training routine 272 for a specific attack (e.g., a Denial of Services attack) of computing platform 60.”; [0226]: “when generating 1302 a simulation of the specific attack (e.g., a Denial of Services attack) by executing training routine 272 within the controlled test environment (e.g., virtual machine 274), threat mitigation process 10 may render 1304 the simulation of the specific attack (e.g., a Denial of Services attack) on the controlled test environment (e.g., virtual machine 274).”; and [0656]: “Model Validation and Testing: Before deployment, models are validated and tested to ensure they accurately detect intrusions while minimizing false positives and false negatives. This step might involve using separate datasets not seen by the model during the training phase to evaluate performance”); and a request for data to improve the first version of the threat model, wherein the requested data comprises a result of the test (see Murphy, [0099]: “Once defined, the above-described process of auto-generating messages (this time using revised probabilistic model 100′) may be repeated and this newly-generated content (e.g., generated information 58″) may be compared to information 58 to determine if e.g., revised probabilistic model 100′ is a good explanation of the content. If revised probabilistic model 100′ is not a good explanation of the content, the above-described process may be repeated until a proper probabilistic model is defined.”; [0186]: “Threat mitigation process 10 may receive 1006 plurality of result sets 266 from the plurality of security-relevant subsystems. 
Threat mitigation process 10 may then combine 1008 plurality of result sets 266 to form unified query result 268.”; [0512]: Threat mitigation process 10 may prompt 2108 a user (e.g., analyst 256) to provide feedback concerning the (above-illustrated) summarized human-readable report (e.g., summarized human-readable report 306). And (if provided), threat mitigation process 10 may receive 2110 feedback concerning the summarized human-readable report (e.g., summarized human-readable report 306) from a user (e.g., analyst 256). For example, the user (e.g., analyst 256) may be asked to give “thumbs-up/thumbs-down” feedback concerning the quality of the (above-illustrated) summarized human-readable report (e.g., summarized human-readable report 306). In the event that the feedback provided is e.g., marginal or poor, threat mitigation process 10 may ask the user (e.g., analyst 256) to provide additional commentary, examples of which may include but are not limited to: “the summary is too long”, “the summary is too short”, “I would appreciate a more detailed roadmap for remediation”, “more concise language would be helpful”, etc. And (if feedback is provided), threat mitigation process 10 may utilize 2112 the feedback to revise the above-described formatting script (e.g., formatting script 304) so that the (above-illustrated) summarized human-readable report (e.g., summarized human-readable report 306) may be tailored based upon such feedback; [0660]: “Monitoring and Updating: Cyber threats are constantly evolving; therefore, AI models require continuous monitoring and retraining to stay effective. This includes updating models with new data reflecting the latest threat patterns and re-deploying them. 
The repository (e.g., model repository 318) must support these iterative cycles of retraining and updating”); generating, by the computing device based on the first information, the first version of the threat model (see Murphy, [0072]: “The manner in which probabilistic model 100 may be automatically-generated by AI/ML process 56”; [0096]: “And using the probabilistic modeling technique described above, AI/ML process 56 may define a first version of the probabilistic model (e.g., probabilistic model 100) based, at least in part, upon pertinent content found within information 58.”; and [0098]: “Accordingly and when AI/ML process 56 compares the first version of the probabilistic model (e.g., probabilistic model 100) to information 58 to determine if the first version of the probabilistic model (e.g., probabilistic model 100) is a good explanation of the content, AI/ML process 56 may generate a very large quantity of messages e.g., by auto-generating messages using the above-described probabilities, the above-described nodes & node types, and the words defined in the above-described lists (e.g., lists 128, 132, 142, 146, 156, 160, 170, 174), thus resulting in generated information 58′. Generated information 58′ may then be compared to information 58 to determine if the first version of the probabilistic model (e.g., probabilistic model 100) is a good explanation of the content. For example, if generated information 58′ exceeds a threshold level of similarity to information 58, the first version of the probabilistic model (e.g., probabilistic model 100) may be deemed a good explanation of the content. 
Conversely, if generated information 58′ does not exceed a threshold level of similarity to information 58, the first version of the probabilistic model (e.g., probabilistic model 100) may be deemed not a good explanation of the content”); receiving, by the computing device based on executing the test script, a result of the test (see Murphy, [0227]: “Threat mitigation process 10 may allow 1306 a trainee (e.g., trainee 276) to view the simulation of the specific attack (e.g., a Denial of Services attack) and may allow 1308 the trainee (e.g., trainee 276) to provide a trainee response (e.g., trainee response 278) to the simulation of the specific attack (e.g., a Denial of Services attack). For example, threat mitigation process 10 may execute training routine 272, which trainee 276 may “watch” and provide trainee response 278.”; and [0098]: “Accordingly and when AI/ML process 56 compares the first version of the probabilistic model (e.g., probabilistic model 100) to information 58 to determine if the first version of the probabilistic model (e.g., probabilistic model 100) is a good explanation of the content, AI/ML process 56 may generate a very large quantity of messages e.g., by auto-generating messages using the above-described probabilities, the above-described nodes & node types, and the words defined in the above-described lists (e.g., lists 128, 132, 142, 146, 156, 160, 170, 174), thus resulting in generated information 58′. Generated information 58′ may then be compared to information 58 to determine if the first version of the probabilistic model (e.g., probabilistic model 100) is a good explanation of the content. For example, if generated information 58′ exceeds a threshold level of similarity to information 58, the first version of the probabilistic model (e.g., probabilistic model 100) may be deemed a good explanation of the content. 
Conversely, if generated information 58′ does not exceed a threshold level of similarity to information 58, the first version of the probabilistic model (e.g., probabilistic model 100) may be deemed not a good explanation of the content.”); inputting, to the LLM: an indication of the first output (see Murphy, [0099]: “If the first version of the probabilistic model (e.g., probabilistic model 100) is not a good explanation of the content, AI/ML process 56 may define a revised version of the probabilistic model (e.g., revised probabilistic model 100′)”; [0215]: “When generating 1208 revised security-relevant information 1250′ (that includes the above-described automation information), threat mitigation process 10 may combine 1210 the automation information (that results from selecting “block IP” or “search”) and initial security-relevant information 1250 to generate and render 1212 revised security-relevant information 1250′.”; and [0228]: “Threat mitigation process 10 may utilize 1356 artificial intelligence/machine learning to revise training routine 272 for the specific attack (e.g., a Denial of Services attack) of computing platform 60 based, at least in part, upon trainee response 278.”); the result of the test (see Murphy, [0185]: “Threat mitigation process 10 may effectuate 1004 at least a portion of unified query 262 on each of the plurality of security-relevant subsystems to generate plurality of result sets 266.”; and [0215]: “When generating 1208 revised security-relevant information 1250′ (that includes the above-described automation information), threat mitigation process 10 may combine 1210 the automation information (that results from selecting “block IP” or “search”) and initial security-relevant information 1250 to generate and render 1212 revised security-relevant information 1250′.”); and a second prompt requesting second information for generating a second version of the threat model (see Murphy, [0099]: “When defining revised probabilistic model 
100′, AI/ML process 56 may e.g., adjust weighting, adjust probabilities, adjust node counts, adjust node types, and/or adjust branch counts to define the revised version of the probabilistic model (e.g., revised probabilistic model 100′). Once defined, the above-described process of auto-generating messages (this time using revised probabilistic model 100′) may be repeated and this newly-generated content (e.g., generated information 58″) may be compared to information 58 to determine if e.g., revised probabilistic model 100′ is a good explanation of the content.”; and [0214]: “For this particular example, the third-party (e.g., the user/owner/operator of computing platform 60) may choose two different options to manipulate initial security-relevant information 1250, namely: “block ip” or “search”, both of which will result in threat mitigation process 10 generating 1208 revised security-relevant information 1250′ (that includes the above-described automation information).”); receiving, from the LLM, the second information generated based on the results of the test (see Murphy, Abstract: “the security event within the computing platform; an executor subsystem configured to iteratively process the mitigation plan using a generative AI model to generate an output”; [0007]: “an executor subsystem configured to iteratively process the mitigation plan using a generative AI model to generate an output, wherein the executor subsystem is configured to utilize several loops and/or nested loops to generate the output; and an output formatter subsystem configured to format the output and generate a summarized human-readable report for the initial notification, wherein the summarized human-readable report defines recommended next steps and/or disclaimers.”; [0099]: “When defining revised probabilistic model 100′, AI/ML process 56 may e.g., adjust weighting, adjust probabilities, adjust node counts, adjust node types, and/or adjust branch counts to define the revised version
of the probabilistic model (e.g., revised probabilistic model 100′). Once defined, the above-described process of auto-generating messages (this time using revised probabilistic model 100′) may be repeated and this newly-generated content (e.g., generated information 58″) may be compared to information 58 to determine if e.g., revised probabilistic model 100′ is a good explanation of the content.”; [0437]: “For example, in a web application that uses a large language model to generate content based on user inputs, a formatting script might”; [0437]: “For certain applications, such as code generation or creating structured data from unstructured text, the script might include rules or templates to format the output in a specific syntax or schema.”; [0439]: “Format Model Prompts: Tailor prompts to fit specific use cases or to elicit more accurate responses from the model. This might include adding specific instructions or context to the prompt that guides the model in generating the desired output”; and [0658]: “Version Control: Similar to software development practices, maintaining a version control system for the AI models is crucial. This ensures that updates, improvements, and changes to the models are systematically managed, allowing for the rollback to previous versions if needed.”)); generating, by the computing device based on the second information, the second version of the threat model (see Murphy, [0099]: “When defining revised probabilistic model 100′, AI/ML process 56 may e.g., adjust weighting, adjust probabilities, adjust node counts, adjust node types, and/or adjust branch counts to define the revised version of the probabilistic model (e.g., revised probabilistic model 100′). 
Once defined, the above-described process of auto-generating messages (this time using revised probabilistic model 100′) may be repeated and this newly-generated content (e.g., generated information 58″) may be compared to information 58 to determine if e.g., revised probabilistic model 100′ is a good explanation of the content.”; [0437]: “For example, in a web application that uses a large language model to generate content based on user inputs, a formatting script might”; [0654]: “AI models require continuous monitoring and retraining to stay effective. This includes updating models with new data reflecting the latest threat patterns and re-deploying them. The repository (e.g., model repository 318) must support these iterative cycles of retraining and updating.”; [0658]: “Version Control: Similar to software development practices, maintaining a version control system for the AI models is crucial. This ensures that updates, improvements, and changes to the models are systematically managed, allowing for the rollback to previous versions if needed.”; [0660]: “Monitoring and Updating: Cyber threats are constantly evolving; therefore, AI models require continuous monitoring and retraining to stay effective. This includes updating models with new data reflecting the latest threat patterns and re-deploying them. The repository (e.g., model repository 318) must support these iterative cycles of retraining and updating.”; and [0681]: “The plurality of AI models (e.g., plurality of AI models 320) defined within the model repository (e.g., model repository 318) may include multiple versions of the same model (e.g., ChatGPT 3.0 versus ChatGPT 3.5 versus ChatGPT 4.0) . . . 
wherein such different versions provide different levels of performance/operating cost.”); and performing, by the computing device based on one or more first threats identified by the second version of the threat model, a remedial action (see Murphy, [0159]: “The holistic platform report (e.g., holistic platform reports 850, 852) may identify one or more known conditions concerning the computing platform; and threat mitigation process 10 may effectuate 808 one or more remedial operations concerning the one or more known conditions.”; and [0170]: “Once assigned 910 a threat level, threat mitigation process 10 may execute 912 a remedial action plan (e.g., remedial action plan 252) based, at least in part, upon the assigned threat level.”).

Although Murphy teaches throughout testing the model, updating the model, and revising or retraining the model by applying the test results, Murphy does not explicitly teach that the test is a penetration test (see rejections above). Cecchetti teaches a penetration test (see Cecchetti, [038]: “cyber reasoning system (CRS) herein decompose penetration testing methodology into discrete, promptable steps. Large language models (LLMs) can be utilized, for instance, to extract context and structure from unstructured projects and code… In various embodiments, the CRS can decompose the penetration testing methodology into discrete, actionable, steps along the methodology using a combination of LLMs, tools, and/or event systems.”; and [0040]: “In various embodiments, by utilizing a system of feedback loops and chained analysis steps, false positives are minimized, for instance, to ensure a high state of integrity in the virtual penetration testing process.”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of Murphy in view of Cecchetti by implementing a penetration test.
One would be motivated to do so because Murphy teaches in paragraph [0186], “Threat mitigation process 10 may receive 1006 plurality of result sets 266 from the plurality of security-relevant subsystems. Threat mitigation process 10 may then combine 1008 plurality of result sets 266 to form unified query result 268. When combining 1008 plurality of result sets 266 to form unified query result 268, threat mitigation process 10 may homogenize 1010 plurality of result sets 266 to form unified query result 268… then provide 1012 unified query result 268 to the third-party (e.g., the user/owner/operator of computing platform 60)” and also teaches in paragraph [0660], “Monitoring and Updating: Cyber threats are constantly evolving; therefore, AI models require continuous monitoring and retraining to stay effective. This includes updating models with new data reflecting the latest threat patterns and re-deploying them. The repository (e.g., model repository 318) must support these iterative cycles of retraining and updating.”, emphasis added. As per claim 8, Murphy and Cecchetti teach a computing device comprising: one or more processors (see Murphy, [0044]: “The instruction sets and subroutines of threat mitigation process 10s, which may be stored on storage device 16 coupled to computing device 12, may be executed by one or more processors (not shown) and one or more memory architectures (not shown) included within computing device 12. 
Examples of storage device 16 may include but are not limited to: a hard disk drive; a RAID device; a random-access memory (RAM); a read-only memory (ROM); and all forms of flash memory storage devices.”); and memory storing instructions that, when executed by the one or more processors, cause the computing device to perform actions (see Murphy, [0044]; and [0789]: “Furthermore, the present disclosure may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium.”) comprising: sending, to a large language model (LLM), one or more software modules associated with a computing system; inputting, to the LLM, a first prompt requesting information for generating a threat model of the computing system, wherein the threat model is configured to identify one or more threats to the computing system; receiving, from the LLM, a first output based on the first prompt, wherein the first output comprises: first information for a first version of the threat model; a penetration test script for a penetration test to the computing system; and a request for data to improve the first version of the threat model, wherein the requested data comprises a result of the penetration test; generating, based on the first information, the first version of the threat model; receiving, based on executing the penetration test script, the result of the penetration test; inputting, to the LLM: an indication of the first output; the result of the penetration test; and a second prompt requesting second information for generating a second version of the threat model; receiving, from the LLM, the second information; generating, based on the second information, the second version of the threat model; and performing, based on one or more first threats identified by the second version of the threat model, a remedial action (see Claim 1 rejection above). 

As per claim 15, Murphy and Cecchetti teach a non-transitory computer-readable medium storing computer instructions (see Murphy, [0044]: “The instruction sets and subroutines of threat mitigation process 10s, which may be stored on storage device 16 coupled to computing device 12, may be executed by one or more processors (not shown) and one or more memory architectures (not shown) included within computing device 12. Examples of storage device 16 may include but are not limited to: a hard disk drive; a RAID device; a random-access memory (RAM); a read-only memory (ROM); and all forms of flash memory storage devices.”; [0789]: “Furthermore, the present disclosure may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium.”) that, when executed by one or more processors, cause performance of actions comprising: sending, to a large language model (LLM), one or more software modules associated with a computing system; inputting, to the LLM, a first prompt requesting information for generating a threat model of the computing system, wherein the threat model is configured to identify one or more threats to the computing system; receiving, from the LLM, a first output based on the first prompt, wherein the first output comprises: first information for a first version of the threat model; a penetration test script for the computing system; and a request for data to improve the first version of the threat model, wherein the requested data comprises a result of the penetration test executed based on the penetration test script in the first output; generating, based on the first information, the first version of the threat model; receiving, based on executing the penetration test script, a result of a penetration test; inputting, to the LLM: an indication of the first output; the result of the penetration test; and a second prompt requesting second information for generating a second version of the
threat model; receiving, from the LLM, the second information generated based on the results of the penetration test; generating, based on the second information, the second version of the threat model; and performing, based on one or more first threats identified by the second version of the threat model, a remedial action (see Claim 1 rejection above). DEPENDENT: As per claims 2, 9, and 16, which respectively depend on claims 1, 8, and 15, Murphy teaches further comprising: receiving, from a second computing device, update information associated with the computing system; inputting, to the LLM: the second version of the threat model; the update information; and a third prompt requesting third information for generating, based on the update information, a third version of the threat model; and receiving, from the LLM, the third information (see Murphy, [0198]: “When retroactively apply 1106 updated threat event information 270 to previously-generated information associated with one or more security-relevant subsystems 226, threat mitigation process 10 may: apply 1108 updated threat event information 270 to one or more previously-generated log files (not shown) associated with one or more security-relevant subsystems 226; apply 1110 updated threat event information 270 to one or more previously-generated data files (not shown) associated with one or more security-relevant subsystems 226; and apply 1112 updated threat event information 270 to one or more previously-generated application files (not shown) associated with one or more security-relevant subsystems 226.”; [0660]: “Monitoring and Updating: Cyber threats are constantly evolving; therefore, AI models require continuous monitoring and retraining to stay effective. This includes updating models with new data reflecting the latest threat patterns and re-deploying them. The repository (e.g., model repository 318) must support these iterative cycles of retraining and updating.”; and Claim 1 rejection above. 
NOTE: repeating the same steps previously taught for additional versions does not functionally or patentably change or improve upon the teachings of the prior art, since Murphy teaches in paragraph [0098]: “Accordingly and when AI/ML process 56 compares the first version of the probabilistic model (e.g., probabilistic model 100) to information 58 to determine if the first version of the probabilistic model (e.g., probabilistic model 100) is a good explanation of the content, AI/ML process 56 may generate a very large quantity of messages e.g., by auto-generating messages using the above-described probabilities, the above-described nodes & node types, and the words defined in the above-described lists (e.g., lists 128, 132, 142, 146, 156, 160, 170, 174), thus resulting in generated information 58′. Generated information 58′ may then be compared to information 58 to determine if the first version of the probabilistic model (e.g., probabilistic model 100) is a good explanation of the content. For example, if generated information 58′ exceeds a threshold level of similarity to information 58, the first version of the probabilistic model (e.g., probabilistic model 100) may be deemed a good explanation of the content. Conversely, if generated information 58′ does not exceed a threshold level of similarity to information 58, the first version of the probabilistic model (e.g., probabilistic model 100) may be deemed not a good explanation of the content.”
As per claims 3, 10, and 17, which respectively depend on claims 2, 9, and 16, Murphy further teaches wherein the update information comprises at least one of: updated software code associated with the one or more software modules; or a new vulnerability detected at the computing system (see Murphy, [0198]: “When retroactively apply 1106 updated threat event information 270 to previously-generated information associated with one or more security-relevant subsystems 226, threat mitigation process 10 may: apply 1108 updated threat event information 270 to one or more previously-generated log files (not shown) associated with one or more security-relevant subsystems 226; apply 1110 updated threat event information 270 to one or more previously-generated data files (not shown) associated with one or more security-relevant subsystems 226; and apply 1112 updated threat event information 270 to one or more previously-generated application files (not shown) associated with one or more security-relevant subsystems 226.”; [0660]: “Monitoring and Updating: Cyber threats are constantly evolving; therefore, AI models require continuous monitoring and retraining to stay effective. This includes updating models with new data reflecting the latest threat patterns and re-deploying them. The repository (e.g., model repository 318) must support these iterative cycles of retraining and updating.”; and [0751]: “Threat mitigation process 10 may update 2720 the one or more detection rules (e.g., detection rules 324) based upon current suspect activity, current security events, future suspect activity and/or future security events.”).
As per claims 5, 12, and 19, which respectively depend on claims 1, 8, and 15, Murphy further teaches wherein the performing the remedial action comprises at least one of: blocking deployment of a version of the one or more software modules; or providing replacement code (see Murphy, [0161]: “In response to detecting such a DOS attack, threat mitigation process 10 may effectuate 808 one or more remedial operations. For example and with respect to such a DOS attack, threat mitigation process 10 may effectuate 808 e.g., a remedial operation that instructs WAF (i.e., Web Application Firewall) 212 to deny all incoming traffic from the identified attacker based upon e.g., protocols, ports or the originating IP addresses.”; and [0651]: “Further, threat mitigation process 10 may automatically perform 2418 one or more remedial operations concerning the security event. For example, threat mitigation process 10 may automatically delete/quarantine any data that was received on Port A from BlackHat.RU.”).

As per claims 6, 13, and 20, which respectively depend on claims 1, 8, and 15, Murphy teaches further comprising training the LLM using training data comprising: one or more second software modules; and one or more second threats labeled for the one or more second software modules (see Murphy, [0219]: “When generating 1302 a simulation of the specific attack (e.g., a Denial of Services attack) by executing training routine 272 within the controlled test environment (e.g., virtual machine 274), threat mitigation process 10 may render 1304 the simulation of the specific attack (e.g., a Denial of Services attack) on the controlled test environment (e.g., virtual machine 274).”; [0222]: “Referring also to FIG. 26, threat mitigation process 10 may be configured to allow for the automatic generation of testing routine 272.
For example, threat mitigation process 10 may utilize 1350 artificial intelligence/machine learning to define training routine 272 for a specific attack (e.g., a Denial of Services attack) of computing platform 60.”; [0224]: “When using 1350 artificial intelligence/machine learning to define training routine 272 for a specific attack (e.g., a Denial of Services attack) of computing platform 60, threat mitigation process 10 may process 1352 security-relevant information to define training routine 272 for specific attack (e.g., a Denial of Services attack) of computing platform 60.”; [0390]: “Unlike traditional, discriminative models that classify input data into predefined categories (e.g., malicious or benign), generative models can learn to generate new data samples that are similar to the training data.”; and [0435]: “As discussed above, a generative AI model (e.g., generative AI model 302) is a type of artificial intelligence system designed to generate new, synthetic data that resembles its training data. It learns the patterns, features, and distributions of the input data and can produce novel outputs, such as images, text, or sound, that mimic the original dataset.”).

As per claims 7 and 14, which respectively depend on claims 1 and 8, Murphy teaches further comprising: receiving an indication of a second threat to the computing system, wherein the second threat is not identified by the second version of the threat model; and training the LLM by inputting, to the LLM: the indication of the second threat; and the second version of the threat model (see Murphy, [0391]: “Synthetic Data Generation: One of the challenges in training effective network threat detection systems is the scarcity of labeled data, especially for new and emerging threats. Generative AI models can help by creating large volumes of synthetic network traffic data, including both normal operations and various types of attack scenarios.
This synthetic data can help in training more robust discriminative models (such as deep learning-based classifiers) by providing a richer, more varied dataset that covers a wider range of possible threats.”; and [0714]: “As new types of attacks emerge and organizations' network environments change, playbooks must be regularly updated. This ensures that the response strategies remain effective against the latest threats and are aligned with the current network architecture and business processes.”).

Conclusion

7. For the reasons above, claims 1-3, 5-10, 12-17, and 19-20 have been rejected and remain pending.

8. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL Y WON whose telephone number is (571)272-3993. The examiner can normally be reached on Wk.1: M-F: 8-5 PST & Wk.2: M-Th: 8-7 PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nicholas R Taylor can be reached on 571-272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Michael Won/Primary Examiner, Art Unit 2443
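For orientation, the iterative workflow recited in independent claims 1, 8, and 15 above can be sketched in a few lines of Python. This is a minimal illustration only: every function and name in it (`ask_llm`, `run_pen_test`, `build_threat_model`) is a hypothetical stand-in, not anything from the application, Murphy, or Cecchetti.

```python
# Illustrative sketch of the claimed loop: prompt LLM -> first threat model +
# pen-test script -> run script -> feed result back -> second threat model ->
# remedial action. All names are hypothetical stand-ins.

def ask_llm(prompt: str, context: dict) -> dict:
    """Stand-in for a real LLM call; returns canned structured output."""
    if prompt.startswith("first"):
        # First output: model info, a pen-test script, and a request for
        # the script's results (the claimed "request for data").
        return {
            "model_info": ["unauthenticated login endpoint"],
            "pen_test_script": "probe_login_endpoint",
            "requested_data": "penetration test result",
        }
    # Second output: model info refined by the pen-test result in context.
    refined = list(context["prior_output"]["model_info"])
    refined.append(context["pen_test_result"])
    return {"model_info": refined}

def run_pen_test(script: str) -> str:
    """Stand-in for executing the LLM-supplied penetration test script."""
    return f"{script}: missing rate limiting detected"

def build_threat_model(info: list) -> dict:
    return {"threats": info}

# Round 1: prompt for a first version of the threat model.
first = ask_llm("first prompt: generate threat model", {"modules": ["auth_service"]})
model_v1 = build_threat_model(first["model_info"])

# Execute the returned script, then feed the prior output and its result
# back with a second prompt to obtain a refined second version.
result = run_pen_test(first["pen_test_script"])
second = ask_llm("second prompt: refine threat model",
                 {"prior_output": first, "pen_test_result": result})
model_v2 = build_threat_model(second["model_info"])

# Remedial action driven by threats in the second version, e.g. blocking a deploy.
remedial_action = "block deployment" if model_v2["threats"] else "none"
```

The feedback step (prior output plus pen-test result re-entering the second prompt) is the limitation the examiner maps to Cecchetti; the rest of the loop is mapped to Murphy's Claim 1 rejection.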

Prosecution Timeline

Apr 08, 2024
Application Filed
Sep 08, 2025
Non-Final Rejection — §103
Nov 24, 2025
Interview Requested
Dec 04, 2025
Examiner Interview Summary
Dec 04, 2025
Applicant Interview (Telephonic)
Dec 10, 2025
Response Filed
Jan 13, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598204
FEDERATED ABNORMAL PROCESS DETECTION FOR KUBERNETES CLUSTERS
2y 5m to grant Granted Apr 07, 2026
Patent 12596959
METHOD FOR COLLABORATIVE MACHINE LEARNING
2y 5m to grant Granted Apr 07, 2026
Patent 12592926
RISK ASSESSMENT FOR PERSONALLY IDENTIFIABLE INFORMATION ASSOCIATED WITH CONTROLLING INTERACTIONS BETWEEN COMPUTING SYSTEMS
2y 5m to grant Granted Mar 31, 2026
Patent 12587507
CONTROLLER-ENABLED DISCOVERY OF SD-WAN EDGE DEVICES
2y 5m to grant Granted Mar 24, 2026
Patent 12580929
TECHNIQUES FOR ASSESSING MALWARE CLASSIFICATION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+28.7%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 835 resolved cases by this examiner. Grant probability derived from career allow rate.
