Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The following is a final Office action in response to the communication received December 30, 2025. Claims 1, 8, and 15 have been amended. Therefore, claims 1-20 are pending and addressed below.
Response to Arguments
Applicant's arguments filed December 30, 2025 have been fully considered but they are not persuasive for the following reasons:
Applicant's arguments with respect to the rejections of amended claims 1, 8, and 15 under 35 U.S.C. 102(a)(1) have been fully considered but are moot because additional citations from the same prior art (Boyer et al.; US PG-PUB No. 20240045990 A1) have been added to support the examiner's response (see rejection details below).
Therefore, claims 1, 8, and 15 are rejected under 35 U.S.C. 102(a)(1). Because claims 2-7 depend directly or indirectly on claim 1, claims 9-14 depend directly or indirectly on claim 8, and claims 16-20 depend directly or indirectly on claim 15, applicant's arguments with respect to the rejections of claims 2-7, 9-14, and 16-20 are likewise moot.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Boyer et al (US PG-PUB No. 20240045990 A1).
Regarding claims 1, 8, and 15, Boyer et al., hereinafter Boyer, teaches a system, comprising: at least one computing device comprising at least one processor and at least one memory; and machine-readable instructions stored in the at least one memory that, when executed by the at least one processor, cause the at least one computing device to at least: generate at least one user interface comprising instructions to provide audio data that describes, for a particular application, at least one of: threats, weaknesses, security controls, or any combination thereof; generate the at least one user interface comprising instructions to provide image data that describes, for the particular application, at least one of: the threats, the weaknesses, the security controls, or any combination thereof (Paragraph [0082]: “The interactive cyber security user interface 710 (e.g., a form of a chatbot) receives supplied input from a user, whether it be via written or voice input (this AI-based cyber security system generates an interactive cyber security user interface and monitors a chat with the user, receiving text/image data or audio data from the chat; the audio, text/image describes cyber security, such as threats, weaknesses, or security controls) supplied from the user from a number of different input sources, such as the UI of the local cyber security appliance 100 (AI-based cyber security system), via chat services, such as Slack, Teams, etc., via the mobile app on a user's smart device, etc.”);
Boyer further teaches a system and a method, comprising: training a threat modeling multimodal Large Language Model (LLM) to use audio and images to generate application security data comprising at least one of: threat data, weakness data, security control data, a security risk summarization, an application threat model, or any combination thereof, wherein the threat modeling multimodal LLM is trained using an audio input training set, an image input training set, and an application security data training set (Paragraph [0028]: “the cyber threat analyst module 120 (threat modeling) can cooperate with the internal data sources (internal weakness data) as well as external data sources (external threat data) to collect data in its investigation. More specifically, the cyber threat analyst module 120 can cooperate with the other modules and the AI model(s) 160 in the cyber security appliance 100 (threat modeling multimodal LLM) to conduct a long-term investigation and/or a more in-depth investigation of potential and emerging cyber threats (security risk summarization) directed to one or more domains in an enterprise's system.”; Paragraph [0074]: “the LLMs of the LLM module can be trained (training a threat modeling multimodal Large Language Model (LLM)) on automatically generated data sets (generate application security data). The LLM responsible for analyzing the natural language input (audio and image inputs using chatbot) may be trained on data sets that model human inputs (LLM is trained using an audio input training set, an image input training set, and an application security data training set), created by using formal grammar rules to stochastically generate training data pairs of equivalent human and machine syntax strings”);
Boyer further teaches the system and method comprising: generate application architecture data based at least in part on a review of source code for the particular application, the application architecture data describing a plurality of application architecture components and a plurality of connections between the plurality of application architecture components; generate multimodal LLM prompting data based at least in part on the application architecture data, the audio data, and the image data (Paragraph [0182]: “the cyber security restoration engine to restore the protected system can use historic source code base information and modelling from the AI models (The Threat Modeling Multimodal LLM generates application architecture data based at least in part on a review of source code for the particular application) in the detection engine for development to revert commits and code changes that potentially introduce bad or compromised code. The cyber security restoration engine to restore the protected system can also use historic records of a source code database information to find out when during the development of a product that the cyber-attack occurred on the source code in order to restore the source code back to the state before the compromise occurred, as well as use historic code base analysis and understanding to identify supply chain and products vulnerable to bad code/compromised code and sending an update package/at least a notice to revert those products and further prevent the source code vulnerabilities from trickling down the supply chains from the vendor to the end user (the application architecture data describing a plurality of application architecture components and a plurality of connections between the plurality of application architecture components). Once file data of a cyber threat is identified, then that file data and its characteristics are captured in an inoculation package (identified application architecture data) and then cascade that file information to each cyber security appliance in the fleet of cyber security appliances, and quarantine the identical and very similar files in order to remove them from all of the environments before anything can spread even more than it has via immediate remediation and also using the system's own inoculation data.”).
Boyer further teaches the system and method comprising: inputting, into the threat modeling multimodal Large Language Model (LLM), the multimodal LLM prompting data comprising: audio data, image data, and LLM instructions for the threat modeling multimodal LLM to generate application security data using the audio data and the image data; and receiving, from the threat modeling multimodal LLM, the application security data comprising the at least one of: the threat data, the weakness data, the security control data, the security risk summarization, the application threat model, or any combination thereof (Paragraph [0067]: “Once the LLM module has generated one or more queries (generate application security data, such as threat data, in queries to request threat information from the components of the cyber security system identified by the LLM module) based on the received natural language input (input into the LLM module, the chatbot prompting data, comprising audio data and image data), the interactive cyber security user interface 710 communicates the queries to the relevant components of the cyber security system. Upon receiving a query from the interactive cyber security user interface 710, each of the relevant components of the cyber security system prepares a response to the query (security risk summarization), which it then returns to the interactive cyber security user interface 710.”; Paragraph [0068]: "the interactive cyber security user interface 710 (comprising LLM) may process the responses received from each of the queried components of the cyber security system using the LLM module (threat modeling multimodal LLM) to collate and/or summarize the information received in the responses (receive from the LLM, application security data comprising security risk summarization).").
Regarding claims 2, 9, and 16, Boyer teaches all of the features with respect to claims 1, 8, and 15, as outlined above.
Boyer further teaches wherein the LLM instructions comprise natural language instructions for the threat modeling multimodal LLM (Paragraph [0006]: “an interactive cyber security user interface (for threat modeling) is configured with software code and electronic hardware to comprise a large language model, LLM, module configured to receive a natural language input from a user (LLM instructions comprise natural language instructions)”).
Regarding claims 3, 10, and 17, Boyer teaches all of the features with respect to claims 1, 8, and 15, as outlined above.
Boyer further teaches wherein the LLM instructions comprise a first LLM instruction subset for the audio data and a second LLM instruction subset for the image data (Paragraph [0082]: “The interactive cyber security user interface 710 (comprises LLM) can convert the speech to text and/or text from the user into supplied text that is fed into a natural language processing module to generate both a specific question being asked as well as a dialog manager to keep track of the background contextual information associated with the question being asked. The speech to text module, natural language processing module that receives the text and audio file (audio data set) to derive what the user said and what the user intended, and the dialog manager module that manages and keeps track of a state of a dialog with a user (monitor and track dialog as an image data set) may all be part of a user interaction module.”).
Regarding claims 4, 11, and 18, Boyer teaches all of the features with respect to claims 1, 8, and 15, as outlined above.
Boyer further teaches wherein the application threat model comprises a data flow diagram that visually shows the threat data, the weakness data, and the security control data in a diagrammatic form (Paragraph [0158]: “FIG. 9 to FIG. 14 illustrate diagrams (data flow diagrams that visually show the threat data, the weakness data, and the security control data in a diagrammatic form via audio and/or dialog) of an embodiment of an intelligent orchestration component facilitating an example of an Artificial Intelligence augmented and adaptive interactive response loop between the four Artificial Intelligence-based engines.”).
Regarding claims 5, 12, and 19, Boyer teaches all of the features with respect to claims 4, 11, and 18, as outlined above.
Boyer further teaches wherein the data flow diagram comprises an interactive data flow diagram viewed using a threat modeling software (Paragraph [0040]: “The cyber threat detection engine can also have an anomaly alert system in a formatting module configured to report out anomalous incidents and events as well as the cyber threat detected to a display screen viewable by a human cyber-security professional (the interactive data flow diagram viewed using a threat modeling software). Each Artificial Intelligence-based engine has a rapid messaging system to communicate with a human cyber-security team to keep the human cyber-security team informed on actions autonomously taken and actions needing human approval to be taken.”).
Regarding claims 6, 13, and 20, Boyer teaches all of the features with respect to claims 4, 11, and 18, as outlined above.
Boyer further teaches wherein the data flow diagram comprises an image (Paragraph [0085]: “The interactive cyber security user interface 710 will engage in a dialog (dialog is viewed as text in image) to obtain additional contextual information from the user until a confidence level is achieved or exceeded on what specifically the user is querying on, and then maintaining that context thread for subsequent user queries regarding that same thread/topic so that the system can set the user's question itself within a certain context in order to return a more efficient and relevant answer to the user's question itself.”).
Regarding claims 7 and 14, Boyer teaches all of the features with respect to claims 1 and 8, as outlined above.
Boyer further teaches wherein the machine-readable instructions, when executed by the at least one processor, further cause the at least one computing device to at least: receive, from the threat modeling multimodal LLM, the application security data comprising: the threat data, the weakness data, and the security control data; input, into an LLM, the threat data, the weakness data, the security control data, and instructions for the LLM to generate the security risk summarization corresponding to a predetermined length of text that describes the threat data, the weakness data, and the security control data for the particular application (Paragraph [0073]: “In one example, the LLM module may comprise two (medium sized) LLMs (multimodal LLM). A first LLM is configured to receive and analyze the natural language input (the input of the first LLM is audio and/or image data from the UI) to determine what information is required to respond to the input and which components of the cyber security system need to be queried to obtain this information (the output of the first LLM is the application security data comprising the threat data, weakness data, and the security control data). A second LLM is configured to convert the output of the first LLM into the required formats for communication with the desired modules of the cyber security system via their APIs (the input of the second LLM is the output of the first LLM: the application security data comprising the threat data, weakness data, and the security control data), as determined by the first LLM. Each of these LLMs will be trained (in particular, fine-tuned) on labelled training data specific to their context and role.”; Paragraph [0068]: "the interactive cyber security user interface 710 may process the responses received from each of the queried components (the threat data, the weakness data, the security control data, and instructions) of the cyber security system using the LLM module to collate and/or summarize the information received in the responses (the second LLM generates the security risk summarization corresponding to a predetermined length of text that describes the threat data, the weakness data, and the security control data for the particular application).").
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see the PTO-892 form for details).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASMINE DAY whose telephone number is (571)272-0204. The examiner can normally be reached Monday - Friday 9:00 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Philip Chea can be reached at 571-272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.D./Examiner, Art Unit 2499 /PHILIP J CHEA/Supervisory Patent Examiner, Art Unit 2499