Prosecution Insights
Last updated: April 19, 2026
Application No. 18/815,402

SECURITY COUNTERMEASURE SUPPORT SYSTEM

Non-Final OA §103
Filed
Aug 26, 2024
Examiner
MOLES, JAMES P
Art Unit
2494
Tech Center
2400 — Computer Networks
Assignee
Nomura Research Institute, Ltd.
OA Round
1 (Non-Final)
60%
Grant Probability
Moderate
1-2
OA Rounds
3y 0m
To Grant
99%
With Interview

Examiner Intelligence

Grants 60% of resolved cases
60%
Career Allow Rate
23 granted / 38 resolved
+2.5% vs TC avg
Strong +39% interview lift
Without
With
+39.3%
Interview Lift
resolved cases with interview
Typical timeline
3y 0m
Avg Prosecution
14 currently pending
Career history
52
Total Applications
across all art units

Statute-Specific Performance

§101
6.6%
-33.4% vs TC avg
§103
62.7%
+22.7% vs TC avg
§102
9.1%
-30.9% vs TC avg
§112
16.7%
-23.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 38 resolved cases

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . This office action is in response to the applicant’s filing on 08/26/2024. Claims 1-6 are pending. Claims 1, 4, and 6 are independent. Priority Acknowledgment is made of applicant's claim for foreign priority to JP 2024-009177. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. Information Disclosure Statement The information disclosure statement (IDS) submitted on 10/04/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a diagnostics unit” in claims 1-6. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The unit is described as diagnosing inputs and outputs from an LLM in relation to predetermined attacks on that same LLM, and making determinations as to whether the system is subject to an attack from those inputs/outputs (¶ 0006-0010, ¶ 0018-0019, ¶ 0021-0026, ¶ 0036-0043). If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-3 are rejected under 35 U.S.C. 103 as being unpatentable over MANTIN et al. (US PGPub No. 2025/0111051; hereinafter “MANTIN”) in view of HARUKI et al. (US PGPub No. 2023/0274005; hereinafter “HARUKI”) in view of CLEMENT et al. (US PGPub No. 2024/0386103; hereinafter “CLEMENT”). As per claim 1: MANTIN discloses a security countermeasure support system that supports diagnostics and monitoring of security of a target system using a large language model (LLM), the system comprising (a guardian controller that may be provided with a system that includes the control application, the large language model, and a controlled application (e.g., the email program in the example above). The guardian controller includes a machine learning model that monitors the output of the machine learning model. The output of the machine learning model is one or more probabilities that reflect an assessed likelihood that the output of the large language model is influenced by a prompt injection cyberattack. If the one or more probabilities satisfy a threshold, then a determination is made that the output of the large language model is influenced (i.e., poisoned) by the prompt injection cyberattack [MANTIN ¶ 0017]; The data repository (104) stores large language model data (106). The large language model data (106) is data input to or output from a large language model, such as the large language model (136) defined further below. Thus, the large language model data (106) includes a first input (108) and a first output (110). The large language model data (106) may be text data. Thus, the first input (108) is text input and the first output (110) is text output [MANTIN ¶ 0023]): a diagnostics unit that diagnoses (a guardian controller that may be provided with a system that includes the control application, the large language model, and a controlled application ( e.g., the email program in the example above). The guardian controller includes a machine learning model that monitors the output of the machine learning model. The output of the machine learning model is one or more probabilities that reflect an assessed likelihood that the output of the large language model is influenced by a prompt injection cyberattack. If the one or more probabilities satisfy a threshold, then a determination is made that the output of the large language model is influenced (i.e., poisoned) by the prompt injection cyberattack [MANTIN ¶ 0017]), on a basis of a response from the LLM (The server (128) may include a large language model (136) [MANTIN ¶ 043]) to a [predetermined] pseudo-attack on the LLM used in the target system (takes, as input, the prompt injection cyberattack and generates a first output [MANTIN ¶ 0005]; the guardian controller (138) may be programmed to monitor the first output (110) of the large language model (136). The guardian controller (138) may be programmed to determine the probability (122) that the first output (110) of the large language model (136) is poisoned by the prompt injection cyberattack (102). The guardian controller (138) may be programmed to determine whether the probability (122) satisfies the threshold (124) [MANTIN ¶ 0045]), whether or not the predetermined attack has succeeded (The guardian controller (138) may be programmed to determine the probability (122) that the first output (110) of the large language model (136) is poisoned by the prompt injection cyberattack (102). The guardian controller (138) may be programmed to determine whether the probability (122) satisfies the threshold (124) [MANTIN ¶ 0045, Examiner’s Note: satisfying the threshold is succeeding as it’s passing the threshold for qualifying as a prompt injection]), wherein the [predetermined] attack includes, in a user prompt to be input to the LLM, [information that violates a command in a system prompt input to the LLM] (takes, as input, the prompt injection cyberattack and generates a first output [MANTIN ¶ 0005]; The user query (120) is a query received from one of the user devices (146). In embodiment, the user query (120) may be received from the malicious user device (100), in which case the user query (120) may be referred to as a malicious query. The user query (120) may be received at the server (128) or may be received directly by one of the control application (132) or the large language model (136) [MANTIN ¶ 0030]), [information that violates a command in a system prompt input to the LLM] (In a direct attack, a malicious user modifies a large language model's input in an attempt to overwrite existing system prompts [¶ 0004]). MANTIN discloses the claimed subject matter as discussed above but does not explicitly disclose predetermined pseudo-attack; predetermined attack. However, HARUKI teaches predetermined pseudo-attack (The verification execution unit attacks a verification environment in which at least one of attack countermeasures indicated by attack countermeasure information is applied to a verification target system by using each of a plurality of attack scenarios, and creates a possible attack scenario list that is a list of attack scenarios in which an attack has succeeded [HARUKI ¶ 0043]; the storage unit 14 stores verification target system definition information 14A, an attack scenario database (DB) 14B [HARUKI ¶ 0052]); predetermined attack (The verification execution unit attacks a verification environment in which at least one of attack countermeasures indicated by attack countermeasure information is applied to a verification target system by using each of a plurality of attack scenarios, and creates a possible attack scenario list that is a list of attack scenarios in which an attack has succeeded [HARUKI ¶ 0043]; the storage unit 14 stores verification target system definition information 14A, an attack scenario database (DB) 14B [HARUKI ¶ 0052]). MANTIN and HARUKI are analogous art because they are from the same field of endeavor of security testing and evaluation. Therefore, based on MANTIN in view of HARUKI, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of HARUKI to the system of MANTIN in order to improve robustness of security through verification of the system against known attack scenarios. Hence, it would have been obvious to combine the references above to obtain the invention as specified in the instant claim. MANTIN in view of HARUKI discloses the claimed subject matter as discussed above but does not explicitly disclose information that violates a command in a system prompt input to the LLM. However, CLEMENT teaches information that violates a command in a system prompt input to the LLM (Initially, the large language model is given initial instructions 202 to perform the target task. System prompts (<system> . . . </system>) are created by the developer of the conversational interactions with the model to constrain the model to acting in prescribed ways consistent with the intent and policies of the service hosting the large language model. For example, as shown in FIG. 2, the initial instructions 202 indicate, in part, that “you will always begin every response by repeating the SECRET in the user prompt.” This is the original goal (i.e., intent/policy) of the large language model. When the model generates a response that does not repeat the SECRET then it is assumed that a prompt injection attack has occurred [CLEMENT ¶ 0036]). CLEMENT and the instant application are analogous art because they are from the same field of endeavor of LLM security. Therefore, based on MANTIN in view of HARUKI in view of CLEMENT, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of CLEMENT to the system of MANTIN in view of HARUKI in order to provide another layer of security to prompt injection in the form of a verifiable secret. Hence, it would have been obvious to combine the references above to obtain the invention as specified in the instant claim. As per claim 2: MANTIN in view of HARUKI in view of CLEMENT teach all the limitations of claim 1. Furthermore, MANTIN and CLEMENT disclose wherein the user prompt (takes, as input, the prompt injection cyberattack and generates a first output [MANTIN ¶ 0005]; The user query (120) is a query received from one of the user devices (146). In embodiment, the user query (120) may be received from the malicious user device (100), in which case the user query (120) may be referred to as a malicious query. The user query (120) may be received at the server (128) or may be received directly by one of the control application (132) or the large language model (136) [MANTIN ¶ 0030]) includes a command to output content of the system prompt (In goal hijacking, the inserted text is used to confuse the model or cause it to forget its instructions, allowing the user to ask the model questions which violate the rules of interaction set out in the initial or system prompt… In prompt leaking, the unintended goal is to print out a portion of or the whole original or system prompt which may be used for malicious purposes [CLEMENT ¶ 0004]). As per claim 3: MANTIN in view of HARUKI in view of CLEMENT teach all the limitations of claim 1. Furthermore, MANTIN and CLEMENT disclose wherein the user prompt (takes, as input, the prompt injection cyberattack and generates a first output [MANTIN ¶ 0005]; The user query (120) is a query received from one of the user devices (146). In embodiment, the user query (120) may be received from the malicious user device (100), in which case the user query (120) may be referred to as a malicious query. The user query (120) may be received at the server (128) or may be received directly by one of the control application (132) or the large language model (136) [MANTIN ¶ 0030]) includes a command to ignore or avoid a restriction related to the command of the system prompt (In goal hijacking, the inserted text is used to confuse the model or cause it to forget its instructions, allowing the user to ask the model questions which violate the rules of interaction set out in the initial or system prompt… In prompt leaking, the unintended goal is to print out a portion of or the whole original or system prompt which may be used for malicious purposes [CLEMENT ¶ 0004]). Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over MANTIN in view of MANTIN. As per claim 4: MANTIN discloses a security countermeasure support system that supports diagnostics and monitoring of security of a target system using a large language model (LLM), the system comprising (a guardian controller that may be provided with a system that includes the control application, the large language model, and a controlled application ( e.g., the email program in the example above). The guardian controller includes a machine learning model that monitors the output of the machine learning model. The output of the machine learning model is one or more probabilities that reflect an assessed likelihood that the output of the large language model is influenced by a prompt injection cyberattack. If the one or more probabilities satisfy a threshold, then a determination is made that the output of the large language model is influenced (i.e., poisoned) by the prompt injection cyberattack [MANTIN ¶ 0017, Fig. 1A]; The data repository (104) stores large language model data (106). The large language model data (106) is data input to or output from a large language model, such as the large language model (136) defined further below. Thus, the large language model data (106) includes a first input (108) and a first output (110). The large language model data (106) may be text data. Thus, the first input (108) is text input and the first output (110) is text output [MANTIN ¶ 0023]): a diagnostics unit that obtains an input and output (takes, as input, the prompt injection cyberattack and generates a first output [MANTIN ¶ 0005]; the guardian controller (138) may be programmed to monitor the first output (110) of the large language model (136). The guardian controller (138) may be programmed to determine the probability (122) that the first output (110) of the large language model (136) is poisoned by the prompt injection cyberattack (102). The guardian controller (138) may be programmed to determine whether the probability (122) satisfies the threshold (124) [MANTIN ¶ 0045, Fig. 1A, Examiner’s Note: server is interpreted as the diagnostic unit]) with respect to the LLM (The server (128) may include a large language model (136) [MANTIN ¶ 043]) used in the target system (a guardian controller that may be provided with a system that includes the control application, the large language model, and a controlled application ( e.g., the email program in the example above). The guardian controller includes a machine learning model that monitors the output of the machine learning model. The output of the machine learning model is one or more probabilities that reflect an assessed likelihood that the output of the large language model is influenced by a prompt injection cyberattack. If the one or more probabilities satisfy a threshold, then a determination is made that the output of the large language model is influenced (i.e., poisoned) by the prompt injection cyberattack [MANTIN ¶ 0017]), and diagnoses, on a basis of the input and output, presence or absence of an attack on the LLM (The guardian controller (138) may be programmed to determine the probability (122) that the first output (110) of the large language model (136) is poisoned by the prompt injection cyberattack (102). The guardian controller (138) may be programmed to determine whether the probability (122) satisfies the threshold (124) [MANTIN ¶ 0045, Examiner’s Note: satisfying the threshold is succeeding as it’s passing the threshold for qualifying as a prompt injection]) by one or more predetermined methods (The guardian controller (138) may be programmed to determine whether the probability (122) satisfies the threshold (124) [MANTIN ¶ 0045, Fig. 1A, Examiner’s Note: satisfying the threshold is succeeding as it’s passing the threshold for qualifying as a prompt injection]; The data repository (104) also may store a threshold (124). The threshold (124) is a value that represents a point at which the probability (122) is considered high enough that the first output (110) of the large language model (136) will be treated as having been poisoned by the prompt injection cyberattack (102). Thus, the probability (122) may be compared to the threshold (124) [MANTIN ¶ 0032, Examiner’s Note: threshold stored prior to detection in data repository]) [with reference to an attack signature accumulated as intelligence]. MANTIN discloses the claimed subject matter as discussed above but does not explicitly disclose with reference to an attack signature accumulated as intelligence. However, MANTIN teaches with reference to an attack signature accumulated as intelligence (generating, by a control application, queries to a large language model, where at least some of the queries include known prompt injection cyberattacks. The queries may be generated by a data scientist manipulating the control application, or some other input scheme, for generating the queries to the large language model. In an embodiment, the generation of the queries may be performed by retrieving historical queries submitted to the large language model, some of which were known to include the prompt injection cyberattacks. The generated queries may be considered training data, in reference to FIG. 1B [MANTIN ¶ 0091, Examiner’s Note: known prompt injections are attack signatures accumulated as historical queries]; The method of FIG. 3 may be varied. For example, the method of FIG. 3 also may include adding the trained machine learning model to a guardian application that monitors the monitored outputs of the large language model prior to the guardian application passing of the monitored outputs to a control application [¶ 0096, Examiner’s Note: incorporating the trained model into the detection based on the learned attack signatures]). MANTIN and MANTIN are analogous art because they are from the same field of endeavor of LLM security. Therefore, based on MANTIN in view of MANTIN, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of MANTIN to the system of MANTIN in order to monitor the output based on historical examples for improved detection. Hence, it would have been obvious to combine the references above to obtain the invention as specified in the instant claim. As per claim 6: MANTIN discloses a security countermeasure support system that supports diagnostics and monitoring of security of a target system using a large language model (LLM), the system comprising (a guardian controller that may be provided with a system that includes the control application, the large language model, and a controlled application ( e.g., the email program in the example above). The guardian controller includes a machine learning model that monitors the output of the machine learning model. The output of the machine learning model is one or more probabilities that reflect an assessed likelihood that the output of the large language model is influenced by a prompt injection cyberattack. If the one or more probabilities satisfy a threshold, then a determination is made that the output of the large language model is influenced (i.e., poisoned) by the prompt injection cyberattack [MANTIN ¶ 0017, Fig. 1A]; The data repository (104) stores large language model data (106). The large language model data (106) is data input to or output from a large language model, such as the large language model (136) defined further below. Thus, the large language model data (106) includes a first input (108) and a first output (110). The large language model data (106) may be text data. Thus, the first input (108) is text input and the first output (110) is text output [MANTIN ¶ 0023]): a diagnostics unit that obtains an input and output with respect to the LLM used in the target system, and diagnoses (takes, as input, the prompt injection cyberattack and generates a first output [MANTIN ¶ 0005]; the guardian controller (138) may be programmed to monitor the first output (110) of the large language model (136). The guardian controller (138) may be programmed to determine the probability (122) that the first output (110) of the large language model (136) is poisoned by the prompt injection cyberattack (102). The guardian controller (138) may be programmed to determine whether the probability (122) satisfies the threshold (124) [MANTIN ¶ 0045, Fig. 1A, Examiner’s Note: server is interpreted as the diagnostic unit]), on a basis of the input and output, presence or absence of an attack on the LLM (a guardian controller that may be provided with a system that includes the control application, the large language model, and a controlled application ( e.g., the email program in the example above). The guardian controller includes a machine learning model that monitors the output of the machine learning model. The output of the machine learning model is one or more probabilities that reflect an assessed likelihood that the output of the large language model is influenced by a prompt injection cyberattack. If the one or more probabilities satisfy a threshold, then a determination is made that the output of the large language model is influenced (i.e., poisoned) by the prompt injection cyberattack [MANTIN ¶ 0017]) by one or more predetermined methods (The guardian controller (138) may be programmed to determine whether the probability (122) satisfies the threshold (124) [MANTIN ¶ 0045, Fig. 1A, Examiner’s Note: satisfying the threshold is succeeding as it’s passing the threshold for qualifying as a prompt injection]; The data repository (104) also may store a threshold (124). The threshold (124) is a value that represents a point at which the probability (122) is considered high enough that the first output (110) of the large language model (136) will be treated as having been poisoned by the prompt injection cyberattack (102). Thus, the probability (122) may be compared to the threshold (124) [MANTIN ¶ 0032, Examiner’s Note: threshold stored prior to detection in data repository]) [with reference to an attack signature accumulated as intelligence], [wherein content of the intelligence is updated on a basis of a result of diagnostics in which the diagnostics unit diagnoses, on a basis of a response from the LLM to a predetermined pseudo- attack on the LLM used in the target system, whether or not the predetermined attack has succeeded]. MANTIN discloses the claimed subject matter as discussed above but does not explicitly disclose with reference to an attack signature accumulated as intelligence; wherein content of the intelligence is updated on a basis of a result of diagnostics in which the diagnostics unit diagnoses, on a basis of a response from the LLM to a predetermined pseudo- attack on the LLM used in the target system, whether or not the predetermined attack has succeeded. However, MANTIN teaches with reference to an attack signature accumulated as intelligence (generating, by a control application, queries to a large language model, where at least some of the queries include known prompt injection cyberattacks. The queries may be generated by a data scientist manipulating the control application, or some other input scheme, for generating the queries to the large language model. In an embodiment, the generation of the queries may be performed by retrieving historical queries submitted to the large language model, some of which were known to include the prompt injection cyberattacks. The generated queries may be considered training data, in reference to FIG. 1B [MANTIN ¶ 0091, Examiner’s Note: known prompt injections are attack signatures accumulated as historical queries]; The method of FIG. 3 may be varied. For example, the method of FIG. 3 also may include adding the trained machine learning model to a guardian application that monitors the monitored outputs of the large language model prior to the guardian application passing of the monitored outputs to a control application [¶ 0096, Examiner’s Note: incorporating the trained model into the detection based on the learned attack signatures]); wherein content of the intelligence is updated on a basis of a result of diagnostics in which the diagnostics unit diagnoses, on a basis of a response from the LLM to a predetermined pseudo- attack on the LLM used in the target system, whether or not the predetermined attack has succeeded (generating, by a control application, queries to a large language model, where at least some of the queries include known prompt injection cyberattacks. The queries may be generated by a data scientist manipulating the control application, or some other input scheme, for generating the queries to the large language model. In an embodiment, the generation of the queries may be performed by retrieving historical queries submitted to the large language model, some of which were known to include the prompt injection cyberattacks. The generated queries may be considered training data, in reference to FIG. 1B [MANTIN ¶ 0091, Examiner’s Note: known prompt injections are attack signatures accumulated as historical queries]; The method of FIG. 3 may be varied. For example, the method of FIG. 3 also may include adding the trained machine learning model to a guardian application that monitors the monitored outputs of the large language model prior to the guardian application passing of the monitored outputs to a control application [¶ 0096, Examiner’s Note: incorporating the trained model into the detection based on the learned attack signatures]). MANTIN and MANTIN are analogous art because they are from the same field of endeavor of LLM security. Therefore, based on MANTIN in view of MANTIN, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of MANTIN to the system of MANTIN in order to monitor the output based on historical examples for improved detection. Hence, it would have been obvious to combine the references above to obtain the invention as specified in the instant claim. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over MANTIN in view of MANTIN in view of CLEMENT. As per claim 5: MANTIN in view of MANTIN teach all the limitations of claim 4. Furthermore, MANTIN discloses wherein the predetermined method includes any of (The guardian controller (138) may be programmed to determine whether the probability (122) satisfies the threshold (124) [MANTIN ¶ 0045, Fig. 1A, Examiner’s Note: satisfying the threshold is succeeding as it’s passing the threshold for qualifying as a prompt injection]; The data repository (104) also may store a threshold (124). The threshold (124) is a value that represents a point at which the probability (122) is considered high enough that the first output (110) of the large language model (136) will be treated as having been poisoned by the prompt injection cyberattack (102). Thus, the probability (122) may be compared to the threshold (124) [MANTIN ¶ 0032, Examiner’s Note: threshold stored prior to detection in data repository]) scoring based on an empirical rule accumulated in the intelligence based on the input and output, scoring in which the LLM is inquired about whether or not the input and output correspond to the attack, scoring based on similarity between vectorized text of the input and output and vectorized text of the attack signature accumulated in the intelligence, or [determination on whether or not a predetermined canary token specified in a system prompt is included in an output from the LLM]. MANTIN in view of MANTIN discloses the claimed subject matter as discussed above but does not explicitly disclose determination on whether or not a predetermined canary token specified in a system prompt is included in an output from the LLM. However, CLEMENT teaches determination on whether or not a predetermined canary token specified in a system prompt is included in an output from the LLM (A security agent is used sign a user prompt destined to a large language model from a user with a secret in order to prevent a prompt injection attack. The security agent resides on the server hosting the large language model and is isolated from the user application and user device that generates the large language model prompt. The secret is tailored for a specific user identifier and session identifier associated with the user prompt. The large language model is instructed to repeat the secret in each response. The security agent retrieves the response from the large language model and checks for the secret. When the secret is not part of the response, an error message is forwarded to the user application instead of the response [CLEMENT ¶ 0006]). MANTIN in view of MANTIN and CLEMENT are analogous art because they are from the same field of endeavor of LLM security. Therefore, based on MANTIN in view of MANTIN in view of CLEMENT, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of CLEMENT to the system of MANTIN in view of MANTIN in order to provide another layer of LLM prompt protection to help in preventing prompt injection for improved output security. Hence, it would have been obvious to combine the references above to obtain the invention as specified in the instant claim. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES P MOLES whose telephone number is (703)756-1043. The examiner can normally be reached M-F 8:00am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jung Kim can be reached at (571) 272-3804. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JAMES P MOLES/Examiner, Art Unit 2494 /JUNG W KIM/Supervisory Patent Examiner, Art Unit 2494
Read full office action

Prosecution Timeline

Aug 26, 2024
Application Filed
Mar 04, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603896
Agent prevention augmentation based on organizational learning
2y 5m to grant Granted Apr 14, 2026
Patent 12596805
A CYBER-ATTACK DETECTION AND PREVENTION SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12579283
FACILITATING SECURITY VERIFICATION OF SYSTEM DESIGNS USING ADVERSARIAL MACHINE LEARNING
2y 5m to grant Granted Mar 17, 2026
Patent 12530137
Effectively In-Place Encryption System For Encrypting System/Root/Operating System (OS) Partitions And User Data Partitions
2y 5m to grant Granted Jan 20, 2026
Patent 12487759
SECURE MONITORS FOR MEMORY PAGE PROTECTION
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
99%
With Interview (+39.3%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.

Sign in with your work email

Enter your email to receive a magic link. No password needed.

Personal email addresses (Gmail, Yahoo, etc.) are not accepted.

Free tier: 3 strategy analyses per month