DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 4/30/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Status of Claims
Claims 2, 5, 12 and 15 are cancelled and claims 21-24 are newly added, leaving claims 1, 3-4, 6-11, 13-14 and 16-24 pending in this application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-4, 6-11, 13-14 and 16-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite identifying two subjects in a prompt, determining whether they are mutually opposed, and then not submitting the prompt if they are, which is a mental process that could be performed by a person with pencil and paper. This judicial exception is not integrated into a practical application, and the claims do not include additional elements sufficient to amount to significantly more than the judicial exception, because the only additional elements are generic computing components performing generic computing tasks.
As per claim 1, the claim recites the following limitations:
A) identifying, by a device, a first subject indicated by a prompt to a large language model;
B) identifying, by the device, a second subject indicated by the prompt to the large language model;
C) determining, by the device, whether the first subject and the second subject are mutually opposed subjects; and
D) preventing, by the device, the large language model from processing the prompt when the first subject and the second subject are mutually opposed subjects.
L) preventing the large language model from processing the prompt includes filtering a portion of the prompt from being provided to the large language model for processing.
R) the portion of the prompt prevented from being provided to the large language model comprises the second subject that is mutually opposed to the first subject.
I) the second subject includes one or more of a task to be performed by the large language model, a reference to sensitive data, a portion of a constraint in a performance of the task, or an intended output of the performance of the task.
Limitations A-D, I, L & R are drawn to a mental process.
The subject matter eligibility analysis is as follows:
Step 1: Is the claim to a process, machine, manufacture or composition of matter? YES
Step 2A, prong 1: Does the claim recite an abstract idea, law of nature or natural phenomenon? YES
Step 2A, prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO
Therefore, the claim is not subject matter eligible.
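For illustration only, the recited steps A-D, L and R reduce to generic comparison-and-filtering logic of the kind sketched below. The function name and the table of opposed subject pairs are hypothetical, supplied solely for this sketch; this is not the Applicant's implementation.

```python
# Illustrative sketch only: the recited identify/determine/filter steps,
# expressed as a generic lookup and word-removal procedure. The names and
# the table of opposed pairs are hypothetical, not drawn from the claims.

def screen_prompt(prompt: str, opposed_pairs: set) -> str:
    """Identify two subjects in a prompt; if they appear in a table of
    mutually opposed pairs, filter the second subject before the prompt
    would be passed on for processing."""
    words = prompt.lower().split()
    for first in words:                      # "identify a first subject"
        for second in words:                 # "identify a second subject"
            # "determine whether ... mutually opposed": a table lookup
            if first != second and frozenset((first, second)) in opposed_pairs:
                # "prevent ... processing": filter the second subject out,
                # leaving the remainder of the prompt intact
                return " ".join(w for w in words if w != second)
    return " ".join(words)                   # no opposition: pass through
```

Each step is a simple lookup or deletion that a person could perform with pencil and paper, consistent with the mental-process characterization above.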
As per claim 11, the claim recites the following limitations:
E) one or more network interfaces to communicate with a network;
F) a processor coupled to the one or more network interfaces and configured to execute one or more processes; and
G) a memory configured to store a process that is executable by the processor, the process, when executed, configured to:
A) identify a first subject indicated by a prompt to a large language model;
B) identify a second subject indicated by the prompt to the large language model;
C) determine whether the first subject and the second subject are mutually opposed subjects; and
D) prevent the large language model from processing the prompt when the first subject and the second subject are mutually opposed subjects.
L) preventing the large language model from processing the prompt includes filtering a portion of the prompt from being provided to the large language model for processing.
R) the portion of the prompt prevented from being provided to the large language model comprises the second subject that is mutually opposed to the first subject.
I) the second subject includes one or more of a task to be performed by the large language model, a reference to sensitive data, a portion of a constraint in a performance of the task, or an intended output of the performance of the task.
Limitations A-D, I, L & R are drawn to a mental process and E-G are directed to generic computing components performing generic computing functions.
The subject matter eligibility analysis is as follows:
Step 1: Is the claim to a process, machine, manufacture or composition of matter? YES
Step 2A, prong 1: Does the claim recite an abstract idea, law of nature or natural phenomenon? YES
Step 2A, prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO
Therefore, the claim is not subject matter eligible.
As per claim 20, the claim recites the following limitations:
H) A tangible, non-transitory, computer-readable medium having computer-executable instructions stored thereon that, when executed by a processor on a computer, cause the computer to perform a method comprising:
A) identifying a first subject indicated by a prompt to a large language model;
B) identifying a second subject indicated by the prompt to the large language model;
C) determining whether the first subject and the second subject are mutually opposed subjects;
D) preventing the large language model from processing the prompt when the first subject and the second subject are mutually opposed subjects.
L) preventing the large language model from processing the prompt includes filtering a portion of the prompt from being provided to the large language model for processing.
R) the portion of the prompt prevented from being provided to the large language model comprises the second subject that is mutually opposed to the first subject.
I) the second subject includes one or more of a task to be performed by the large language model, a reference to sensitive data, a portion of a constraint in a performance of the task, or an intended output of the performance of the task.
Limitations A-D, I, L & R are drawn to a mental process and limitation H is directed to generic computing components performing generic computing functions.
The subject matter eligibility analysis is as follows:
Step 1: Is the claim to a process, machine, manufacture or composition of matter? YES
Step 2A, prong 1: Does the claim recite an abstract idea, law of nature or natural phenomenon? YES
Step 2A, prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO
Therefore, the claim is not subject matter eligible.
As per claims 3, 13 and 21, the claims recite the following additional limitation:
J) identifying mutually opposed predicates associated with the first subject and the second subject within the prompt.
Limitation J is directed to a mental process; therefore, the subject matter eligibility analysis remains unchanged.
As per claims 4, 14 and 22, the claims recite the following additional limitation:
K) the large language model is prevented from processing the prompt by blocking the prompt from being provided to the large language model for processing.
Limitation K is directed to a mental process; therefore, the subject matter eligibility analysis remains unchanged.
As per claims 6, 16 and 23, the claims recite the following additional limitation:
M) sending the prompt back to a user to indicate which mutually opposed portion of the prompt is the portion of the prompt to be filtered.
Limitation M is directed to a mental process; therefore, the subject matter eligibility analysis remains unchanged.
As per claims 7, 17 and 24, the claims recite the following additional limitation:
N) preventing the large language model from processing the prompt includes flagging the prompt for reengineering prior to being provided to the large language model for processing.
Limitation N is directed to a mental process; therefore, the subject matter eligibility analysis remains unchanged.
As per claims 8 and 18, the claims recite the following additional limitation:
O) control over the prompt is retained by an intermediate layer prior to sending the prompt to an external entity.
Limitation O is directed to a mental process; therefore, the subject matter eligibility analysis remains unchanged.
As per claim 9, the claim recites the following additional limitation:
P) parsing the prompt to generate a prompt characterization, wherein the prompt characterization includes one or more of a task requested in the prompt, sensitive data entailed in completing the task, a constraint applicable to completing the task, or a targeted output upon completion of the task.
Limitation P is directed to a mental process; therefore, the subject matter eligibility analysis remains unchanged.
As per claim 10, the claim recites the following additional limitation:
Q) the first subject and the second subject are identified based on analysis of the prompt characterization.
Limitation Q is directed to a mental process; therefore, the subject matter eligibility analysis remains unchanged.
As per claim 19, the claim recites the following additional limitations:
P) parse the prompt to generate a prompt characterization, wherein the prompt characterization includes one or more of a task requested in the prompt, sensitive data entailed in completing the task, a constraint applicable to completing the task, or a targeted output upon completion of the task; and
Q) identify the first subject and the second subject based on the prompt characterization.
Limitations P & Q are directed to a mental process; therefore, the subject matter eligibility analysis remains unchanged.
Response to Arguments
Applicant's arguments filed 12/22/2025 have been fully considered but they are not persuasive. The Applicant first argues “The claims also require determining whether these subjects are mutually opposed within the context of prompt injection attacks, a cybersecurity threat unique to AI systems where attackers attempt to override system prompts with malicious instructions embedded in user input. Additionally, the claims require filtering the second subject that is mutually opposed to the first subject from being provided to the large language model. This is a selective technological operation that removes the malicious portion while allowing legitimate prompt content to proceed.” The Examiner notes that there is nothing within the claim language that restricts the invention to the context of prompt injection attacks. Filtering is the removal of selected content, which is not inherently technological.
The Applicant then argues that the claims do not recite a mental process. He further argues “A human cannot realistically perform the claimed method because the method operates on prompts in real-time as they are transmitted to large language models through computational systems. The filtering operation requires selectively extracting and removing the mutually opposed second subject while preserving other prompt content, a precise data manipulation operation performed automatically within the system architecture. The identification must distinguish between legitimate prompt elements and malicious subjects containing conflicting tasks, sensitive data references, constraint modifications, or output redirections.” This is not persuasive because there is nothing restricting the claims to real-time operations within a system architecture. Identifying conflicting tasks or sensitive data references is easily done by a human, and the claims provide no additional detail on how a computer would do so differently than a human being.
The Applicant then argues “The claims are not directed to the abstract concept of "identifying conflicting information and making a decision." Rather, they are directed to a specific technological solution of selectively filtering malicious content from prompts to large language models to mitigate direct prompt injection attacks.” The Examiner does not find this persuasive, as the claims as written could be used for many purposes and are not tied to a specific technological improvement.
The Applicant further argues “Even if the claims recite a judicial exception (which Applicant does not concede), the claims integrate any such exception into a practical application under multiple grounds. For instance, the amended claims apply the identification and determination steps to solve a specific technological problem: direct prompt injection attacks against large language models. As explained in the specification, enterprises presently lack techniques to semantically understand the prompts and detect cases where an attacker may have overwritten a prompt. The claimed method addresses this gap through a concrete technical solution that includes selectively filtering the mutually opposed second subject while allowing the remainder of the prompt to proceed. This technique demonstrates the targeted removal of malicious content rather than merely blocking entire prompts. Additionally, the solution includes operating within the technical architecture of LLM-based applications, where prompts pass through intermediate layers before reaching external models. The solution includes analyzing and filtering specific LLM-related elements, including tasks to be performed by the LLM, references to sensitive data, portions of constraints in task performance, or intended outputs, which are technical components that are intrinsic to how LLMs operate.” The Examiner does not find this persuasive: eligibility under Step 2A, prong two requires additional elements that integrate the judicial exception into a practical application, and Step 2B requires additional elements that amount to significantly more than the judicial exception; the current claims possess neither.
The Applicant then argues “The amendment now makes clear that what gets filtered is the specific malicious subject (the second subject that is mutually opposed), not merely generic information. This selective filtering based on detecting mutual opposition between subjects within LLM prompts is a targeted solution for a specific cybersecurity vulnerability in AI systems, not a general-purpose mental process.” Filtering data is by its very nature abstract and performable by a human.
The Applicant further argues “The claims require implementation "by a device" that performs specialized functions within an LLM application architecture. The specification describes this device as implementing Prompt Processing Units (PPUs), specialized processing elements that parse and characterize prompts systematically to identify multiple subjects and their relationships, operate at an intermediate layer (such as an API gateway, inference system, or DLP tool) to retain control over prompts before they reach external entities, perform selective filtering that removes specific portions (the mutually opposed second subject) while preserving other prompt content, and interface with large language models to manage data flow and implement security controls. The device performing the claimed method is not a generic computer. It is specifically configured to intercept prompts, identify mutually opposed subjects containing LLM-specific elements (tasks, sensitive data, constraints, outputs), and selectively filter those subjects from the data stream.” The Courts have specifically held that reciting “by a device” or “by a computer” is not sufficient to render abstract claims subject matter eligible.
The Applicant then argues “The amended claims now explicitly recite that "the portion of the prompt prevented from being provided to the large language model comprises the second subject that is mutually opposed to the first subject." This limitation demonstrates transformation of the prompt data structure. The prompt begins as a complete data structure containing both the first subject and the second subject. The method identifies the mutually opposed second subject, transforms the prompt by filtering out the second subject, and the transformed prompt (without the malicious second subject) is what may proceed to the LLM. This is not merely gathering and presenting information. The claims require actively modifying the prompt data by removing malicious content, thereby transforming the input into a sanitized output.” Transformation of data is not sufficient to render a judicial exception subject matter eligible.
Next, the Applicant argues “The claimed method addresses a novel problem that did not exist before the widespread adoption of generative AI and large language models. The specific combination of: identifying multiple subjects within LLM prompts, determining mutual opposition between those subjects to detect injection attacks; and selectively filtering the mutually opposed second subject (rather than blocking entire prompts) is not a well-understood, routine, or conventional approach in the field. The selective filtering of mutually opposed subjects within prompts represents a specific technical solution to an emerging cybersecurity challenge that existing techniques fail to address.” The Examiner notes that novelty does not make the claims subject matter eligible.
The Applicant further argues “The amended claim 1 now explicitly recites filtering a portion of the prompt from being provided to the large language model for processing, wherein the portion comprises the second subject that is mutually opposed to the first subject. These limitations transform the claim from merely identifying a problem to implementing a concrete, specific technical solution and demonstrate a practical application of computer technology to solve a technological problem (securing LLM-based systems against prompt injection attacks through selective content filtering).
The claimed invention is analogous to security filtering systems that are routinely found patent-eligible, such as network firewall systems, email security filters, and web application firewalls that inspect HTTP requests, detect injection attacks, and filter malicious parameters while allowing legitimate requests to proceed. The present invention performs the same type of security function but addresses a novel threat vector: prompt injection attacks on LLMs. Just as a firewall selectively filters malicious packets based on content analysis, the claimed method selectively filters mutually opposed subjects from LLM prompts. This is technological problem-solving implemented through computer systems.” The Examiner notes that while the inventions may be analogous, subject matter eligibility is extremely claim language sensitive. Just because some firewall systems are eligible does not render these claims, in their current state, eligible.
Examiner Notes
The Examiner cites particular columns and line numbers in the references as applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or as disclosed by the Examiner.
Communications via Internet e-mail are at the discretion of the applicant and require written authorization. Should the Applicant wish to communicate via e-mail, including the following paragraph in their response will allow the Examiner to do so:
“Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with me concerning any subject matter of this application by electronic mail. I understand that a copy of these communications will be made of record in the application file.”
Should e-mail communication be desired, the Examiner can be reached at Edwin.Leland@USPTO.gov.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWIN S LELAND III whose telephone number is (571)270-5678. The examiner can normally be reached 8:00 - 5:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWIN S LELAND III/Primary Examiner, Art Unit 2654