Prosecution Insights
Last updated: April 19, 2026
Application No. 18/806,572

Generative Artificial Intelligence Model Output Obfuscation

Non-Final OA: §101, §102, §103, §112, Double Patenting
Filed: Aug 15, 2024
Examiner: HARRIS, CHRISTOPHER C
Art Unit: 2432
Tech Center: 2400 (Computer Networks)
Assignee: HiddenLayer, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (275 granted / 362 resolved; +18.0% vs TC avg; above average)
Interview Lift: +26.2% for resolved cases with interview (strong)
Avg Prosecution: 2y 10m (typical timeline); 21 currently pending
Total Applications: 383 across all art units (career history)
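As a quick consistency check on the career numbers above (275 granted out of 362 resolved), the reported 76% allow rate can be reproduced directly:

```python
# Reproduce the reported career allow rate from the granted/resolved counts.
granted, resolved = 275, 362
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 76.0%
```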

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 38.4% (-1.6% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 24.4% (-15.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 362 resolved cases.

Office Action

§101, §102, §103, §112, Double Patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

DETAILED ACTION

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 01/10/2025, 04/29/2025, 05/30/2025, and 01/27/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-22 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-22 of U.S. Patent No. 12,111,926 B1. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application is a broader variation of the patented claims, as indicated in the table below.

Instant Application 18/806,572 vs. U.S. Patent No. 12,111,926 B1

Claim 1 (instant application):
A computer-implemented method comprising: receiving, by a proxy executing in an environment executing a first generative artificial intelligence (GenAI) model, a prompt originating from a requestor and for ingestion by the first GenAI model; determining that the prompt comprises malicious content or elicits undesired model behavior; inputting at least a portion of the prompt into the first GenAI model to obtain a first output; inputting at least a portion of the first output along with obfuscation instructions into a second GenAI model to obtain a second output; and returning, by way of the proxy, data including or based on the second output to the requester.

Claim 1 ('926 patent): A computer-implemented method comprising: receiving, from each of a plurality of requesters, data characterizing a corresponding prompt for ingestion by a first generative artificial intelligence (GenAI) model; determining, for each prompt, that the prompt comprises malicious content or elicits undesired model behavior; and initiating, in response to the determining, at least one remediation action; wherein, for a first subset of the prompts, the at least one remediation action comprises: inputting at least a portion of the received data into the first GenAI model to obtain a first output; inputting at least a portion of the first output along with obfuscation instructions into a second, different GenAI model to obtain a second output; and returning data characterizing the second output to the requester; wherein, for a second subset of the prompts, the at least one remediation action comprises: blocking an Internet Protocol (IP) and/or medium access control address (MAC) address of a corresponding requester from accessing the first GenAI model.

Claim 11 (instant application):
A computer-implemented method comprising: receiving, by a proxy executing in an environment executing a first generative artificial intelligence (GenAI) model, data characterizing a prompt initiated by a requestor and for ingestion by the first GenAI model; inputting at least a portion of the received data into the first GenAI model to obtain a first output; determining that the first output comprises or elicits malicious or undesired content; inputting at least a portion of the first output along with obfuscation instructions to a second GenAI model to obtain a second output; and returning data characterizing the second output to the requester.

Claim 9 ('926 patent): A computer-implemented method comprising: receiving, from each of a plurality of requesters, data characterizing a corresponding prompt for ingestion by a first generative artificial intelligence (GenAI) model; inputting, for each prompt at least a portion of the received data into the first GenAI model to obtain a first output; determining, for each first output, that the first output comprises or elicits malicious or undesired content; and initiating, in response to the determining, at least one remediation action; wherein, for a first subset of the prompts, the at least one remediation action comprises: inputting at least a portion of the first output along with obfuscation instructions to a second, different GenAI model to obtain a second output; and returning data characterizing the second output to the requester; wherein, for a second subset of the prompts, the at least one remediation action comprises: blocking an Internet Protocol (IP) and/or medium access control address (MAC) address of a corresponding requester from accessing the first GenAI model.

Claim 21 (instant application):
A computer-implemented method comprising: receiving, from a requester by way of a proxy executing in a model environment, a prompt for ingestion by a first generative artificial intelligence (GenAI) model executing in the model environment; determining that the prompt comprises or elicits malicious content or undesired model behavior; inputting the prompt into the first GenAI model to obtain a first output; inputting the prompt, the first output, and obfuscation instructions into a second GenAI model to obtain a second output, the second GenAI model executing in an environment different than the model environment; and returning data characterizing the second output to the requester.

Claim 17 ('926 patent): A computer-implemented method comprising: receiving, from each of a plurality of requesters, a corresponding prompt for ingestion by a first generative artificial intelligence (GenAI) model; determining, for each prompt, that the prompt comprises or elicits malicious content or undesired model behavior; initiating, in response to the determining, at least one remediation action; wherein, for a first subset of the prompts, the at least one remediation action comprises: inputting the prompt into the first GenAI model to obtain a first output; inputting the prompt, the first output, and obfuscation instructions into a second, different GenAI model to obtain a second output; and returning data characterizing the second output to the corresponding requester; wherein, for a second subset of the prompts, the at least one remediation action comprises: blocking an Internet Protocol (IP) and/or medium access control address (MAC) address of a corresponding requester from accessing the first GenAI model.

Claim 22 (instant application):
A computer-implemented method comprising: receiving, from a requester, a prompt for ingestion by a first generative artificial intelligence (GenAI) model; inputting the prompt into the first GenAI model to obtain a first output; determining whether the first output comprises or elicits malicious or undesired model behavior; initiating remediation actions when it is determined that the first output comprises or elicits malicious or undesired model behavior comprising: inputting the prompt along with obfuscation instructions into a second GenAI model to obtain a second output; and returning data characterizing the second output to the requester; and returning data characterizing the first output to the requester when it is determined that the first output does not comprise or elicit malicious or undesired model behavior.

Claim 18 ('926 patent): A computer-implemented method comprising: receiving, from each of a plurality of requesters, a prompt for ingestion by a first generative artificial intelligence (GenAI) model; inputting, for each prompt, the prompt into the first GenAI model to obtain a first output; determining, for each first output, that the first output comprises or elicits malicious or undesired model behavior; initiating, in response to the determining, at least one remediation action; wherein, for a first subset of the prompts, the at least one remediation action comprises: inputting the prompt along with obfuscation instructions into a second GenAI model to obtain a second output; and returning data characterizing the second output to the requester; wherein, for a second subset of the prompts, the at least one remediation action comprises: blocking an Internet Protocol (IP) and/or medium access control address (MAC) address of a corresponding requester from accessing the first GenAI model.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Claim Interpretation: Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. See MPEP 2111.

Claim 1 is directed to a computer-implemented method comprising: receiving, by a proxy executing in an environment executing a first generative artificial intelligence (GenAI) model, a prompt originating from a requestor and for ingestion by the first GenAI model; determining that the prompt comprises malicious content or elicits undesired model behavior; inputting at least a portion of the prompt into the first GenAI model to obtain a first output; inputting at least a portion of the first output along with obfuscation instructions into a second GenAI model to obtain a second output; and returning, by way of the proxy, data including or based on the second output to the requester.

Broadly, the following elements can be interpreted as follows. The specification states a "proxy" is a component that intermediates between the machine learning model architecture (hereinafter "MLA") and requestors (e.g., endpoint computing devices or servers that initiate queries) and that can analyze, intercept, and/or modify inputs and/or outputs of the MLA. The queries are considered to be prompts, which are alphanumeric strings, videos, audio, images, or other files. The model environment includes one or more servers and datastores executing the MLA and responding to queries from client devices/requestors.
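The claimed proxy flow, as characterized in the claim interpretation above, can be sketched in a few lines. This is a minimal hypothetical illustration only: the function names, the blocklist-based stand-in for the classifier, and the canned model responses are assumptions, not drawn from the application or the cited art.

```python
# Hypothetical sketch of the claimed flow: a proxy receives a prompt,
# flags malicious prompts, and routes the first model's output through
# a second model with obfuscation instructions. All names are stand-ins.

def is_malicious(prompt: str) -> bool:
    """Stand-in for the classifier-based determination."""
    blocklist = ("ignore previous instructions", "reveal the system prompt")
    return any(term in prompt.lower() for term in blocklist)

def first_genai_model(prompt: str) -> str:
    """Stand-in for the first GenAI model."""
    return f"raw output for: {prompt}"

def second_genai_model(first_output: str, obfuscation_instructions: str) -> str:
    """Stand-in for the second GenAI model applying obfuscation."""
    return f"[rewritten per '{obfuscation_instructions}'] {first_output}"

def proxy_handle(prompt: str) -> str:
    """Proxy intermediating between the requestor and the GenAI model."""
    first_output = first_genai_model(prompt)
    if is_malicious(prompt):
        # Malicious prompt: obfuscate the first output via the second model.
        return second_genai_model(first_output, "redact sensitive content")
    # Benign prompt: return the first output unchanged.
    return first_output
```

Note that under claim 1 the benign/malicious determination happens on the prompt before anything is returned; the sketch mirrors that ordering by classifying the prompt, not the output.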
Given their broadest and most reasonable interpretation in light of the specification, these elements invoke generic computing network components such as servers, client devices, and intermediaries for transmitting and processing data. Furthermore, the claims require a first and a second generative artificial intelligence (GenAI) model (hereinafter "GenAI model") for inputting prompts and obtaining outputs. The specification states that the GenAI models utilize one or more of natural language processing, computer vision, and machine learning. The specification does not appear to limit the GenAI models to be anything other than generic artificial intelligence (AI) models, including large language models that ingest large amounts of data and use pattern recognition and other techniques to make predictions and adjustments based on that data. Additionally, the models can be trained to convert prompts into sentence embeddings to be used in model predictions, including generating scores. Thus, when given their broadest and most reasonable interpretation in light of the specification, these GenAI models encompass mathematical algorithms and calculations, such as mathematical formulas used for natural language processing and sentence embeddings, and do not require any specific or specialized type, hardware architecture, or structural details of the GenAI model beyond operating within the MLA and ingesting data to obtain outputs.

The claim recites determining that the prompt comprises malicious content or elicits undesired model behavior. The specification states that "malicious" refers to actions which cause the GenAI model to respond in an undesired manner, and states that the determination can be performed by a classifier, which can be a machine learning model trained on a plurality of prompts that contain various character strings (which can include portions of alphanumeric symbols, non-printable characters, symbols, controls, etc.)
and the like, which encapsulate various malicious content, elicit malicious content, or otherwise elicit undesired model behavior, with the classifier predicting a confidence. The output of the model can take varying forms including, for example, a score closer to 1 indicating that the prompt is malicious and a score closer to 0 indicating that the prompt is benign. The model prediction for the multi-class classifiers can identify a category for the prompt (i.e., a class for which the classifier 192, 194 has been trained). When given their broadest reasonable interpretation in light of the specification, the determination encompasses mental observation of the text prompt and mathematical calculations for the malicious determination.

Lastly, the claim states inputting at least a portion of the first output along with obfuscation instructions into a second GenAI model to obtain a second output. The specification states the obfuscation instructions can take various forms including how to modify the original output, aspects of the original output to delete, and other measures to make the output benign or otherwise compliant (e.g., complying with a blocklist, policy, etc.). As such, these have been broadly and reasonably interpreted as instructions (e.g., directives) that control the behavior of the GenAI model.

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claims recite a method, which is directed to a series of steps or acts and falls within one of the statutory categories of invention. (Step 1: YES).

Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is "directed to" the judicial exception.
This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).

Claim 1 recites a computer-implemented method comprising: receiving, by a proxy executing in an environment executing a first generative artificial intelligence (GenAI) model, a prompt originating from a requestor and for ingestion by the first GenAI model; determining that the prompt comprises malicious content or elicits undesired model behavior; inputting at least a portion of the prompt into the first GenAI model to obtain a first output; inputting at least a portion of the first output along with obfuscation instructions into a second GenAI model to obtain a second output; and returning, by way of the proxy, data including or based on the second output to the requester.

These limitations (determining), as drafted, are processes that, under their broadest reasonable interpretation, cover performance of mental evaluations that can be performed in the human mind but for the recitation of generic computer components (proxy, GenAI model, environment). That is, other than reciting "proxy, GenAI model, environment," nothing in the claim element precludes a mental evaluation for the performance of the limitations. For example, but for the "proxy, GenAI model, environment" components, the step of "determining" in the context of this claim encompasses a human reading a text prompt, evaluating the text's meaning, and using judgment to decide if the prompt is malicious or otherwise seeking undesired model behavior.
Additionally, the limitation directed to "inputting at least a portion of the first output along with obfuscation instructions into a second GenAI model to obtain a second output" can require editing or redacting a response based on instructions (e.g., striking out or deleting social security numbers in a spreadsheet per the instruction of a supervisor), which can be performed in the human mind or with the aid of a tool such as pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Lastly, the specification further states the determination step and the operation of the GenAI models (e.g., inputs and outputs) are directed to sentence embeddings and generation of numeric scores, which fall within the mathematical concepts grouping of abstract ideas. As the claim recites both mathematical concepts and mental processes, it is determined that the claim recites multiple abstract ideas; and, as MPEP 2106.04 requires that a claim should not be parsed into multiple exceptions, the limitations together are considered as a single abstract idea. (Step 2A, Prong One: YES).

Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is "directed to" the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). This judicial exception is not integrated into a practical application.
In particular, the claim only recites the additional elements of receiving, by a proxy executing in an environment; a requestor; a first GenAI model; a second GenAI model; and returning, by way of the proxy, data. The limitations "receiving, by a proxy executing in an environment executing a first generative artificial intelligence (GenAI) model, a prompt originating from a requestor and for ingestion by the first GenAI model;" and "returning, by way of the proxy, data including or based on the second output to the requester." are mere data gathering and outputting and are recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP 2106.05(g) ("whether the limitation is significant"). In addition, all uses of the recited judicial exceptions require such data gathering and post-solution activity, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05. The limitations directed to the elements of a proxy, requestor, and environment are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. They are not specialized components, nor do they reflect any improvement to the functioning of the environment. Furthermore, the recitation directed to inputting at least a portion of the prompt into the first GenAI model to obtain a first output, and inputting at least a portion of the first output along with obfuscation instructions into a second GenAI model to obtain a second output, merely indicates a field of use or technological environment in which the judicial exception is performed. See MPEP 2106.05(h), which states "...
that this type of limitation merely confines the use of the abstract idea to a particular technological environment and thus fails to add an inventive concept to the claims." No technical improvement or transformation of data is disclosed, nor is any specific configuration of the hardware or specialized hardware claimed. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements described above amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. (Step 2A: YES).

Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. One way to determine integration into a practical application is when the claimed invention improves the functioning of a computer or improves another technology or technical field. To evaluate an improvement to a computer or technical field, the specification must set forth an improvement in technology and the claim itself must reflect the disclosed improvement. See MPEP 2106.04(d)(1) and 2106.05(a).
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the determining and obfuscating steps were considered to be a mental process in Step 2A, and thus they are re-evaluated in Step 2B to determine if they are more than what is well-understood, routine, conventional activity in the field. As discussed in Step 2A, Prong Two, the only additional elements beyond the abstract idea are the proxy, environment, and GenAI models, which are generic and conventional. The additional elements of receiving prompts and returning responses were found to be insignificant extra-solution activity in Step 2A, Prong Two and are recited at a high level of generality. These elements amount to receiving or transmitting data over a network and are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The additional elements described above amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. (Step 2B: NO). Therefore, claim 1 is directed to non-statutory subject matter. Additionally, claims 11, 21, and 22 are rejected for at least the reasons mentioned above. Additionally, dependent claims 2-10 and 12-20 (e.g., classification, model, obfuscation instruction, modification) are rejected as they do not recite additional elements that amount to significantly more than the judicial exception, as they are only directed towards further limitations of mathematical formulas, mental evaluations, and longstanding conventional human activities; these limitations merely provide details on how to perform the abstract idea, define the instructions and modification of data, or use the model as a tool to execute the abstract idea.
Considered individually or in combination with claim 1, these claims lack an inventive concept and merely apply an abstract idea using well-understood, routine, conventional activity.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7, 10, 17, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Regarding claims 7 and 17, the limitation directed to "which requires modification" renders the scope of the claim indefinite. Specifically, the metes and bounds of the claim are unclear because it is unclear what constitutes a "requirement" for modification. The claim is able to identify a portion of an output that needs to be modified but fails to provide a standard to ascertain the boundary of the requirement. Is it a simple misspelling in the output, the inclusion of sensitive data, a policy violation? The claim appears to cover an infinite number of possibilities, including requirements that may not be known as of the effective filing date of the application. As the claim fails to define the triggers of what the requirement is, one of ordinary skill in the art would not be able to ascertain the scope of the claimed invention.

Regarding claims 10 and 20, the claims appear to be verbatim duplicates of each other, both depending on claim 1.
For the purpose of examination, the examiner will presume claim 20's dependency to be a typo and will treat it as being dependent on claim 11.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 11, 15-17, 20, and 22 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 12,111,747 to Jain et al. (hereinafter "Jain").

Claim 11: Jain teaches a computer-implemented method comprising: receiving, by a proxy executing in an environment executing a first generative artificial intelligence (GenAI) model, data characterizing a prompt initiated by a requestor and for ingestion by the first GenAI model; [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses receiving prompts from requestors via an access control 404 (e.g., proxy).] inputting at least a portion of the received data into the first GenAI model to obtain a first output; [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses inputting the received prompt to generate a first output in the form of a test output/first code sample.] determining that the first output comprises or elicits malicious or undesired content; [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses determining that validation criteria indicate an anomaly through sentiment testing.]
inputting at least a portion of the first output along with obfuscation instructions to a second GenAI model to obtain a second output; [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses providing a first code sample (e.g., first output), a validation error, and the input into an LLM (e.g., second GenAI model) to generate a second output, and provides the second output to a requestor.] and returning data characterizing the second output to the requester. [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses providing data to the requester.]

Claim 15: Jain teaches the method of claim 11, wherein the first GenAI model is the same as the second GenAI model. [Jain; Abstract, Col 26 Ln 61 – Col 27 Ln 9, Col 48 Ln 54 – Col 52 Ln 13 – Jain discloses the first GenAI model is the same as the second GenAI model.]

Claim 16: Jain teaches the method of claim 11, wherein at least one of the first GenAI model or the second GenAI model comprises a large language model. [Jain; Abstract, Col 26 Ln 61 – Col 27 Ln 9, Col 48 Ln 54 – Col 52 Ln 13 – Jain discloses the first or second models are large language models.]

Claim 17: Jain teaches the method of claim 11, wherein the obfuscation instructions identify a portion of content in the first output which requires modification. [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses providing an indication of a validation error, which includes an indication of a criterion of the validation criteria that is not satisfied by the test output.]

Claim 20: Jain teaches the method of claim 11 further comprising inputting at least a portion of the received data into the second GenAI model along with the at least a portion of the first output and the obfuscation instructions to obtain the second output.
[Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses providing a first code sample (e.g. first output), the validation error, and the input into an LLM (e.g. second GenAI model) to generate a second output, and providing the second output to a requestor.]

Claim 22

Jain teaches a computer-implemented method comprising:

receiving, from a requester, a prompt for ingestion by a first generative artificial intelligence (GenAI) model; [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses receiving prompts from requestors via an access control 404 (e.g. proxy).]

inputting the prompt into the first GenAI model to obtain a first output; [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses inputting the received prompt to generate a first output in the form of a test output/first code sample.]

determining whether the first output comprises or elicits malicious or undesired model behavior; [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses determining whether the validation criteria indicate an anomaly through sentiment testing.]

initiating remediation actions when it is determined that the first output comprises or elicits malicious or undesired model behavior comprising: inputting the prompt along with obfuscation instructions into a second GenAI model to obtain a second output; [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses providing a first code sample (e.g. first output), the validation error, and the input into an LLM (e.g. second GenAI model) to generate a second output, and providing the second output to a requestor.]

and returning data characterizing the second output to the requester; [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses providing data to the requester.]

and returning data characterizing the first output to the requester when it is determined that the first output does not comprise or elicit malicious or undesired model behavior. [Jain; Abstract, Col 13 Ln 48 – Col 14 Ln 12, Col 26 Ln 61 – Col 27 Ln 9, Col 46 Ln 10 – Col 52 Ln 13 – Jain discloses providing data to the requester if the first output passes validation.]

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 12-14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over US 12111747 to Jain et al. (hereinafter “Jain”) in view of US 20240054233 to Ohayon et al. (hereinafter “Ohayon”), retrieved from the IDS dated 01/27/2026.

Claim 12

Jain teaches the method of claim 11 but fails to explicitly teach the following limitation, which Ohayon teaches: wherein the determination is based on a classification by a classifier. [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032, 0113, 0129 – Ohayon discloses the ML/DL/AI Protection Unit classifying inputs as legitimate or malicious/adversarial/etc.]
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Jain, for protecting ML/DL/AI systems against attacks, or for mitigating or curing attacks, or for curing, reversing, decreasing, or isolating the damage caused (or attempted) by attacks, as disclosed by Ohayon at Para. 0013.

Claim 13

Jain teaches the method of claim 11 but fails to explicitly teach the following limitation, which Ohayon teaches: wherein the determination is based on a blocklist defining content deemed to be malicious or eliciting undesired model behavior. [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032, 0113, 0129 – Ohayon discloses the ML/DL/AI Protection Unit utilizing a blacklist defining prohibited inputs.]

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Jain, for protecting ML/DL/AI systems against attacks, or for mitigating or curing attacks, or for curing, reversing, decreasing, or isolating the damage caused (or attempted) by attacks, as disclosed by Ohayon at Para. 0013.

Claim 14

Jain teaches the method of claim 11 but fails to explicitly teach the following limitation, which Ohayon teaches: wherein the first GenAI model is different than the second GenAI model. [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032, 0113, 0115, 0129 – Ohayon discloses the use of secondary ML/AI/DL engines that are different than the main ML/AI/DL engine.]
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Jain, for protecting ML/DL/AI systems against attacks, or for mitigating or curing attacks, or for curing, reversing, decreasing, or isolating the damage caused (or attempted) by attacks, as disclosed by Ohayon at Para. 0013.

Claim 18

Jain teaches the method of claim 17 but fails to explicitly teach the following limitation, which Ohayon teaches: wherein the modification comprises generating synthetic data corresponding to the identified portion of content. [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032, 0113, 0115, 0129 – Ohayon discloses generating synthetic data by sending false, random, or pseudo-random results.]

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Jain, for protecting ML/DL/AI systems against attacks, or for mitigating or curing attacks, or for curing, reversing, decreasing, or isolating the damage caused (or attempted) by attacks, as disclosed by Ohayon at Para. 0013.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over US 12111747 to Jain et al. (hereinafter “Jain”) in view of US 20240160902 to Padgett et al. (hereinafter “Padgett”), retrieved from the IDS dated 01/10/2025.

Claim 19

Jain teaches the method of claim 17 but fails to explicitly teach the following limitation, which Padgett teaches: wherein the modification comprises redacting data corresponding to the identified portion of content. [Padgett; Para. 0129, 0130 – Padgett discloses modifying the output by redacting identified portions, with instructions to redact, and re-running the model.]
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Jain, in order to provide a mechanism for preventing the output of potentially sensitive information (such as the sensitive data of Jain's Abstract).

Claims 1-8, 10 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over US 20240054233 to Ohayon et al. (hereinafter “Ohayon”), retrieved from the IDS dated 01/27/2026, in view of US 12111747 to Jain et al. (hereinafter “Jain”).

Claim 1

Ohayon teaches a computer-implemented method comprising:

receiving, by a proxy executing in an environment executing a first generative artificial intelligence (GenAI) model, a prompt originating from a requestor and for ingestion by the first GenAI model; [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0033 – Ohayon discloses an AI Pipeline executing ML/AI/DL Engines that receive prompts from a requestor by way of a ML/DL/AI Protection Unit.]

determining that the prompt comprises malicious content or elicits undesired model behavior; [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032 – Ohayon discloses the ML/DL/AI Protection Unit classifying inputs as legitimate or malicious/adversarial/etc.]

inputting at least a portion of the prompt into the first GenAI model to obtain a first output; [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032 – Ohayon discloses the ML/DL/AI Protection Unit modifying outputs to adversarial entities (e.g. a previous input determined to be malicious but not discarded).]

inputting at least a portion of the first output along with obfuscation instructions into a second GenAI model to obtain a second output; and returning, by way of the proxy, data including or based on the second output to the requester. [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032 – Ohayon discloses the ML/DL/AI Protection Unit providing an output to the requester.]
Ohayon fails to explicitly teach the following limitation of claim 1, which Jain teaches: inputting at least a portion of the first output along with obfuscation instructions into a second GenAI model to obtain a second output; [Jain; Abstract, Col 26 Ln 61 – Col 27 Ln 9, Col 48 Ln 54 – Col 52 Ln 13 – Jain discloses providing a first code sample (e.g. first output), the validation error, and the input into an LLM (e.g. second GenAI model) to generate a second output, and providing the second output to a requestor.]

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Ohayon, in order to improve the security, reliability, and modularity of data pipelines, as disclosed by Jain at Col 3 Ln 59-61.

Claim 2

Ohayon teaches the method of claim 1, wherein the determination is based on a classification by a classifier. [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032, 0113, 0129 – Ohayon discloses the ML/DL/AI Protection Unit classifying inputs as legitimate or malicious/adversarial/etc.]

Claim 3

Ohayon teaches the method of claim 1, wherein the determination is based on a blocklist defining content deemed to be malicious or eliciting undesired model behavior. [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032, 0113, 0129 – Ohayon discloses the ML/DL/AI Protection Unit utilizing a blacklist defining prohibited inputs.]

Claim 4

Ohayon teaches the method of claim 1, wherein the first GenAI model is different than the second GenAI model. [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032, 0113, 0115, 0129 – Ohayon discloses the use of secondary ML/AI/DL engines that are different than the main ML/AI/DL engine.]

Claim 5

Ohayon teaches the method of claim 1 but fails to explicitly teach the following limitation, which Jain teaches: wherein the first GenAI model is the same as the second GenAI model.
[Jain; Abstract, Col 26 Ln 61 – Col 27 Ln 9, Col 48 Ln 54 – Col 52 Ln 13 – Jain discloses that the first GenAI model is the same as the second GenAI model.]

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Ohayon, in order to improve the security, reliability, and modularity of data pipelines, as disclosed by Jain at Col 3 Ln 59-61.

Claim 6

Ohayon teaches the method of claim 1, wherein at least one of the first GenAI model or the second GenAI model comprises a large language model. [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032, 0113, 0115, 0129 – Ohayon discloses that the ML/AI/DL engine is an LLM.]

Claim 7

Ohayon teaches the method of claim 1 but fails to explicitly teach the following limitation, which Jain teaches: wherein the obfuscation instructions identify a portion of content in the first output which requires modification. [Jain; Abstract, Col 26 Ln 61 – Col 27 Ln 9, Col 48 Ln 54 – Col 52 Ln 13 – Jain discloses providing an indication of a validation error, which includes an indication of a criterion of the validation criteria that is not satisfied by the test output.]

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Ohayon, in order to improve the security, reliability, and modularity of data pipelines, as disclosed by Jain at Col 3 Ln 59-61.

Claim 8

Ohayon teaches the method of claim 7, wherein the modification comprises generating synthetic data corresponding to the identified portion of content. [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032, 0113, 0115, 0129 – Ohayon discloses generating synthetic data by sending false, random, or pseudo-random results.]
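For readers unfamiliar with the technology, claims 7 and 8 (and the parallel claims 17 and 18) recite obfuscation instructions that identify a portion of the first output requiring modification, where the modification generates synthetic data in place of the flagged content. A minimal illustrative sketch of that step is given below; the helper names and span-based instruction format are hypothetical and do not come from Jain, Ohayon, or the application itself:

```python
import random
import string

def synthesize_like(token: str) -> str:
    """Produce synthetic data with the same shape as the flagged token:
    digits become random digits, letters become random letters, and
    punctuation is preserved, so the replacement reads plausibly."""
    return "".join(
        random.choice(string.digits) if ch.isdigit()
        else random.choice(string.ascii_letters) if ch.isalpha()
        else ch
        for ch in token
    )

def apply_obfuscation_instructions(output: str, flagged_spans: list) -> str:
    """Replace each flagged (start, end) span in the model output with
    synthetic data, working right-to-left so earlier offsets stay valid."""
    for start, end in sorted(flagged_spans, reverse=True):
        output = output[:start] + synthesize_like(output[start:end]) + output[end:]
    return output
```

Under this reading, the "obfuscation instructions" reduce to a list of spans to rewrite; the claims themselves leave the instruction format open.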
Claim 10

Ohayon teaches the method of claim 1 but fails to explicitly teach the following limitation, which Jain teaches: further comprising inputting at least a portion of the received data into the second GenAI model along with the at least a portion of the first output and the obfuscation instructions to obtain the second output. [Jain; Abstract, Col 26 Ln 61 – Col 27 Ln 9, Col 48 Ln 54 – Col 52 Ln 13 – Jain discloses providing a first code sample (e.g. first output), the validation error, and the input into an LLM (e.g. second GenAI model) to generate a second output, and providing the second output to a requestor.]

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Ohayon, in order to improve the security, reliability, and modularity of data pipelines, as disclosed by Jain at Col 3 Ln 59-61.

Claim 21

Ohayon teaches a computer-implemented method comprising:

receiving, from a requester by way of a proxy executing in a model environment, a prompt for ingestion by a first generative artificial intelligence (GenAI) model executing in the model environment; [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0033 – Ohayon discloses an AI Pipeline executing ML/AI/DL Engines that receive prompts from a requestor by way of a ML/DL/AI Protection Unit.]

determining that the prompt comprises or elicits malicious content or undesired model behavior; [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032 – Ohayon discloses the ML/DL/AI Protection Unit classifying inputs as legitimate or malicious/adversarial/etc.]

inputting the prompt into the first GenAI model to obtain a first output; [Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032 – Ohayon discloses the ML/DL/AI Protection Unit modifying outputs to adversarial entities (e.g. a previous input determined to be malicious but not discarded).]

returning data characterizing the second output to the requester.
[Ohayon; Para. 0005, 0006, 0019, 0021, 0023, 0031, 0032 – Ohayon discloses the ML/DL/AI Protection Unit providing an output to the requester.]

While Ohayon teaches the method of claim 21, and teaches that the GenAI models operate in different model environments, Ohayon fails to explicitly teach the following limitation, which Jain teaches: inputting the prompt, the first output, and obfuscation instructions into a second GenAI model to obtain a second output; [Jain; Abstract, Col 26 Ln 61 – Col 27 Ln 9, Col 48 Ln 54 – Col 52 Ln 13 – Jain discloses providing a first code sample (e.g. first output), the validation error, and the input into an LLM (e.g. second GenAI model) to generate a second output, and providing the second output to a requestor.]

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Ohayon, in order to improve the security, reliability, and modularity of data pipelines, as disclosed by Jain at Col 3 Ln 59-61.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over US 20240054233 to Ohayon et al. (hereinafter “Ohayon”), retrieved from the IDS dated 01/27/2026, in view of US 12111747 to Jain et al. (hereinafter “Jain”), and further in view of US 20240160902 to Padgett et al. (hereinafter “Padgett”), retrieved from the IDS dated 01/10/2025.

Claim 9

While Ohayon and Jain teach the method of claim 1, the combination fails to explicitly teach the following limitation, which Padgett teaches: wherein the modification comprises redacting data corresponding to the identified portion of content. [Padgett; Para. 0129, 0130 – Padgett discloses modifying the output by redacting identified portions, with instructions to redact, and re-running the model.]
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by the combination, in order to provide a mechanism for preventing the output of potentially sensitive information (such as the sensitive data of Jain's Abstract or the personal/private data of Ohayon, Para. 0015).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER C HARRIS, whose telephone number is (571) 270-7841. The examiner can normally be reached Monday through Friday between 8:00 AM and 4:00 PM CST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey L Nickerson, can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER C HARRIS/
Primary Examiner, Art Unit 2432
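For context on what is being rejected: independent claims 11 and 22 recite a guard-rail pipeline in which a proxy receives a prompt, runs it through a first GenAI model, screens the first output, and, on a finding of malicious or undesired content, re-generates through a second model primed with obfuscation instructions. A minimal sketch of that control flow follows; the function and parameter names are hypothetical stand-ins, not drawn from Jain or the application's actual implementation:

```python
from typing import Callable

def guarded_generate(
    prompt: str,
    first_model: Callable[[str], str],
    second_model: Callable[[str], str],
    is_flagged: Callable[[str], bool],
    obfuscation_instructions: str,
) -> str:
    """Proxy-style guard rail: screen the first model's output and, on a
    hit, remediate by re-generating through the second model with the
    obfuscation instructions, prompt, and first output as its input;
    otherwise pass the first output through unchanged."""
    first_output = first_model(prompt)
    if is_flagged(first_output):
        # Remediation branch: claim 11 feeds the first output plus
        # instructions to the second model; claim 22 feeds the prompt
        # plus instructions; claim 21 recites all three, as here.
        return second_model(f"{obfuscation_instructions}\n{prompt}\n{first_output}")
    return first_output
```

The distinction the claims draw, and that the §102/§103 analysis above maps onto Jain's validate-and-regenerate loop, is essentially which of these inputs reaches the second model.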

Prosecution Timeline

Aug 15, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §101, §102, §103
Mar 26, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602467
In-memory scan for threat detection with binary instrumentation backed generic unpacking, decryption, and deobfuscation
2y 5m to grant Granted Apr 14, 2026
Patent 12585746
AUTHENTICATION SYSTEM, USER DEVICE, AND KEY INFORMATION TRANSMISSION METHOD
2y 5m to grant Granted Mar 24, 2026
Patent 12580915
SERVICE ACCESS METHOD AND APPARATUS
2y 5m to grant Granted Mar 17, 2026
Patent 12572668
DATA SECURITY USING REQUEST-SUPPLIED KEYS
2y 5m to grant Granted Mar 10, 2026
Patent 12561460
System And Method for Performing Security Analyses of Digital Assets
2y 5m to grant Granted Feb 24, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+26.2%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 362 resolved cases by this examiner. Grant probability derived from career allow rate.
