DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: --Systems and Methods for Managing Sensitive Data in Large Language Model Prompting--.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 as being directed to an abstract idea in the form of a patent-ineligible mental process under the broadest reasonable interpretation (BRI).
Independent claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites a process that, as drafted and under its BRI, covers performance of the limitations as a mental process.
With regard to the process of claim 1, the claimed functionality could be practiced as a mental process in the following manner:
receiving data indicative of a request, wherein the data comprises sensitive information (a human can read text information pertaining to a request (e.g., for billing information, health information, etc.) and mentally understand the request that includes sensitive information such as personally identifiable information or personal health information);
transforming at least a portion of the data indicative of the request into a modified request based on replacing at least one portion of the sensitive information with generic information (a person using pen and paper can rewrite the request to include generic information in place of the sensitive information); and
causing generation of a response to the request based on sending the modified request to a machine learning model, wherein the machine learning model is configured to generate data indicative of the response to the request without accessing the at least one portion of the sensitive information (a human can use their hands to manually type a request into an ML model, which would effectively cause the response to be generated; Applicant is reminded that the use of the ML model is passive: only a request is sent to such a model, and the model is not recited as performing any processing on the input data).
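For illustration only, the masking-and-forwarding flow characterized in the limitations above can be sketched as follows. This is a minimal sketch of the claimed concept, not the applicant's or any reference's actual implementation; the function name, placeholder format, and patterns are all hypothetical:

```python
import re

# Hypothetical patterns for two kinds of sensitive information (PII).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_request(request: str):
    """Replace each portion of sensitive information with a generic placeholder.

    Returns the modified request plus a local mapping, so the sensitive values
    never need to be sent to the machine learning model.
    """
    mapping = {}
    modified = request
    for label, pattern in SENSITIVE_PATTERNS.items():
        for i, match in enumerate(pattern.findall(modified)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            modified = modified.replace(match, placeholder, 1)
    return modified, mapping

modified, mapping = mask_request("Bill john@example.com, SSN 123-45-6789")
# The model would receive only `modified`; `mapping` stays local.
# modified == "Bill [EMAIL_0], SSN [SSN_0]"
```

Under this sketch, the model never receives the original values; only the placeholder-bearing request is forwarded, consistent with the "without accessing" limitation.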
Method claim 9 contains subject matter similar to that of claim 1, where the request takes the form of text that can be read and understood mentally by a human and the LLM has passive involvement similar to the machine learning model of claim 1. Accordingly, this claim is directed to a mental process under the BRI under a similar rationale.
Method claim 14 contains subject matter similar to that of claim 1, with an additional step of generating a response by replacing generic information with the at least one portion of the sensitive information. In this case, a human can read the generic response and then re-insert the sensitive information using pen and paper, and the remainder of the claim is directed to an abstract idea for reasons similar to claim 1.
This judicial exception is not integrated into a practical application. Outside of the identified abstract idea, the invention set forth in the independent claims does not include any other limitations under the BRI. It is again pointed out that the involvement of an LLM or machine learning model is only passive; even if such models had been recited in active processing steps, LLMs recited at the current high level of generality to perform steps otherwise carried out as a mental process would not be sufficient to qualify as an LLM invented or improved by Applicant (e.g., via a specific model structure or training process) or as a particular technical LLM processing technique. Since there are no limitations in addition to the abstract idea, and an inventive concept "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself," Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016), the independent claims are not found eligible at Step 2A, Prong Two.
The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because the independent claims contain no elements in addition to the identified abstract idea under the BRI, not even elements that are well-understood, routine, and conventional.
Accordingly, at least independent claims 1, 9, and 14 are not patent eligible under 35 U.S.C. 101.
The remaining dependent claims fail to add patent eligible subject matter to their respective parent claims:
Claims 2 and 15-16 add text information that can be mentally considered and understood by a human, along with an LLM that has passive involvement, per the analysis of the independent claims.
Claim 3 regards a human receiving a generic response (e.g., via reading/mental processing) and using pen and paper to rewrite the response with the sensitive information.
Claim 4 regards a human manually deleting sensitive information such as by using an eraser or a pen.
Claims 5, 10, and 17 regard parsing/dividing text data, which can be performed manually by a human logically segmenting text.
Claims 6, 11, and 18 regard manually calculating a score (e.g., a word count, a weighted word count by type, etc.) for the amount of sensitive information.
Claims 7-8, 12-13, and 19-20 regard a simple mathematical relationship that can also be evaluated by a human by mentally comparing two numbers in the form of a score value and a threshold value to either deem the sensitive information successfully masked or to further manually mask such information.
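For illustration of the score-and-threshold comparison the rejection characterizes as a mental step in claims 6-8, 11-13, and 19-20, a minimal sketch follows. It assumes, hypothetically, that the score is a simple count of residual sensitive terms; the function names and the counting scheme are not drawn from the application:

```python
def sensitivity_score(modified_request: str, sensitive_terms) -> int:
    """Score indicating the amount of sensitive information remaining
    in the modified request (here: a simple occurrence count)."""
    return sum(modified_request.count(term) for term in sensitive_terms)

def satisfies_threshold(score: int, threshold: int) -> bool:
    """The score satisfies the threshold when the amount of sensitive
    information is less than or equal to the target level."""
    return score <= threshold

terms = ["john@example.com", "123-45-6789"]
score = sensitivity_score("Bill [EMAIL_0], SSN [SSN_0]", terms)
# score == 0: fully masked, so the threshold of 0 is satisfied
# and the modified request may be sent to the model.
```

As the rejection notes, the comparison itself reduces to comparing two numbers; the sketch merely makes that relationship concrete.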
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-5, 9-10, and 14-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kesarwani, et al. (U.S. PG Publication: 2025/0265420 A1).
With respect to Claim 1, Kesarwani discloses:
A method comprising:
receiving data indicative of a request, wherein the data comprises sensitive information (receiving a conversational input from a user that can take the form of a query or request including PII, Paragraphs 0026, 0029-0030, and 0034);
transforming at least a portion of the data indicative of the request into a modified request based on replacing at least one portion of the sensitive information with generic information (operations of a PII masking component that detects "any personally identifiable information (PII) contained within the received conversational user input...and replace[s] any PII terms...with generic nouns or other placeholders," Paragraph 0034; Fig. 2, Element 214); and
causing generation of a response to the request based on sending the modified request to a machine learning model, wherein the machine learning model is configured to generate data indicative of the response to the request without accessing the at least one portion of the sensitive information (after PII is masked, the machine learning model (i.e., a large language model (LLM)) generates a response using the masked conversational input that does not access the PII/sensitive information, Paragraphs 0034 and 0038; see also the processing flow shown in Fig. 2, where PII is masked prior to prompt generation to an LLM and the LLM generates a response prior to having PII terms re-inserted via augmentation at a response generator).
With respect to Claim 2, Kesarwani further discloses:
The method of claim 1, wherein the data indicative of the request (Paragraph 0033- "input text data corresponding to a conversational input") and the data indicative of the response comprise text (Paragraph 0039- "textual content of the automated response"), and wherein the machine learning model comprises a large language model (LLM) (LLM, Paragraph 0033).
With respect to Claim 3, Kesarwani further discloses:
The method of claim 1, further comprising:
receiving, from the machine learning model, the data indicative of the response to the request, wherein the data indicative of the response to the request comprises at least one portion of the generic information (receiving a "conversational response" from the LLM that includes generic structure, intent, etc. and does not include the PII that has been masked with "semantically consistent" placeholders, Paragraphs 0034 and 0038); and
generating the response to the request based on replacing the at least one portion of the generic information with the at least one portion of the sensitive information (NLP techniques are used to augment the response by replacing the generic information with the PII while maintaining similar syntax, intent, etc., Paragraph 0038; Fig. 2, Element 222).
With respect to Claim 4, Kesarwani further discloses:
The method of claim 1, further comprising:
removing at least one other portion of the sensitive information from the data indicative of the request to generate the portion of the data indicative of the request (removal of more than one type of PII (i.e., see "replace any PII terms"), for example, a name of a person and address, Paragraph 0034).
With respect to Claim 5, Kesarwani further discloses:
The method of claim 1, further comprising:
dividing the modified request into a plurality of modified requests, wherein sending the modified request to the machine learning model comprises sending the plurality of modified requests to the machine learning model ("any PII terms," such as a name and an address, are replaced with "semantically consistent" placeholders, Paragraph 0034; in this manner the request would include a divided plurality of modified requests: one in which a placeholder is provided for a name request and one in which a placeholder is provided for an address request; see the processing flow of Fig. 2, where the modified requests are provided to an LLM to generate a conversational reply).
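For illustration of the "dividing" limitation mapped above, a minimal sketch follows. It hypothetically divides a modified (placeholder-bearing) request into one sub-request per sentence; the splitting rule and function name are assumptions, not taken from the application or from Kesarwani:

```python
def divide_request(modified_request: str):
    """Divide the modified request into a plurality of modified requests,
    here one sub-request per sentence, each of which could be sent to the
    machine learning model separately."""
    parts = [p.strip() for p in modified_request.split(".") if p.strip()]
    return [part + "." for part in parts]

divide_request("Update the address for [NAME_0]. Send the bill to [EMAIL_0].")
# → ["Update the address for [NAME_0].", "Send the bill to [EMAIL_0]."]
```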
With respect to Claim 9, Kesarwani discloses:
receiving text indicative of a first request, wherein the text comprises sensitive information (receiving a conversational input from a user that can take the form of a query or request that includes PII/sensitive information, Paragraphs 0026, 0029-0030, and 0034; see also Paragraph 0033- "input text data corresponding to a conversational input");
transforming at least a portion of the text indicative of the request into a second request based on replacing at least one portion of the sensitive information with generic information (operations of a PII masking component that detects "any personally identifiable information (PII) contained within the received conversational user input...and replace[s] any PII terms...with generic nouns or other placeholders," Paragraph 0034; Fig. 2, Element 214); and
causing generation of a response to the first request based on sending the second request to a large language model (LLM), wherein the LLM is configured to generate text indicative of the response to the first request without accessing the at least one portion of the sensitive information (after PII is masked, a large language model (LLM) generates a response using the masked conversational input that does not access the PII/sensitive information, Paragraphs 0034 and 0038; see also the processing flow shown in Fig. 2, where PII is masked prior to prompt generation to an LLM and the LLM generates a response prior to having PII terms re-inserted via augmentation at a response generator; see also Paragraph 0039- "textual content of the automated response").
With respect to Claim 10, Kesarwani further discloses:
The method of claim 9, further comprising:
dividing the second request into a plurality of second requests, wherein sending the second request to the LLM comprises sending the plurality of second requests to the LLM ("any PII terms," such as a name and an address, are replaced with "semantically consistent" placeholders, Paragraph 0034; in this manner the request would include a divided plurality of modified requests: one in which a placeholder is provided for a name request and one in which a placeholder is provided for an address request; see the processing flow of Fig. 2, where the modified requests are provided to an LLM to generate a conversational reply).
With respect to Claim 14, Kesarwani discloses:
A method comprising:
receiving data indicative of a first request, wherein the data comprises sensitive information (receiving a conversational input from a user that can take the form of a query or request including PII/sensitive information, Paragraphs 0026, 0029-0030, and 0034);
transforming at least a portion of the data indicative of the first request into a second request based on replacing at least one portion of the sensitive information with generic information (operations of a PII masking component that detects "any personally identifiable information (PII) contained within the received conversational user input...and replace[s] any PII terms...with generic nouns or other placeholders," Paragraph 0034; Fig. 2, Element 214);
based on sending the second request to a machine learning model, receiving data indicative of a response to the first request, wherein the data indicative of the response to the first request comprises at least one portion of the generic information (after PII is masked, the machine learning model (i.e., a large language model (LLM)) generates a response using the masked conversational input that does not access the PII/sensitive information, Paragraphs 0034 and 0038; see also the processing flow shown in Fig. 2, where PII is masked prior to prompt generation to an LLM and the LLM generates a response prior to having PII terms re-inserted via augmentation at a response generator; receiving a "conversational response" from the LLM that includes generic structure, intent, etc. and does not include the PII that has been masked with "semantically consistent" placeholders, Paragraphs 0034 and 0038); and
generating the response to the first request based on replacing the at least one portion of the generic information with the at least one portion of the sensitive information (NLP techniques are used to augment the response to replacing the generic response with the PII information while maintaining similar syntax, intent, etc., Paragraph 0038; Fig. 2, Element 222).
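For illustration of the final limitation of claim 14 as mapped above, the re-insertion of sensitive information into the model's generic response can be sketched as follows. This is a hypothetical sketch, not the augmentation technique of Kesarwani (which is described as using NLP techniques); it assumes a locally held placeholder-to-value mapping like the one produced at masking time:

```python
def restore_response(response: str, mapping: dict) -> str:
    """Replace each generic placeholder in the model's response with the
    corresponding portion of the sensitive information held locally."""
    for placeholder, sensitive_value in mapping.items():
        response = response.replace(placeholder, sensitive_value)
    return response

mapping = {"[NAME_0]": "Jane Doe"}
restore_response("Dear [NAME_0], your bill is ready.", mapping)
# → "Dear Jane Doe, your bill is ready."
```

Because the mapping never leaves the local system, the model produces the response structure while the sensitive values are restored only after generation.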
With respect to Claim 15, Kesarwani further discloses:
The method of claim 14, wherein the machine learning model is configured to generate the data indicative of the response to the first request without accessing the at least one portion of the sensitive information (after PII is masked, the machine learning model (i.e., a large language model (LLM)) generates a response using the masked conversational input that does not access the PII/sensitive information, Paragraphs 0034 and 0038; see also the processing flow shown in Fig. 2, where PII is masked prior to prompt generation to an LLM and the LLM generates a response prior to having PII terms re-inserted via augmentation at a response generator).
With respect to Claim 16, Kesarwani further discloses:
The method of claim 14, wherein the data indicative of the first request (Paragraph 0033- "input text data corresponding to a conversational input") and the data indicative of the response to the first request comprise text (Paragraph 0039- "textual content of the automated response"), and wherein the machine learning model comprises a large language model (LLM) (LLM, Paragraph 0033).
With respect to Claim 17, Kesarwani further discloses:
The method of claim 14, further comprising:
dividing the second request into a plurality of second requests, wherein sending the second request to the machine learning model comprises sending the plurality of second requests to the machine learning model ("any PII terms," such as a name and an address, are replaced with "semantically consistent" placeholders, Paragraph 0034; in this manner the request would include a divided plurality of modified requests: one in which a placeholder is provided for a name request and one in which a placeholder is provided for an address request; see the processing flow of Fig. 2, where the modified requests are provided to an LLM to generate a conversational reply).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6-8, 11-13, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kesarwani, et al. in view of Summers, et al. (U.S. PG Publication: 2021/0004485 A1).
With respect to Claim 6, Kesarwani discloses the conversational system utilizing an LLM that replaces sensitive PII with a semantically consistent placeholder, as applied to Claim 1. Kesarwani, however, does not teach the determination of a score of a modified request indicating an amount of the sensitive information. Summers discloses a confidence score indicative of the amount of unmasked PII in a particular data instance that can be used to identify/recover an original entity (Paragraphs 0019-0020).
Kesarwani and Summers are analogous art because they are from a similar field of endeavor: text analysis with respect to sensitive terms. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to utilize the confidence scoring of PII/sensitive data taught by Summers in the PII removal process taught by Kesarwani to provide the predictable result of better ensuring that enough data has been removed to prevent recovery of a target entity (Summers, Paragraph 0019).
With respect to Claim 7, Summers further discloses:
The method of claim 6, further comprising adding obfuscation information into the modified request based on determining that the score associated with the modified request does not satisfy a threshold, wherein the score does not satisfy the threshold if the amount of the sensitive information associated with the modified request is greater than a target level of sensitive information (masking/redactions/obfuscations are added in an iterative process when a confidence score threshold related to excessive sensitive data is exceeded, Paragraphs 0019-0020 and 0030; note that input in the form of the request is taught by Kesarwani).
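For illustration of the iterative obfuscation mapped above (adding obfuscation when the score does not satisfy the threshold), a minimal sketch follows. It is not Summers' confidence-scoring process; the redaction token, loop bound, and counting scheme are all assumptions:

```python
def mask_until_safe(request: str, sensitive_terms, threshold: int,
                    max_rounds: int = 10) -> str:
    """Iteratively add obfuscation (here: redact one remaining sensitive
    term per round) until the sensitivity score satisfies the threshold."""
    for _ in range(max_rounds):
        score = sum(request.count(t) for t in sensitive_terms)
        if score <= threshold:  # threshold satisfied: OK to send
            return request
        for term in sensitive_terms:
            if term in request:
                request = request.replace(term, "[REDACTED]", 1)
                break  # one obfuscation added per round
    return request

mask_until_safe("SSN 123-45-6789 for John", ["123-45-6789", "John"], 0)
# → "SSN [REDACTED] for [REDACTED]"
```

Each pass either satisfies the threshold (claim 8's branch, where the request may be sent) or adds further obfuscation (claim 7's branch), mirroring the iterative process cited from Summers.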
With respect to Claim 8, Summers further discloses:
The method of claim 6, wherein sending the modified request to the machine learning model is based on determining that the score associated with the modified request satisfies a threshold, wherein the score satisfies the threshold if the amount of the sensitive information associated with the modified request is less than or equal to a target level of sensitive information (the threshold for the confidence value is satisfied when the score "falls below the confidence threshold," Paragraphs 0019 and 0030; note that input in the form of the request is taught by Kesarwani once masking processing is complete).
Claims 11-13 contain subject matter respectively similar to that of Claims 6-8 and, thus, are rejected under similar rationale.
Claims 18-20 contain subject matter respectively similar to that of Claims 6-8 and, thus, are rejected under similar rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Spencer, et al. (U.S. PG Publication: 2025/0307418 A1)- teaches redaction of PII within an LLM prompt followed by unredaction after response generation (Paragraph 0011).
Levin, et al. (U.S. PG Publication: 2025/0322216 A1)- teaches breach mitigation by removing sensitive data from an LLM prompt and replacing it with alternative, non-sensitive tokens (Paragraph 0093).
Usyuzhanin et al. (U.S. PG Publication: 2025/0317297 A1)- teaches the generation of an LLM output including secret tokens for confidential information and replacing a secret token with a value of the private data in a reply (Paragraphs 0115 and 0118-0119).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES S WOZNIAK whose telephone number is (571)272-7632. The examiner can normally be reached 7-3, off alternate Fridays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant may use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders can be reached at (571)272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JAMES S. WOZNIAK
Primary Examiner
Art Unit 2655
/JAMES S WOZNIAK/Primary Examiner, Art Unit 2655