Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The specification filed on September 23, 2024, is accepted.
Drawings
New corrected drawings in compliance with 37 CFR 1.121(d) are required in this application because Figs. 3, 5, 8, 9, 10, 12, and 13 are illegible. Please submit legible copies of the noted figures. Applicant is advised to employ the services of a competent patent draftsperson outside the Office, as the U.S. Patent and Trademark Office no longer prepares new drawings. The corrected drawings are required in reply to the Office action to avoid abandonment of the application. The requirement for corrected drawings will not be held in abeyance.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/23/2024 was filed on the filing date of application No. 18/892,952. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claim 19 is objected to because of the following informalities:
Claim 19 recites "a non-transitory computer-readable medium" and then recites "…………..the method comprising". The examiner suggests that the claim read: "a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a computer-based processor, cause a computer to implement a method to control disclosure of sensitive information in a document". Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites the limitation "an application executing on the endpoint device". There is insufficient antecedent basis for this limitation in the claim.
Dependent claims 2-14 are also rejected under the same rationale as set forth above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-13 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Gnanasekaran et al. (hereinafter Gnanasekaran) (US 20250131126) in view of Koretz (US 6978378).
Regarding claim 1, Gnanasekaran teaches a computer-implemented method of controlling disclosure of sensitive information in a document on a computer application that is configured to be able to provide in-line access to a tool hosted at a remote network location from within the document, the method comprising: (Gnanasekaran on [0002] teaches systems and methods to provide a tool that regulates information sent to a generative AI model (i.e., in-line access to a tool) hosted outside of the backend computer system 102, as shown in Fig 1);
(Gnanasekaran on [0034-0035] teaches an AI gateway tool (i.e., application executed in backend computer server 102) to facilitate in-line access to generative AI model. Further teaches the AI gateway tool reads the prompt (i.e., document in view of [0030]) to identify PII information within the prompt. See on [0051] wherein the prompt is real-time driver license);
wherein the application executing on the endpoint device is configured to provide in-line access to a tool hosted at a remote network location from within the document when such in-line access is enabled in the application (Gnanasekaran on [0030] teaches a computer system that monitors and selectively permits the egress of PII from an enterprise to a generative AI language learning model (LLM) i.e., (enabling in-line access to generative AI tool remote from backend computer 102). See on [0034-0035] teaches an AI gateway tool to facilitate in-line access to generative AI model. Further teaches the AI gateway tool 122 may control access to generative AI models (LLMs), such that only approved LLMs 134 may receive the prompt);
obtaining text from the open document (Gnanasekaran on [0035-0036] teaches obtaining text from prompt i.e., document in view of [0030]. See on [0048] teaches the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125, the text component 127);
determining, with a computer-implemented scanning engine, whether the text from the open document contains sensitive information based, at least in part, on content of the text (Gnanasekaran on [0036] teaches the image component 125 and the text component 127 may identify the PII information in the prompt. The image component 125 and the text component 127 may also sanitize (e.g., hide/remove) the PII information in the prompt before the prompt is sent to the LLM. See on [0048] teaches the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125, the text component 127);
and disabling at the application the in-line access to the tool in response to determining that the content of the text from the open document contains sensitive information (Gnanasekaran Fig 2 blocks S212, S214, Fig 4 block 402 and text on [0048] teaches the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125 and the text component 127; the image component 125 and the text component 127 may analyze the prompt via an internal ML model to identify PII or confidential information, or may access an external service to identify PII, so the image component 125 and the text component 127 may determine the PII status. In a case it is determined at S212 that the PII status is "Contains PII", the method proceeds to S214 and a "Contains PII" output 402 (FIG. 4) is returned to the display in the Vetted Response field 306 and the "Send to AI" icon 316 is greyed out, i.e., in-line access to the generative AI model is disabled. See also [0065], which teaches preventing transmission of PII and confidential information to the generative AI model).
Gnanasekaran teaches generating an alert when the user attempts to transmit sensitive information, but fails to explicitly teach receiving a notification, at a computer-implemented endpoint agent, that a document has been opened in an application executing on the endpoint device. However, Koretz, from analogous art, teaches
receiving a notification, at a computer-implemented endpoint agent, that a document has been opened in an application executing on the endpoint device (Koretz on [col 4 line 10-15] teaches notify the sender when a recipient opens a sent file or document).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the teaching of Koretz into the teaching of Gnanasekaran by sending a notification to the user regarding the document. One would be motivated to do so in order to inform the sender that the document has been opened by the recipient, thereby monitoring and preventing unauthorized opening of the sender's sensitive document (Koretz [col 3 line 60-67 and col 4 line 1-15]).
Regarding claim 15, Gnanasekaran teaches a system comprising: (Gnanasekaran on [0002] teaches systems and methods to provide a tool that regulates information sent to a generative AI model (i.e., in-line access to a tool) hosted outside of the backend computer system 102, as shown in Fig 1);
a computer comprising: (Gnanasekaran Fig 1 block 102 and text on [0004 and 0035] teaches computer comprising processor and memory)
a computer processor (Gnanasekaran Fig 1 block 102 and text on [0004 and 0035] teaches computer comprising processor and memory)
and computer-based memory operatively coupled to the computer processor (Gnanasekaran Fig 1 block 102 and text on [0004 and 0035] teaches computer comprising processor and memory coupled to processor);
wherein the computer-based memory stores computer-readable instructions that, when executed by the computer processor, cause the computer to (Gnanasekaran Fig 1 block 102 and text on [0004 and 0035] teaches computer comprising processor and memory coupled to processor storing instructions executed by the processor);
control disclosure of sensitive information in a document on a computer application that is configured to be able to provide in-line access to a tool hosted at a remote network destination from within the document by a method comprising (Gnanasekaran on [0035] teaches an AI gateway tool 122 to: analyze a prompt including the identification of any PII that may be contained in the prompt. In the case the prompt is PII-free (e.g., does not contain any PII), the AI gateway tool 122 may control access to generative AI models (LLMs), such that only approved LLMs 134 may receive the prompt);
(Gnanasekaran on [0034-0035] teaches an AI gateway tool (i.e., application executed in backend computer server 102) to facilitate in-line access to generative AI model. Further teaches the AI gateway tool reads the prompt (i.e., document in view of [0030]) to identify PII information within the prompt. See on [0051] wherein the prompt is real-time driver license);
wherein the application executing on the endpoint device is configured to provide in-line access to a tool hosted at a remote network location from within the document when such in-line access is enabled in the application (Gnanasekaran on [0030] teaches a computer system that monitors and selectively permits the egress of PII from an enterprise to a generative AI language learning model (LLM) i.e., (enabling in-line access to generative AI tool remote from backend computer 102). See on [0034-0035] teaches an AI gateway tool to facilitate in-line access to generative AI model. Further teaches the AI gateway tool 122 may control access to generative AI models (LLMs), such that only approved LLMs 134 may receive the prompt);
obtaining text from the open document (Gnanasekaran on [0035-0036] teaches obtaining text from prompt i.e., document in view of [0030]. See on [0048] teaches the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125, the text component 127);
determining, with a computer-implemented scanning engine, whether the text from the open document contains sensitive information based, at least in part, on content of the text (Gnanasekaran on [0036] teaches the image component 125 and the text component 127 may identify the PII information in the prompt. The image component 125 and the text component 127 may also sanitize (e.g., hide/remove) the PII information in the prompt before the prompt is sent to the LLM. See on [0048] teaches the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125, the text component 127);
and disabling at the application the in-line access to the tool in response to determining that the content of the text from the open document contains sensitive information (Gnanasekaran Fig 2 blocks S212, S214, Fig 4 block 402 and text on [0048] teaches the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125 and the text component 127; the image component 125 and the text component 127 may analyze the prompt via an internal ML model to identify PII or confidential information, or may access an external service to identify PII, so the image component 125 and the text component 127 may determine the PII status. In a case it is determined at S212 that the PII status is "Contains PII", the method proceeds to S214 and a "Contains PII" output 402 (FIG. 4) is returned to the display in the Vetted Response field 306 and the "Send to AI" icon 316 is greyed out, i.e., in-line access to the generative AI model is disabled. See also [0065], which teaches preventing transmission of PII and confidential information to the generative AI model).
Gnanasekaran teaches generating an alert when the user attempts to transmit sensitive information, but fails to explicitly teach receiving a notification, at a computer-implemented endpoint agent, that a document has been opened in an application executing on the endpoint device. However, Koretz, from analogous art, teaches
receiving a notification, at a computer-implemented endpoint agent, that a document has been opened in an application executing on the endpoint device (Koretz on [col 4 line 10-15] teaches notify the sender when a recipient opens a sent file or document).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the teaching of Koretz into the teaching of Gnanasekaran by sending a notification to the user regarding the document. One would be motivated to do so in order to inform the sender that the document has been opened by the recipient, thereby monitoring and preventing unauthorized opening of the sender's sensitive document (Koretz [col 3 line 60-67 and col 4 line 1-15]).
Regarding claims 2 and 18, the combination of Gnanasekaran and Koretz teaches all the limitations of claims 1 and 15, respectively. Gnanasekaran further teaches wherein the tool hosted at the remote network location comprises a generative artificial intelligence tool (Gnanasekaran on [0034-0035] teaches a generative AI model).
Regarding claim 3 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 2 above, Gnanasekaran further teaches further comprising: in response to disabling the in-line access to the generative artificial intelligence tool at the application, producing a notification, with the endpoint agent, that the in-line access to the generative artificial intelligence tool has been disabled (Gnanasekaran on [0037] teaches the event tracker component 129 may generate an alert in a case a user attempts to transmit PII or other confidential information to an LLM i.e., transmitting an alert when an attempt is made to transmit sensitive information and in-line access is disabled. See Fig 18 and text on [0079] teaches the alert display 1810 may include a message indicating that a particular user tried to send a payload with PII data to an LLM).
Regarding claim 4 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 2 above, Gnanasekaran further teaches further comprising: after disabling the in-line access to the generative artificial intelligence tool at the application, receiving one or more edits to the open document in the application without providing in-line access to the generative artificial intelligence tool within the application (Gnanasekaran on [0036 and 0050] teaches the image component 125 and the text component 127 may also sanitize (e.g., hide/remove) the PII information in the prompt before the prompt is sent to the LLM 134).
Regarding claim 6 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 2 above, Gnanasekaran further teaches further comprising: enabling, or not disabling, the in-line access to the generative artificial intelligence tool at the application in response to determining that the content of the text from the open document lacks sensitive information (Gnanasekaran Fig 2 block s212, s222 and text on [0053 and 0056] teaches transmit the prompt when it is determined that the prompt is PII free).
Regarding claim 7 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 6 above, Gnanasekaran further teaches receiving one or more edits to the open document in the application with the in-line access to the generative artificial intelligence tool enabled at the application (Gnanasekaran on [0036 and 0050] teaches the image component 125 and the text component 127 may also sanitize (e.g., hide/remove) the PII information in the prompt before the prompt is sent to the LLM 134).
Regarding claim 8 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 7 above, Gnanasekaran further teaches further comprising: transmitting a prompt from the endpoint device to the generative artificial intelligence tool (Gnanasekaran on [0056] teaches the prompt may be transmitted to the AI via selection of the “Send to AI” icon 316 in S222. The prompt may be transmitted via a suitable Application Programming Interface (API). Prior to transmission of the prompt, the AI gateway tool 122: may append any additional prompts, as described above, to the prompt for transmission to the LLM, and may append any formatting parameters for the response to the prompt. See on [0032] teaches These features (e.g., role, context, output format) may be attached to the user prompt/query via an AI gateway tool, prior to sending the query to the LLM. Embodiments provide for the creation and management of multiple versions of prompts with respect to various LLMs.).
Regarding claim 9 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 8 above, Gnanasekaran further teaches wherein the prompt includes contextual information from the open document in the application, wherein the contextual information includes, or is based on, at least a portion of the text in the open document (Gnanasekaran on [0056] teaches the prompt may be transmitted to the AI via selection of the “Send to AI” icon 316 in S222. The prompt may be transmitted via a suitable Application Programming Interface (API). Prior to transmission of the prompt, the AI gateway tool 122: may append any additional prompts, as described above, to the prompt for transmission to the LLM, and may append any formatting parameters for the response to the prompt. See on [0032] teaches These features (e.g., role, context, output format) may be attached to the user prompt/query via an AI gateway tool, prior to sending the query to the LLM. Embodiments provide for the creation and management of multiple versions of prompts with respect to various LLMs).
Regarding claim 10 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 9 above, Gnanasekaran further teaches wherein the generative artificial intelligence tool is configured to receive the prompt, create a response to the prompt based at least in part on the contextual information, and transmit the response back to the endpoint device (Gnanasekaran Fig 2 block S224 and text on [0056] teaches the LLM output/response 1902 may be received, via a suitable API, at the AI response user interface display 1900 in S224, as shown in FIG. 19. The AI response display 1900 may also include the prompt 1904 as sent to the LLM, and the selected LLM).
Regarding claim 11 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 10 above, Gnanasekaran further teaches wherein the application is configured to display the response within the document in the application (Gnanasekaran Fig 2 block S224 and text on [0056] teaches the LLM output/response 1902 may be received, via a suitable API, at the AI response user interface display 1900 in S224, as shown in FIG. 19. The AI response display 1900 may also include the prompt 1904 as sent to the LLM, and the selected LLM).
Regarding claim 12 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 8 above, Gnanasekaran further teaches wherein the prompt is transmitted either: automatically, without a specific direction from a human user to transmit the prompt, or in response to a specific direction to transmit the prompt entered into the document in the application by the human user (Gnanasekaran on [0046 and 0056] teaches the prompt may be transmitted by a user).
Regarding claim 13 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 8 above, Gnanasekaran further teaches wherein the computer-implemented endpoint agent is deployed on an endpoint device within an organization's private network, wherein the computer-implemented scanning engine is deployed on the endpoint device within the organization's private network (Gnanasekaran Fig 1 block 102, block 134 and text on [0035] teaches back-end application computer server and generative AI tool outside of backend computer. See on [0038] teaches the back-end application computer server 102 may also transmit (via a firewall) information (e.g., prompts) to LLMs 134 after being approved by the AI gateway tool 122. Note that the back-end computer device 102 implements local area network as private network owned by an enterprise or organization [0040 and 0060]);
and wherein the generative artificial intelligence tool is hosted by one or more servers at a network destination outside of the organization's private network (Gnanasekaran Fig 1 block 102, blocks 125, 127 and 129 and text on [0035-0037] teaches the event tracker component (i.e., the endpoint agent in the instant case) and the image and text components (i.e., the scanning engine) within the back-end computer. Note that the back-end computer device 102 implements a local area network as a private network owned by an enterprise or organization [0040 and 0060]).
Regarding claim 16 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 15 above, Gnanasekaran further teaches further comprising: one or more servers hosting the tool at the remote network destination, wherein the computer is an endpoint device within an organization's private network and wherein the one or more servers are at a remote network destination outside of the organization's private network (Gnanasekaran Fig 1 block 102, block 134 and text on [0035] teaches back-end application computer server and generative AI tool outside of backend computer. See on [0038] teaches the back-end application computer server 102 may also transmit (via a firewall) information (e.g., prompts) to LLMs 134 after being approved by the AI gateway tool 122. Note that the back-end computer device 102 implements local area network as private network owned by an enterprise or organization [0040 and 0060]);
wherein the computer-implemented endpoint agent and the computer-implemented scanning engine are deployed within the organization's private network (Gnanasekaran Fig 1 block 102, blocks 125, 127 and 129 and text on [0035-0037] teaches the event tracker component (i.e., the endpoint agent in the instant case) and the image and text components (i.e., the scanning engine) within the back-end computer. Note that the back-end computer device 102 implements a local area network as a private network owned by an enterprise or organization [0040 and 0060]).
Regarding claim 17 the combination of Gnanasekaran and Koretz teaches all the limitations of claim 16 above, Gnanasekaran further teaches further comprising: a firewall or other network security protection measures demarcating a barrier between the organization's private network and outside the organization's private network (Gnanasekaran [0038] teaches the back-end application computer server 102 may also exchange information with a remote user device 124 (e.g., via a firewall 126). The back-end application computer server 102 may also exchange information via communication links 128 (e.g., via communication port 130 that may include a firewall) to communicate with different systems. The back-end application computer server 102 may also transmit information directly to an email server, workflow application, and/or calendar application 132 to facilitate automated communications and/or other actions. The back-end application computer server 102 may also transmit (via a firewall) information (e.g., prompts) to LLMs 134 after being approved by the AI gateway tool 122).
Regarding claim 19, Gnanasekaran teaches a non-transitory computer readable medium having stored thereon computer-readable instructions that, when executed by a computer-based processor (Gnanasekaran on [0075] teaches a non-transitory computer-readable memory for storing instructions executed by a processor), cause a computer to control disclosure of sensitive information in a document on a computer application that is configured to be able to provide in-line access to a generative artificial intelligence tool from within the document, the method comprising (Gnanasekaran on [0035] teaches an AI gateway tool 122 to analyze a prompt, including the identification of any PII that may be contained in the prompt. In the case the prompt is PII-free (e.g., does not contain any PII), the AI gateway tool 122 may control access to generative AI models (LLMs), such that only approved LLMs 134 may receive the prompt);
(Gnanasekaran on [0034-0035] teaches an AI gateway tool (i.e., application executed in backend computer server 102) to facilitate in-line access to the generative AI model. Further teaches the AI gateway tool reads the prompt (i.e., document in view of [0030]) to identify PII information within the prompt. See [0051], wherein the prompt is a real-time driver license);
wherein the application executing on the computer is configured to provide in-line access to a generative artificial intelligence tool from within the document when such in-line access is enabled in the application (Gnanasekaran on [0030] teaches a computer system that monitors and selectively permits the egress of PII from an enterprise to a generative AI language learning model (LLM) i.e., enabling in-line access to generative AI tool. See on [0034-0035] teaches an AI gateway tool to facilitate in-line access to generative AI model. Further teaches the AI gateway tool 122 may control access to generative AI models (LLMs), such that only approved LLMs 134 may receive the prompt);
obtaining text from the open document (Gnanasekaran on [0035-0036] teaches obtaining text from prompt i.e., document in view of [0030]. See on [0048] teaches the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125, the text component 127);
determining, with a computer-implemented scanning engine, whether the text from the open document contains sensitive information based, at least in part, on content of the text (Gnanasekaran on [0036] teaches the image component 125 and the text component 127 (i.e., image and text component as scanning engine) may identify the PII information in the prompt. The image component 125 and the text component 127 may also sanitize (e.g., hide/remove) the PII information in the prompt before the prompt is sent to the LLM. See on [0048] teaches the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125, the text component 127);
and disabling the in-line access to the generative artificial intelligence tool at the application in response to determining that the content of the text from the open document contains sensitive information (Gnanasekaran Fig 2 blocks S212, S214, Fig 4 block 402 and text on [0048] teaches the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125 and the text component 127; the image component 125 and the text component 127 may analyze the prompt via an internal ML model to identify PII or confidential information, or may access an external service to identify PII, so the image component 125 and the text component 127 may determine the PII status. In a case it is determined at S212 that the PII status is "Contains PII", the method proceeds to S214 and a "Contains PII" output 402 (FIG. 4) is returned to the display in the Vetted Response field 306 and the "Send to AI" icon 316 is greyed out, i.e., in-line access to the generative AI model is disabled. See also [0065], which teaches preventing transmission of PII and confidential information to the generative AI model);
and wherein the computer is an endpoint device within an organization's private network, wherein the generative artificial intelligence tool is hosted on one or more servers at a remote network destination outside of the organization's private network (Gnanasekaran Fig 1 block 102, block 134 and text on [0035] teaches back-end application computer server and generative AI tool outside of backend computer. See on [0038] teaches the back-end application computer server 102 may also transmit (via a firewall) information (e.g., prompts) to LLMs 134 after being approved by the AI gateway tool 122. Note that the back-end computer device 102 implements local area network as private network owned by an enterprise or organization [0040 and 0060]);
and wherein the computer-implemented endpoint agent and the computer-implemented scanning engine are deployed within the organization's private network (Gnanasekaran Fig 1 block 102, blocks 125, 127 and 129 and text on [0035-0037] teaches the event tracker component (i.e., the endpoint agent in the instant case) and the image and text components (i.e., the scanning engine) within the back-end computer. Note that the back-end computer device 102 implements a local area network as a private network owned by an enterprise or organization [0040 and 0060]).
Gnanasekaran teaches generating an alert when the user attempts to transmit sensitive information, but fails to explicitly teach receiving a notification, at a computer-implemented endpoint agent, that a document has been opened in an application executing on the endpoint device. However, Koretz, from analogous art, teaches
receiving a notification, at a computer-implemented endpoint agent, that a document has been opened in an application executing on the endpoint device (Koretz on [col 4 line 10-15] teaches notify the sender when a recipient opens a sent file or document).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the teaching of Koretz into the teaching of Gnanasekaran by sending a notification to the user regarding the document. One would be motivated to do so in order to inform the sender that the document has been opened by the recipient, thereby monitoring and preventing unauthorized opening of the sender's sensitive document (Koretz [col 3 line 60-67 and col 4 line 1-15]).
Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Gnanasekaran et al. (US 20250131126) in view of Koretz (US 6978378), and further in view of Walton (US 20170004316).
Regarding claim 5, the combination of Gnanasekaran and Koretz teaches all the limitations of claim 4 above. Gnanasekaran further teaches further comprising: (Gnanasekaran Fig 2 blocks S212, S214, Fig 4 block 402 and text on [0048] teaches the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125 and the text component 127; the image component 125 and the text component 127 may analyze the prompt via an internal ML model to identify PII or confidential information, or may access an external service to identify PII, so the image component 125 and the text component 127 may determine the PII status. In a case it is determined at S212 that the PII status is "Contains PII", the method proceeds to S214 and a "Contains PII" output 402 (FIG. 4) is returned to the display in the Vetted Response field 306 and the "Send to AI" icon 316 is greyed out, i.e., in-line access to the generative AI model is disabled. See also [0065], which teaches preventing transmission of PII and confidential information to the generative AI model).
The combination fails to explicitly teach periodically obtaining up-to-date text from the open document. However, Walton, from analogous art, teaches periodically obtaining up-to-date text from the open document (Walton at [0003] teaches periodically checking the contents of the clipboard and eliminating any sensitive data).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the teaching of Walton into the combined teaching of Gnanasekaran and Koretz by periodically scanning the document for sensitive information. One would be motivated to do so in order to secure the sensitive information from exposure to unauthorized access (Walton [0003]).
Regarding claim 14, the combination of Gnanasekaran and Koretz teaches all the limitations of claim 7 above. Gnanasekaran further teaches:
determining, with the computer-implemented scanning engine, whether the up-to-date text contains sensitive information; and disabling the in-line access to the generative artificial intelligence tool at the application in response to determining that the up-to-date text contains sensitive information; but leaving the in-line access to the generative artificial intelligence tool enabled at the application in response to determining that the up-to-date text lacks sensitive information (Gnanasekaran Fig. 2, blocks S212 and S214, Fig. 4, block 402, and text at [0048] teaches that the AI gateway tool 122 identifies PII and other confidential text in the prompt 304 via the image component 125 and the text component 127; the image component 125 and the text component 127 may analyze the prompt via an internal ML model to identify PII or confidential information, or may access an external service to identify PII, such that the image component 125 and the text component 127 may determine the PII status. In a case where it is determined at S212 that the PII status is “Contains PII”, the method proceeds to S214, a “Contains PII” output 402 (Fig. 4) is returned to the display in the Vetted Response field 306, and the “Send to AI” icon 316 is greyed out, i.e., in-line access to the generative AI model is disabled. See also [0065], which teaches preventing PII and confidential information from reaching the generative AI model).
The combination fails to explicitly teach periodically obtaining up-to-date text from the open document. However, Walton, from analogous art, teaches periodically obtaining up-to-date text from the open document (Walton at [0003] teaches periodically checking the contents of the clipboard and eliminating any sensitive data).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the teaching of Walton into the combined teaching of Gnanasekaran and Koretz by periodically scanning the document for sensitive information. One would be motivated to do so in order to secure the sensitive information from exposure to unauthorized access (Walton [0003]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
CZERKIES et al (US 20250371261) is directed towards methods and systems for managing sensitive data. Data indicative of a request may be received. The data may comprise sensitive information, such as information that a user does not want a machine learning model to access. The data may be transformed into a modified request based on replacing at least one portion of the sensitive information with generic information. A response to the request may be generated based on sending the modified request to the machine learning model. The machine learning model may be configured to generate data indicative of the response to the request without accessing the sensitive information.
Salim et al (US 20250278574) is directed towards systems and methods for perspective-based validation of prompts to generative artificial intelligence. More specifically, the present disclosure relates to, but is not limited to, generating prompts that cause a generative artificial intelligence (AI) to generate accurate output, and to provide one or more suggested prompts that have been validated to provide an accurate output.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOEEN KHAN, whose telephone number is (571) 272-3522. The examiner can normally be reached 7 AM-5 PM EST, Monday-Thursday and alternate Fridays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shewaye Gelagay, can be reached at (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOEEN KHAN/Primary Examiner, Art Unit 2436