DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see Remarks, filed 11/03/2025, with respect to the rejection(s) of independent claims 1, 9 and 17 under 35 USC § 103 have been fully considered but are not persuasive.
With regard to claim 1, the Applicant's amendment necessitated the 112(a) rejection presented in this Office action, and the rejection of the independent claims is maintained over the existing prior art. The newly introduced claims, claims 24-28, have also been rejected based on newly found prior art.
With regard to the rejection of claims 9 and 17, the Applicant on page 4/7 argues, “the proposed modification is inconsistent with Kwok’s operation and would render it unsatisfactory for its intended purpose. Kwok is directed to detecting and protecting sensitive information in real time as it is entered into an electronic form field (see, e.g., Kwok ¶¶ [0003]-[0005]). Incorporating Korpal’s downstream tokenization gateway into Kwok would permit sensitive data to traverse the application layer in cleartext, directly undermining Kwok’s immediate protection model.” The Examiner respectfully disagrees. Korpal does not permit sensitive data to traverse the application layer in cleartext; rather, as indicated in the most recent Non-Final Office Action, paragraphs [0029]-[0030] of Korpal state, “… the tokenization gateway can tokenize the clear text data … and send the tokenized data and the authorization identifier to the application server … [0030] … tokenized data can be passed between devices in a network so that, in some cases, only the tokenized data can be stored in databases or flat files of various network devices.” Thus, it is the tokenized data, not the clear text, that leaves the tokenization gateway for the application server. Hence, modifying the teachings of the combination of Kwok, Mohaisen, Lucas and Hakim to incorporate the functionality of the tokenization gateway of Korpal would enable the combined system to tokenize clear text data and transmit only the tokenized data from the tokenization gateway. Therefore, the Examiner does not find the Applicant’s argument persuasive, and the rejection of claim 9, and its dependent claims, is maintained. Claim 17 recites substantially the same limitations as claim 9 in the form of a non-transitory computer-readable media for storing computer instructions. Therefore, claim 17, and its dependent claims, are rejected by the same rationale.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-6, 21-22 and 24-25 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Amended claim 1 recites the limitation “tokenizing the sensitive data such that the sensitive data is tokenized prior to being transmitted from the active user interface.” However, the instant Application’s Specification in paragraph [0091] teaches that the “tokenizing” is performed prior to transmitting from either the API gateway or the router, and that only “redacting” of the sensitive data is performed prior to transmitting from the active user interface: “the masking technique comprises tokenizing the sensitive data prior to transmitting the sensitive data from the router 134A or 134B. In some embodiments, the masking technique comprises redacting the sensitive data from display (e.g., on active user interface 104) on a user device. For example, the redaction may occur in real-time as a user is typing into the sensitive field.” Therefore, there is no support for “tokenizing” the sensitive data prior to transmitting from the active user interface, and thus the claims are rejected as failing to comply with the written description requirement. To expedite prosecution, the Examiner interprets the limitation as the “tokenizing” being performed prior to transmitting from the API gateway, as noted in paragraph [0091] of the Applicant’s Specification.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 are rejected under 35 U.S.C. 103 as being unpatentable over US-PGPUB No. 2024/0061952 A1 to Kwok et al. (hereinafter “Kwok”), US-PGPUB No. 2016/0359831 A1 to Berlin et al. (hereinafter “Berlin”), and further in view of US-PGPUB No. 2020/0034832 A1 to Korpal et al. (hereinafter “Korpal”)
Regarding claim 1:
Kwok discloses:
A system (see Fig. 2, Sensitive Information Protection System 200) comprising:
at least one machine learning model (see Fig. 2, Machine Learning Model 202);
one or more processors (see Fig. 2, Processor 212); and
computer memory (see Fig. 2, Memory 214) storing computer-usable instructions (¶21: “a memory device containing instructions,”) that, when executed by the one or more processors, perform operations comprising:
detecting, by using the at least one machine learning model (¶05: “… invoking a machine learning model to detect entry of personal data in an electronic form field, …”), an entry of sensitive data (¶04: “… detecting entry of sensitive data in an electronic form field in real time, …”) […] on an active user interface (¶17: “… client enters information into an electronic form, …”);
However, Kwok does not explicitly disclose the following limitations taught by Berlin:
[…] at a sensitive field (Berlin, ¶211: “… the text entry field 608 (e.g., credential/password field)”)
based on detecting the entry of the sensitive data at the sensitive field, redacting at least a portion of the sensitive data as the sensitive data is being received on the active user interface (Berlin, ¶211: “… while the mark-up language for a webpage designates a text field as having the PASSWORD attribute (indicating that the value is to be obscured as it is being entered), such fields are known to be used for purposes other than credential entry (e.g., for entry of sensitive data such as birth dates)”);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Kwok to incorporate the functionality of the mark-up language for a web page to obscure sensitive data as it is being entered, as disclosed by Berlin; such modification would allow the system to mask sensitive information while it is being entered into form fields so that it would not be visible to malicious actors.
The combination of Kwok and Berlin does not explicitly disclose the following limitation taught by Korpal:
tokenizing the sensitive data received such that the sensitive data is tokenized prior to being transmitted from the active user interface (Korpal, ¶29-30: “… the tokenization gateway can tokenize the clear text data … and send the tokenized data and the authorization identifier to the application server … [0030] … tokenized data can be passed between devices in a network so that, in some cases, only the tokenized data can be stored in databases or flat files of various network devices.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok and Berlin to incorporate the functionality of the tokenization gateway to tokenize clear text data and send the tokenized data to an application server, wherein the tokenized data can be passed between devices in a network, as disclosed by Korpal; such modification would enable the system to reduce, minimize, and/or obviate unauthorized access to the clear text data by way of hacking or attacking.
Regarding claim 2:
The combination of Kwok, Berlin and Korpal discloses:
The system of claim 1, further comprising detecting the entry of the sensitive data at the sensitive field using natural language processing (Kwok, ¶04: “… invoking natural language processing to detect the entry of sensitive data.”, see Fig. 2, Natural Language Processing Logic 216).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Berlin, Korpal, and further in view of US-PGPUB No. 2023/0403256 A1 to Sheedy et al. (hereinafter “Sheedy”)
Regarding claim 3:
The combination of Kwok, Berlin and Korpal discloses the system of claim 1, but does not explicitly disclose the following limitation taught by Sheedy:
further comprising determining that a website corresponding to the active user interface and the sensitive field is malicious by comparing a URL and letterhead associated with the website (Sheedy, ¶72-73: “AI engine 134 may compare the header data to the URL. The header data may correspond to the URL if the objective of the header data is similar to the concept addressed in the URL. … If, at step 214, AI engine 134 determines that the header data does not correspond to the URL, then, at step 215a, AI engine 134 may flag the requested webpage as containing malicious, or potentially malicious, data that may jeopardize sensitive enterprise organization data.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Berlin and Korpal to incorporate the functionality of the method to determine whether the header data within the source code corresponds to the URL associated with the requested webpage, as disclosed by Sheedy; such modification would allow the system to determine whether a URI associated with a webpage has been re-written in an attempt to access malicious data that may jeopardize sensitive enterprise organization data.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Berlin, Korpal, Sheedy, and further in view of USPAT No. 11470113 B1 to Orhan et al. (hereinafter “Orhan”)
Regarding claim 4:
The combination of Kwok, Berlin, Korpal and Sheedy discloses the system of claim 3, but does not explicitly disclose the following limitation taught by Orhan:
further comprising preventing, [by a network router] (Orhan, col 2, lines 40-41: “a data deception layer”) and based on determining that the website is malicious (Orhan, col 2, line 62: “… the website is malicious/phishing …”), transmission of the sensitive data prior to transmitting the sensitive data from an application programming interface gateway (Orhan, col 2, lines 60-63: “The method then blocks the website in the event or situations where the URL is found in a blacklist; and informs the user that the website is malicious/phishing in cases where the URL is found in blacklist.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Berlin, Korpal and Sheedy to incorporate the functionality of the method to eliminate data theft through a phishing website by deploying a data deception layer in a network to track submit activity of a browser initiated by a user, as disclosed by Orhan; such modification would allow the system to prevent the submission of sensitive information to a malicious website.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Berlin, Korpal, Sheedy, Orhan, and further in view of USPAT No. 10474836 B1 to Cieslak et al. (hereinafter “Cieslak”)
Regarding claim 5:
The combination of Kwok, Berlin, Korpal, Sheedy and Orhan discloses the system of claim 4, but does not explicitly disclose the following limitation taught by Cieslak:
further comprising transmitting an alert to a user device (Cieslak, col 1, lines 59-61: “… transmit, … an alert to the user computing device,”) corresponding to the entry of the sensitive data at the sensitive field (Cieslak, col 1, lines 44-55: “… requested content includes at least one field into which the user may input sensitive information, … receive, by the network interface, a user input, the user input containing sensitive information regarding the user.”), wherein the alert indicates that the website is malicious (Cieslak, col 1, lines 61-62: “… the alert informing the user that the network destination from which the content was requested is illegitimate.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Berlin, Korpal, Sheedy and Orhan to incorporate the functionality of the method to transmit an alert to a user computing device when a user inputs sensitive information into the user computing device, informing the user that the network destination from which the content was requested is illegitimate, as disclosed by Cieslak; such modification would allow the system to prevent disclosure of sensitive information to illegitimate websites and to take corrective actions as needed.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Berlin, Korpal, and further in view of US-PGPUB No. 2020/0053059 A1 to Huang et al. (hereinafter “Huang”)
Regarding claim 6:
The combination of Kwok, Berlin and Korpal discloses the system of claim 1, but does not explicitly disclose the following limitation taught by Huang:
further comprising:
causing the tokenized sensitive data to be transmitted to a first endpoint within a network (see Fig. 1, Internet 50) that includes the network layer (Huang, ¶62: “… The encryption module 320 may then send … the encrypted sensitive information to a connector (e.g., first connector 310A).”, see Fig. 3);
causing the tokenized sensitive data to be transmitted from the first endpoint to a second endpoint (Huang, ¶73: “… the first connector (Fig. 3, connector 310A) may transmit the encrypted sensitive information to the second connector (Fig. 3, connector 310B) (412).”) within the network (Huang, see Fig. 3, Enterprise Network).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Berlin and Korpal to incorporate the functionality of the method to transmit encrypted sensitive information from a first connector to a second connector within a network, as disclosed by Huang; such modification would allow the system to share sensitive information between trusted devices within a network.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Berlin, Korpal, and further in view of US-PGPUB No. 2019/0019154 A1 to Girdhar
Regarding claim 21:
The combination of Kwok, Berlin and Korpal discloses the system of claim 1, but does not explicitly disclose the following limitation taught by Girdhar:
further comprising validating, based on user permissions, a request to un-redact at least a portion of the sensitive data (Girdhar, ¶37: “… the sensitive content may be unmasked in response to a user request and re-authentication of first user 106.”); and
based on the validation, un-redacting the at least portion of the sensitive data for display on the active user interface (Girdhar, ¶36-37: “… the unmasking of the sensitive content may occur when first user 106 requests that the sensitive content be displayed … the sensitive content may be unmasked in response to a user request and re-authentication of first user 106.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Berlin and Korpal to incorporate the functionality of the method to unmask sensitive content when a client device becomes more secure or when a user and/or the client device is authenticated by an enterprise, as disclosed by Girdhar; such modification would allow the system to display the sensitive content, making it readable to the user in a secured environment.
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Berlin, Korpal, and further in view of US-PGPUB No. 2009/0208142 A1 to Treadwell et al. (hereinafter “Treadwell”)
Regarding claim 22:
The combination of Kwok, Berlin and Korpal discloses the system of claim 1, but does not explicitly disclose the following limitation taught by Treadwell:
wherein the sensitive data is detected based on listening to keystrokes on the active user interface and identifying, by the at least one machine learning model, a pattern of the keystrokes (Treadwell, ¶34: “… the add-in can be "always on" and monitor substantially every keystroke by examining the syntactical typography for confidential information and/or expressions that following patterns that typically include confidential information.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Berlin and Korpal to incorporate the functionality of the document monitor add-in to monitor every keystroke by examining the syntactical typography for confidential information, as disclosed by Treadwell, and train the machine learning model of Kwok to accomplish the same; such modification would allow the system to identify the entry of sensitive data by monitoring keystrokes and identifying patterns that match confidential information, and to take appropriate action to secure the sensitive data (see Fig. 4 of Treadwell, step 408, “correction/highlight/redaction”).
Claims 9, 12 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Kwok, US-PGPUB No. 2014/0337460 A1 to Mohaisen, US-PGPUB No. 2021/0304143 A1 to Lucas, US-PGPUB No. 2023/0421562 A1 to Hakim et al. (hereinafter “Hakim”), and further in view of Korpal
Regarding claim 9:
Kwok discloses:
A computerized method for centralized and decentralized data protection (¶67: “… method 600 for protecting sensitive information may execute instructions on a processor that cause the processor to perform operations associated with the method.”), the method comprising:
detecting, […] that data being received […] includes sensitive data (¶04: “detecting entry of sensitive data in an electronic form field”);
and based on detecting that the data being received […] includes the sensitive data, applying, […], a masking technique to at least a portion of the sensitive data prior to transmitting the sensitive data […] to [… a layer] (¶42: “when an agent enters an SSN, they may receive a message that they typed in an SSN, and the SSN is then redacted. For example, all of the SSN numbers may be replaced with asterisks.”);
Kwok does not explicitly disclose the following limitation taught by Mohaisen:
detecting, […] that data being received by a router includes sensitive data (Mohaisen, ¶37-38: “… a computing device (e.g. a router in an ICN) receives a request for sensitive content from a first user. … the computing device may determine that the content is sensitive content … when the content is eventually received from the content server.”);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Kwok to incorporate the functionality of the method to determine that content received at a computing device (router) is sensitive content, as disclosed by Mohaisen; such modification would enable the system to identify potentially sensitive content prior to transmitting to a network layer and to apply the appropriate protection to the sensitive content.
The combination of Kwok and Mohaisen does not explicitly disclose the following limitation taught by Lucas:
[detecting], […] by an application programming interface (API) gateway [….] that data being received [by a router] includes sensitive data (Lucas, ¶88: “API gateway 224 may also identify, using API contract data stored in data repository 222, that the “name” data field from an API definition of service 210 is associated with a “PII” data classification.”);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok and Mohaisen to incorporate the functionality of the method implementing an API gateway to identify that a data field is associated with a “PII” classification, as disclosed by Lucas; such modification would enable the system to inspect API calls in real time or asynchronously to ensure sensitive information is not improperly sent or leaked.
However, the combination of Kwok, Mohaisen and Lucas does not explicitly disclose the following limitation taught by Hakim:
[….] [using at least] a machine learning model trained using a plurality of application programming interface requests (Hakim, ¶98-99: “The machine learning training module 163, … may train the machine learning model 400 using one or more training datasets 402 comprising information collected from received messages comprising API requests and/or from the API requests themselves. … to train the machine learning model 400, the gateway device 140 may send, … messages comprising API requests …”, ¶24: “The gateway device 140 may be an API gateway,”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Mohaisen and Lucas to substitute the gateway device with an API gateway, as taught by Hakim in paragraph [0024], and to incorporate the functionality of the machine learning training module to train the machine learning model using training datasets comprising a plurality of API requests, wherein the trained machine learning model determines the likelihood that a newly received API request is associated with potentially harmful activity, as disclosed by Hakim; such modification would enable the system to provide a centralized management point for API access, enabling features such as authentication, rate limiting, load balancing, and analytics on top of simple network connectivity, and would allow the system to identify an API request that attempts to access highly sensitive data among a plurality of API requests.
The combination of Kwok, Mohaisen, Lucas and Hakim does not explicitly disclose the following limitation taught by Korpal:
based on detecting that the data being received [by the router] includes the sensitive data, applying, [by the API gateway], a masking technique comprising tokenization to at least a portion of the sensitive data prior to transmitting the sensitive data [from the API gateway] to a network layer (Korpal, ¶29: “… the tokenization gateway can tokenize the clear text data used to complete a payment transaction, and send the tokenized data and the authorization identifier to the application server upon authorization and/or completion of the payment transaction.”, see Fig. 1, step 108, the tokenization gateway receives encrypted clear text data. Note: a payment transaction involves sensitive data);
and transmitting, [by the API gateway], at least the portion of the masked sensitive data to the network layer (Korpal, ¶29: “… the tokenization gateway can … send the tokenized data and the authorization identifier to the application server ….”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Mohaisen, Lucas and Hakim to incorporate the functionality of the tokenization gateway to tokenize clear text data and send the tokenized data to an application server, wherein the tokenized data can be passed between devices in a network, as disclosed by Korpal; such modification would enable the system to reduce, minimize, and/or obviate unauthorized access to the clear text data by way of hacking or attacking.
Regarding claim 12:
The combination of Kwok, Mohaisen, Lucas, Hakim and Korpal discloses:
The computerized method of claim 9, wherein the masking technique comprises redacting the sensitive data from display on an active user interface of a user device (Kwok, ¶42: “… when an agent enters an SSN, they may receive a message that they typed in an SSN, and the SSN is then redacted. For example, all of the SSN numbers may be replaced with asterisks.”).
Regarding claim 17:
Kwok discloses:
Non-transitory computer-readable media (see Fig. 5, Memory 512) having computer-usable instructions embodied thereon (¶62: “… a memory that stores instructions …”) that, when executed by a processor (see Fig. 5, Processor 508), perform operations for centralized and decentralized data protection (¶62: “… a memory that stores instructions that, when executed, cause the processor to perform the functionality of each component or logic.”), the operations comprising:
In addition to the above limitations, claim 17 recites substantially the same limitations as claim 9 in the form of a non-transitory computer-readable media for storing computer instructions. Therefore, it is rejected by the same rationale.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Mohaisen, Lucas, Hakim, Korpal, US-PGPUB No. 2023/0396438 A1 to Mahoney et al. (hereinafter “Mahoney”), and further in view of Sancheti
Regarding claim 13:
The combination of Kwok, Mohaisen, Lucas, Hakim and Korpal discloses:
The computerized method of claim 9, further comprising:
receiving the sensitive data from an application downloaded on at least one user device (Kwok, ¶31: “… the sensitive information protection system 220 receives a dataset associated with a user entering information into an electronic form where the information may contain sensitive information.”, ¶57: “The electronic device 506 includes a sensitive information input screen 510 … The sensitive information input screen 510 may be any suitable software (application) such as a website page, electronic form, or another display on the electronic device 506 for entering sensitive information.”),
The combination of Kwok, Mohaisen, Lucas, Hakim and Korpal does not explicitly disclose the following limitation taught by Mahoney:
detecting that the data being received by the router includes the sensitive data based on an application programming interface request, from the application, for a security token for encryption (Mahoney, ¶33-35: “… a given field or portion of an API call may be associated with a character string (e.g., “cipher”) that designates the field or portion as having been deemed sensitive, and requiring encryption. … the request includes the access token …”);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Mohaisen, Lucas, Hakim and Korpal to incorporate the functionality of the method to require one or more fields or portions that are deemed sensitive to be encrypted (masked) before being transmitted to a user application, as disclosed by Mahoney; such modification would allow the system to protect the information in the one or more fields or portions from being exposed to a third party.
The combination of Kwok, Mohaisen, Lucas, Hakim, Korpal and Mahoney does not explicitly disclose the following limitation taught by Sancheti:
and wherein the application is managed by a container orchestration platform (Sancheti, ¶27: “… a container command 113 issued to a user interface 209, such as the API 127, of a container orchestration platform 200. The container command 113 may correspond to (e.g., be designated for) a particular container 211 managed by the container orchestration platform 200.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Mohaisen, Lucas, Hakim, Korpal and Mahoney to incorporate the functionality of the method to implement a container orchestration platform integrated with a user interface, as disclosed by Sancheti; such modification would allow the system to manage containers and reduce the workload on users.
Claims 14 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Mohaisen, Lucas, Hakim, Korpal, Treadwell, and further in view of US-PGPUB No. 2023/0229246 A1 to Gerhard et al. (hereinafter “Gerhard”)
Regarding claim 14:
The combination of Kwok, Mohaisen, Lucas, Hakim and Korpal discloses the computerized method of claim 9, but does not explicitly disclose the following limitation taught by Treadwell:
wherein the sensitive data is detected based on listening to keystrokes on at least one user device and identifying, [via the machine learning model], a pattern of the keystrokes (Treadwell, ¶34: “… the add-in can be "always on" and monitor substantially every keystroke by examining the syntactical typography for confidential information and/or expressions that following patterns that typically include confidential information.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Mohaisen, Lucas, Hakim and Korpal to incorporate the functionality of the document monitor add-in to monitor every keystroke by examining the syntactical typography for confidential information, as disclosed by Treadwell, and train the machine learning model of Kwok to accomplish the same; such modification would allow the system to identify the entry of sensitive data by monitoring keystrokes and identifying patterns that match confidential information, and to take appropriate action to secure the sensitive data (see Fig. 4 of Treadwell, step 408, “correction/highlight/redaction”).
The combination of Kwok, Mohaisen, Lucas, Hakim, Korpal and Treadwell does not explicitly disclose the following limitation taught by Gerhard:
[identifying,] via the machine learning model, [a pattern of the keystrokes] (Gerhard, ¶128: “a machine learning model (e.g., a trained neural network) is applied to identify patterns in … keystroke dynamics of the user,”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Kwok, Mohaisen, Lucas, Hakim, Korpal and Treadwell to incorporate the functionality of the method to apply a machine learning model to identify patterns in keystroke dynamics of a user, as disclosed by Gerhard, and train the machine learning model of Kwok to accomplish the same; such modification would enable the system to detect deviations from a user's established typing pattern, potentially indicating unauthorized access or the entry of sensitive information.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Mohaisen, Lucas, Hakim, Korpal, and further in view of US-PGPUB No. 2012/0137368 A1 to Vanstone et al. (hereinafter “Vanstone”).
Regarding claim 18:
The combination of Kwok, Mohaisen, Lucas, Hakim and Korpal discloses the non-transitory computer-readable media of claim 17, but does not explicitly disclose the following limitation taught by Vanstone:
wherein the sensitive data is detected based on the entry of the sensitive data within a sensitive field of an active user interface (Vanstone, ¶101: “Content entered into either the credential field 820 or the password field 822 may be identified as protected information by the device 500.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Kwok, Mohaisen, Lucas, Hakim and Korpal to incorporate the functionality of the method to compare content field identifiers with a known list of identifiers and identify content fields that seek protected (sensitive) information from a user within a web page or an electronic form, as disclosed by Vanstone, wherein such modification would allow the system to detect malicious actors when a non-trusted application is executed within a user device, thus providing protection from phishing attacks.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Mohaisen, Lucas, Hakim, Korpal, and further in view of Mahoney.
Regarding claim 20:
The combination of Kwok, Mohaisen, Lucas, Hakim and Korpal discloses the non-transitory computer-readable media of claim 17, but does not explicitly teach the following limitation taught by Mahoney:
wherein the sensitive data is detected based on an application programming interface request, from an application corresponding to the user device, for a security token for encryption (Mahoney, ¶33-35: “… a given field or portion of an API call may be associated with a character string (e.g., “cipher”) that designates the field or portion as having been deemed sensitive, and requiring encryption. … the request includes the access token …”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Kwok, Mohaisen, Lucas, Hakim and Korpal to incorporate the functionality of the method to require one or more fields or portions that are deemed sensitive to be encrypted (masked) before being transmitted to a user application, as disclosed by Mahoney, wherein such modification would allow the system to protect the information in the one or more fields or portions from being exposed to a third party.
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Berlin, Korpal, and further in view of US-PGPUB No. 2016/0006760 A1 to Lala et al. (hereinafter “Lala”)
Regarding claim 24:
The combination of Kwok, Berlin and Korpal discloses the system of claim 1, but does not explicitly disclose the following limitation taught by Lala:
further comprising determining that a website corresponding to the active user interface is malicious based on an IP destination address not having access to an edge node of a network (Lala, ¶52: “The user may provide input at their electronic device 503 (such as a smart phone, tablet or laptop), or at another computing system via a physical keyboard 502. … The phishing prevention service 505 may be running as part of a browser, or as part of an operating system service, or as part of a web traffic monitoring service that monitors the user's interaction with internet websites 508. The phishing prevention service 505 may include a navigation blocker that blocks navigation to suspicious or known-bad websites, especially those determined by module 110 to have a mismatch between hyperlink display text and hyperlink destination.”), and preventing transmission of the sensitive data from a network router prior to transmitting the sensitive data from the application programming interface gateway (Lala, ¶53: “the sensitive information blocker 507 will prevent the sensitive information from being sent to the destination address, and may further notify the user that data loss to a suspected phishing web site was prevented.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Kwok, Berlin and Korpal to incorporate the functionality of the phishing prevention service 505 running as a web traffic monitoring service that monitors the user's interaction with internet websites 508, and to include a sensitive information blocker 507 that prevents sensitive information from being transmitted to other internet websites 508, as disclosed by Lala, wherein such modification would enable the system to detect phishing attacks and implement proper mitigation actions.
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Berlin, Korpal, and further in view of US-PGPUB No. 2020/0334381 A1 to Yarowsky et al. (hereinafter “Yarowsky”)
Regarding claim 25:
The combination of Kwok, Berlin and Korpal discloses the system of claim 1, but does not explicitly disclose the following limitation taught by Yarowsky:
wherein detecting the entry of the sensitive data comprises using natural language processing (Yarowsky, see Fig. 1B, Natural Language Processing 120) to detect a medical professional identifier having a prefix or suffix associated with an individual's name, wherein the prefix or suffix comprises at least one of ''Dr.'', "ID", or "RN" (Yarowsky, ¶68: “the system's best estimation of their sensitive information type based on the context patterns used for classification in module 170 (e.g. a likely first name identified based on its occurrence between a title (e.g. Dr.) and known/likely surname …”, see Fig. 1B, Module 170 is part of NLP 120).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Kwok, Berlin and Korpal to incorporate the functionality of the method to implement a natural language processor to identify entry of sensitive data by detecting titles of personnel, as disclosed by Yarowsky, wherein such modification improves cybersecurity by normalizing data, reducing noise, and enabling faster, more accurate identification of malicious patterns in unstructured text.
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Mohaisen, Lucas, Hakim, Korpal, US-PGPUB No. 2013/0067225 A1 to Shochet et al. (hereinafter “Shochet”), and further in view of US-PGPUB No. 2017/0346792 A1 to Nataros et al. (hereinafter “Nataros”)
Regarding claim 26:
The combination of Kwok, Mohaisen, Lucas, Hakim and Korpal discloses the computerized method of claim 9, but does not explicitly disclose the following limitation taught by Shochet:
further comprising:
detecting, by using the machine learning model, that sensitive data is being entered at a sensitive field within a user interface (Shochet, ¶118: “… once the users are entering a credit card number as part of the call description, the device will detect this sensitive element in the call description and will mask it according to the company policy.”);
encrypting one or more attributes of the sensitive data based on a type of sensitive data being encrypted (Shochet, ¶116: “… only highly sensitive parameters may be encrypted whereas sensitive parameters may be encrypted only if the level of confidence that the encryption will not cause any undesired effect is substantially high.”);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Kwok, Mohaisen, Lucas, Hakim and Korpal to incorporate the functionality of the device to detect an entry of a sensitive element, and mask it according to company policy, as disclosed by Shochet, wherein such modification secures PII/financial data in real time, maintains customer trust, reduces liability, and allows safe, authorized use of data for testing.
The combination of Kwok, Mohaisen, Lucas, Hakim, Korpal and Shochet does not explicitly disclose the following limitation taught by Nataros:
and requesting a security token from an enterprise token repository for encrypting the sensitive data prior to transmitting the sensitive data across a layer of a network (Nataros, ¶332: “the client front-end 302 will request a public key from the master authentication server 306. The client front-end 302 will then use the obtained public key to encrypt the client frontend's login credentials before transmitting them to the master authentication server 306.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Kwok, Mohaisen, Lucas, Hakim, Korpal and Shochet to incorporate the functionality of the client front-end to request a public key from a master authentication server and use it to encrypt login credentials before transmitting them to the master authentication server, as disclosed by Nataros, wherein such modification guarantees that sensitive data remains safe from unauthorized access during transit, supports authentication, and ensures data integrity.
Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Kwok, Mohaisen, Lucas, Hakim, Korpal, US-PGPUB No. 2013/0067225 A1 to Coviello et al. (hereinafter “Coviello”), and further in view of US-PGPUB No. 2022/0385647 A1 to Pabón et al. (hereinafter “Pabón”)
Regarding claim 27:
The combination of Kwok, Mohaisen, Lucas, Hakim and Korpal discloses the computerized method of claim 9, but does not explicitly disclose the following limitation taught by Coviello:
wherein the machine learning model is deployed on a Kubernetes master node that employs sensitive data detection rules to orchestrate worker nodes and pods of a Kubernetes cluster (Coviello, ¶21-23: “… microservices are encapsulated into containerized environments, typically orchestrated by Kubernetes, and deployed in a set of instances that varies with traffic behavior. … Kubernetes is an operating system capable of running modern applications across multiple clusters and infrastructures on cloud services and private data center environments. Kubernetes include two layers including of the head nodes and worker nodes.”),
wherein the worker nodes and pods are associated with a network provider (Coviello, ¶23: “The worker nodes act as the workhorses that run applications.”),
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Kwok, Mohaisen, Lucas, Hakim and Korpal to incorporate the functionality of the method to implement Kubernetes capable of running modern applications across multiple clusters and infrastructures on cloud services and private data center environments, as disclosed by Coviello, wherein such modification provides significant advantages in automation, scalability, reliability, and portability of containerized workloads.
The combination of Kwok, Mohaisen, Lucas, Hakim, Korpal and Coviello does not explicitly disclose the following limitation taught by Pabón:
and wherein the Kubernetes master node detects the sensitive data based on an input received at a sensitive field on an active user interface (Pabón, ¶262: “master node 404 may allow worker node 406 to access sensitive data such as application credentials, encryption keys, user tokens, etc.”, ¶287: “Master node 404 may also block worker node 406 from obtaining sensitive data such as application credentials, encryption keys, user tokens, etc.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Kwok, Mohaisen, Lucas, Hakim, Korpal and Coviello to incorporate the functionality of the master node to allow or block a worker node from accessing sensitive data, as disclosed by Pabón, wherein such modification provides critical security advantages by enforcing authentication, authorization, and network policies that regulate worker node access to sensitive data.
Allowable Subject Matter
Claim 28 is objected to as being dependent upon a rejected independent base claim 17, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With respect to claim 28, the combination of Kwok, Mohaisen, Lucas, Hakim and Korpal teaches the subject matter of claim 17, but fails to disclose the limitations of claim 28: receiving, at an API gateway or router, a request from a user interface to unmask at least a portion of the sensitive data; validating permissions based on accessing an active directory; receiving approval and a decryption token from an enterprise token repository in response to the approval; and permitting un-redaction of at least a portion of the sensitive data via an un-redact button on a navigation menu, wherein a record of the un-redaction is stored at an offline database. Thus, the combined subject matter disclosed by claims 17 and 28 is deemed allowable over the prior art of record.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHIAS HABTEGEORGIS whose telephone number is (571)272-1916. The examiner can normally be reached M-F 8am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William R. Korzuch can be reached on (571)272-7589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.H./Examiner, Art Unit 2491
/DANIEL B POTRATZ/Primary Examiner, Art Unit 2491