DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the communication filed on 06/25/2024.
Status of claims in the instant application:
Claims 1-20 are pending.
Election/Restrictions
No claim restrictions are warranted at the applicant’s initial time of filing.
Priority
The instant application claims priority benefit of “AUSTRALIA 2023902421 filed on 07/31/2023”. A certified copy of the priority document has been provided.
Information Disclosure Statement
No Information Disclosure Statement (IDS) has been filed by the Applicant.
Drawings
The drawings are objected to under 37 CFR 1.83(a) because they fail to show the details, legends, and text described in the specification.
The text, markings, and labels on the drawings are not legible.
Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Appropriate correction is required.
Specification
The abstract of the disclosure is objected to.
Applicant is reminded of the proper content of an abstract of the disclosure.
A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art.
If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives.
Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps.
Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length.
See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.
A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Claim Objections
Claims 6 and 13 are objected to because of the following informalities:
Claim 6 recites, “The method according to claim 1 wherein the subscribed analyst compromised data report is assessed structurally, grammatically, technically and genuity”.
Claim 13 recites, “… determining if the scrubbing routine/s are applicable to the redacted object data …”.
There appear to be grammatical errors in the claims above. Claim 6 should recite “analyst’s”, and claim 13 should recite “routines”.
Appropriate correction is required.
Claim Interpretation
No claim interpretation is warranted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Claim 4 recites, “The method according to claim 1 wherein the reward rules include defining a minimum predetermined level of trust, and/or skillset of the subscribing analysts.”
The use of “and/or” in the same claim limitation makes the claim language ambiguous/indefinite; “and” requires both terms, whereas “or” requires only one of them. Therefore, the claim is rejected as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Appropriate correction(s) required.
*** Note: For examination purposes the claim is interpreted as, “The method according to claim 1 wherein the reward rules include defining a minimum predetermined level of trust, [[and/]]or skillset of the subscribing analysts.”
Claims 1-15 and 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Claim 1 recites the limitation “preferably enriching the prepared owner data”
As claimed (written/recited), the above step of claim 1 is not required; it is merely preferable (i.e., optional). The language of the claim limitation makes the claim ambiguous/indefinite, and the claim is hence rejected as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
The dependent claims (2-15) are also rejected for the same reason, as they do not rectify the deficiency in the independent claim.
Claims 19-20 are also similarly rejected.
Appropriate correction is required.
*** Note: For examination purposes the limitation is interpreted as, “[[preferably]] enriching the prepared owner data”.
Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Claim 10 recites, “The method according to claim 9 including the step of converting the owner data into an object wherein the original data is an element”.
However, there is no prior recitation of “original data” providing antecedent basis, making the claim limitation ambiguous/indefinite; the claim is hence rejected as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Appropriate correction is required.
*** Note: For examination purposes the limitation is interpreted as, “The method according to claim 9 including the step of converting the owner data into an object wherein the [[original]] owner data is an element”.
Claim Rejections - 35 USC § 101
No claim rejection is warranted under 35 U.S.C. 101.
Double Patenting
No double patenting rejection is warranted.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pub. No.: US 20180268135 A1 to Nachenberg et al. (hereinafter “Nachenberg”) in view of Pub. No.: US 20220198059 A1 to Hatcher et al. (hereinafter “Hatcher”).
Regarding Claim 1. Nachenberg discloses A method of detecting compromised digital assets of a data owner (Nachenberg, Abstract, FIG. 2-3: … The subject matter of this specification generally relates to data security. In some implementations, a method includes receiving, from data owners, a first cryptographically secure representation of data to be monitored for data breaches …), the method comprising the steps of:
selecting a digital asset data of the data owner (Nachenberg, Para [0030]: … The secure data application 154 may generate a secure representation of less than all of a data owner's private data 152. For example, the data owner may select a representative set of its data for breach monitoring by the data breach detection system 110. In another example, the data owner may select certain types of data to be monitored such as credit card and customer data rather than purchase information …);
uploading selected owner data to a receiving system (Nachenberg, Para [0025-0028]: … the front-end server 112 can receive potentially stolen data 142 and/or secure representations 146 of the potentially stolen data 142 from data finders' computers 140 and secure representations 156 of private data 152 from data owners' computers 150 over the network 130 … The secure data application 154 may generate, as the secure representations, cryptographically secure representations of the private data 152 … the secure data application 154 may use one or more one-way cryptographic hash functions to map the private data 152 …);
identifying, by said receiving system, predetermined data types of the uploaded owner data (Nachenberg, Para [0031-0032]: … the secure data application 154 generates a secure representation of tuples of private data. As described above, the private data may be organized based on entity. The secure data application 154 may generate one or more tuples of data for each entity and generate the secure representation using the tuples. Each tuple can include one or more types of data and each tuple can include different types of data than each other tuple … As described in more detail below, if potentially stolen data includes the same tuples of data represented by the secure representation generated for a data owner, the data breach detection system 110 may determine that the data owner's data has been breached. In the credit card example, if the stolen data includes the credit card number, cardholder name, and cardholder billing address and/or the credit card number, expiration date, and cardholder name for at least a threshold number of credit cards, the data breach detection system 110 may determine that the credit card data owner's data has been breached …) and [performing on identified data types one or more selected sequentially from the group consisting of:
redacting owner specific data, scrubbing predetermined common data, and tokenising predetermined data;
assessing and certifying the accuracy of the redaction, scrubbing and tokenisation of the data and producing prepared owner data]; and
However, Nachenberg does not explicitly teach, but Hatcher from the same or similar field of endeavor teaches:
“performing on identified data types one or more selected sequentially from the group consisting of:
redacting owner specific data, scrubbing predetermined common data, and tokenising predetermined data (Hatcher, Abstract, Para [0018-0019, 0114, 0118]: … A computer-implemented method of restricting access to a data owner's data comprising the steps of storing a record associated with a data owner; receiving a request to protect data from the data owner; protecting the data by way of encryption, tokenization or other data protection mechanism … When a data owner initiates a transaction with a merchant/service provider, the security provider converts the data owner's personal identifiable information to format-preserving, smart tokens. The security provider transmits these tokens to the service provider, who then stores the tokens (not the actual personal identifiable information) in its database … The data owner uses a simple Allow/Not Allow toggle button in the user interface to trigger the tokenization process of their personal identifiable information. If the data owner selects “Not Allow,” her personal identifiable information is tokenized and masked from the service provider. While in this form, the tokenized personal identifiable information is not stored in un-tokenized form anywhere on the service provider's infrastructure. If an employee of the service provider attempts to look up the tokenized part of a data owner's record, he will only see tokens. Further, the security provider cannot view nor access the data owner's tokenized personal identifiable information. It is “forgotten.” As long as a data owner's unique Identifier is not present, the service provider may not de-tokenize (i.e., may not view) the consumer's “forgotten” personal identifiable information. 
Thus, once the data owner has performed the RtBF process using his or her unique Identifier, the service provider can no longer see the personal identifiable information … Data masking (also known as data scrambling and data anonymization) is the process of replacing sensitive information copied from production databases to test non-production databases with realistic, but scrubbed, data based on masking rules …);
assessing and certifying the accuracy of the redaction, scrubbing and tokenisation of the data and producing prepared owner data (Hatcher, Abstract, Para [0015]: … In some embodiments of the present invention, there is absolute confidence that personal identifiable information tokens, which the data owner can restrict access to, are unreadable to the service provider and the organization (e.g., merchant). In such embodiments, once the data owner toggles the “Right to be Remembered” button, all of the data owner's personal identifiable information and history are once again available to the service provider. If the data owner wants to be forgotten again, she can toggle the switch to the ‘Forgotten’ setting, and her personal identifiable information is rendered un-readable once more. All requests and actions are fully auditable …); ”
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hatcher into the teachings of Nachenberg, because it discloses that, “The present invention solves these challenges. It is frictionless and flexible. The data owner is empowered by the service provider (e.g., online merchant) to control access to her personal identifiable information, allowing the service provider to have access to personal identifiable information data only when it is necessary to complete a transaction. Through this partnership, the risk of inappropriate access or breach of personal identifiable information Data. The invention lowers the cost to the service provider and gives control and peace of mind to the data owner (Hatcher, Para [0014])”.
Nachenberg further discloses:
preferably enriching the prepared owner data (Nachenberg, Para [0074-0076]: … The system receives a set of private data (402). For example, a data owner may identify a set of private data for which the owner would like a breach detection system to monitor for data breaches. The set of private data may be all or some portion of the data maintained by the data owner. For example, the set of private data may be data that is more likely to be stolen, such as credit card data that could be sold. In another example, the set of private data may be a representative sample of the data maintained by the data owner … The system generates tuples using the data (404). Each tuple can include one or more types of data and each tuple can include different types of data than each other tuple. For example, one tuple may include a credit card number and expiration date and a second tuple may include a credit card number, expiration date, and cardholder name. The system can generate one or more tuples for each individual data record. For example, the system may generate the first and second tuple for each credit card data record included in the private data … );
“defining, on the receiving system, reward rules for detection of compromised prepared owner data including a tangible reward (Nachenberg, Para [0024, 0039]: … Various compensation techniques can be used to incentivize users that come across potentially stolen data to submit secure representations of the data to the system. For example, data owners may pay a subscription fee for use of the service and users that submit secure representations of stolen data may be paid a portion of the subscription fee when the submitted data is used to detect an actual breach … If at least a threshold number of the secure representations received from the particular data finder match a corresponding portion of the secure representation of a particular data owner, the breach detection server 114 may determine that the particular data owner's data has been breached. For example, the particular data finder may provide secure representations for each of ten stolen social security numbers and related data (e.g., names, addresses, etc.). Similarly, the particular data owner may have provided a secure representation for its customers' data, including social security numbers, names, addresses, etc. If at least a threshold number (e.g., 5, 7, or some other appropriate threshold) of the secure representations received from the particular data finder match corresponding portions of the secure representation received from the particular data owner, the breach detection system 110 may determine that the particular data owners' data has been breached as the particular data finder has found a sufficient amount of the data owners' data. This use of a threshold prevents a user for simply submitting secure representations of the user's own data in an attempt to collect compensation or a reward …);
subscribing one or more predetermined registered third party analysts to the prepared owner data (Nachenberg, Para [0064]: … The system receives secure representations of data to be monitored for breaches from data owners (302). For example, multiple data owners, e.g., credit card companies, retailers, insurance companies, corporations, businesses, and/or other organizations, may subscribe to a data breach monitoring service provided by the system. The data owners may pay a subscription fee (e.g., a periodic fee) to have their data monitored or may pay a fee in response to the detection of a breach of their data …);
receiving by the receiving system, from at least one subscribed analyst, a compromised data report indicating one or more compromises of the prepared data (Nachenberg, Para [0022, 0066]: … A data breach detection system can detect data breaches based on cryptographically secure representations of potentially stolen data provided by a user that gained access to the data. For example, bounty hunters that search for stolen data or other users may come across stolen credit card data in underground Internet forums, within the dark web, in real world locations (e.g., at a coffee shop, bar, or park), or other locations where such data is sold or shared. The user can use an application to generate secure representations (e.g., a probabilistic representation) of the data and submit the secure representations of the data to the system. Or, the user can provide the actual data and the data breach detection system can generate the secure representations of the data. As described in more detail below, an example of a probabilistic representation of data is a Bloom filter …);
validating by the receiving system the received compromised data report (Nachenberg, Para [0044-0045]: … In another example, a data owner may provide one or more individual secure representations for each database record in its database. Each secure representation may be associated with an identifier for its corresponding database record. The secure representations of the potentially stolen data can be compared to the individual secure representations. If there is a match, the breach detection system 110 can provide, to the data owner, the identifiers for the database records that have a secure representation that matches a secure representation of potentially stolen data. The data owner can use the identifier(s) to identify the person(s) associated with the database records identified by the received identifier(s) and notify the person(s) of the breach … secure representations of potentially stolen data received from a particular data finder may match the secure representations received from multiple different data owners …);
sending, by the receiving system, data indicative of the compromised data report to the data owner (Nachenberg, Para [0046, 0071], Claim 19: … the system may determine that the secure representations of the potentially stolen data match the secure representation of a data owner's data and that a data breach occurred for the data owner (310). In response, the system may notify the data owner of the breach (312). For example, the system may send an e-mail or text message to a device of the data owner to notify the data owner of the breach. In another example, if at least a threshold percentage of the secure representations of potentially stolen data match corresponding portions of the data owner's secure representation, the system may determine that a data breach occurred for the data owner and notify the data owner in response to the determination …); and
facilitated by the receiving system transfer of the reward to the subscribed analyst (Nachenberg, Para [0033, 0047, 0072]: … In some implementations, data finders are compensated or rewarded with a breach amount 148 (e.g., a monetary amount or amount of rewards) for providing potentially stolen data and/or secure representations of potentially stolen data. For example, the data breach detection system 110 may provide monetary compensation (e.g., in the form of digital currency) to a data finder that provides secure representations that are used to detect a data breach. The amount of compensation may be a pre-specified amount for each breach. For example, the breach detection system 110 may provide a particular amount irrespective of the type of breach or severity of the breach … The system may provide compensation or a reward to the user that provided the potentially stolen data or the secure representations of the potentially stolen data (314). As described above, the amount may be based on the type of data breached, whether the user was the first to submit the data, an amount requested by the user, an amount the data owner is willing to pay, and/or an amount negotiated between the user and the data owner via the system …).”
Regarding Claim 2. The combination of Nachenberg-Hatcher discloses the method according to claim 1, Nachenberg further discloses, “wherein the owner transfers the tangible reward to the receiving system when defining the reward rules (Nachenberg, Para [0047-0048]: … In some implementations, data finders are compensated or rewarded with a breach amount 148 (e.g., a monetary amount or amount of rewards) for providing potentially stolen data and/or secure representations of potentially stolen data. For example, the data breach detection system 110 may provide monetary compensation (e.g., in the form of digital currency) to a data finder that provides secure representations that are used to detect a data breach. The amount of compensation may be a pre-specified amount for each breach. For example, the breach detection system 110 may provide a particular amount irrespective of the type of breach or severity of the breach … Data owners may be required to pay fees to the data breach detection system 110 that are used to compensate the data finders. For example, data owners may be required to pay periodic fees (e.g., monthly or annually) to have secure representations of their data monitored by the data breach detection system 110. In another example, data owners may be required to pay a fee only if a breach of the data owner's data is detected by the breach detection server 114. In yet another example, the data owners may be required to pay a periodic fee and a fee in response to a breach of the data owner's data being detected by the breach detection server 114 …).”
Regarding Claim 3. The combination of Nachenberg-Hatcher discloses the method according to claim 2, Nachenberg further discloses, “wherein the receiving system holds the tangible reward directly or indirectly in escrow until the subscriber analyst compromised data report is validated (Nachenberg, Para [0048]: … Data owners may be required to pay fees to the data breach detection system 110 that are used to compensate the data finders. For example, data owners may be required to pay periodic fees (e.g., monthly or annually) to have secure representations of their data monitored by the data breach detection system 110. In another example, data owners may be required to pay a fee only if a breach of the data owner's data is detected by the breach detection server 114. In yet another example, the data owners may be required to pay a periodic fee and a fee in response to a breach of the data owner's data being detected by the breach detection server 114 …).”
Regarding Claim 4. The combination of Nachenberg-Hatcher discloses the method according to claim 1, Nachenberg further discloses, “wherein the reward rules include defining a minimum predetermined level of trust (Nachenberg, Para [0008, 0023, 0039, 0068]: … The system determines whether the secure representations of the potentially stolen data match the secure representation of a data owner's data (306). In some implementations, the system determines that a breach occurred for a data owner if at least a threshold number of secure representations of potentially stolen data matches corresponding portions of the data owner's secure representation …), and/or skillset of the subscribing analysts.”
Regarding Claim 5. The combination of Nachenberg-Hatcher discloses the method according to claim 1, Nachenberg further discloses, “wherein the step of the receiving system validating the compromised data report further includes validation by the data owner (Nachenberg, Para [0034]: … The data finders' computers 140 include a secure data application 144, which may be the same as or similar to the secure data application 154 of the data owners' computers. The secure data application 154 may generate secure representations of potentially stolen data 142. For example, a data finder may provide, as input to the secure data application 154, potentially stolen data found in an Internet forum. The potentially stolen data may include multiple data records. For example, the stolen data may include a data record for each stolen credit card number and each data record may include data related to the stolen credit card number, e.g., the expiration date of the credit card, a security code for the credit card, the name of the cardholder, the cardholder's address, and/or other data found by the data finder. As a hacker may only provide a small subset of stolen data for authentication purposes, the stolen data may be in a different format from the format the data was in when stolen and may include incomplete data for each data record …).”
Regarding Claim 6. The combination of Nachenberg-Hatcher discloses the method according to claim 1, Nachenberg further discloses, “wherein the subscribed analyst compromised data report is assessed structurally, grammatically, technically and genuity (Nachenberg, Para [0012, 0034]: … Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. A system that allows users to anonymously provide cryptographically secure representations of data that may have been stolen or otherwise obtained in an unauthorized manner can lead to earlier detection of data breaches and increase the likelihood that data breaches are detected prior to the data being misused. As hackers typically must provide some of their stolen data to potential buyers to demonstrate the authenticity of the data, allowing users to submit secure representations of this data to the system makes it more difficult for hackers to monetize stolen data … The secure data application 154 may generate secure representations of potentially stolen data 142. For example, a data finder may provide, as input to the secure data application 154, potentially stolen data found in an Internet forum. The potentially stolen data may include multiple data records. For example, the stolen data may include a data record for each stolen credit card number and each data record may include data related to the stolen credit card number, e.g., the expiration date of the credit card, a security code for the credit card, the name of the cardholder, the cardholder's address, and/or other data found by the data finder. As a hacker may only provide a small subset of stolen data for authentication purposes, the stolen data may be in a different format from the format the data was in when stolen and may include incomplete data for each data record …).”
Regarding Claim 7. The combination of Nachenberg-Hatcher discloses the method according to claim 6, Nachenberg further discloses, “wherein the structural assessment includes verifying: the subscriber identity, the format of the compromised data report, and the data of the compromised data report matches the prepared owner data (Nachenberg, Para [0038-0039]: … If at least a threshold number of the secure representations received from the particular data finder match a corresponding portion of the secure representation of a particular data owner, the breach detection server 114 may determine that the particular data owner's data has been breached. For example, the particular data finder may provide secure representations for each of ten stolen social security numbers and related data (e.g., names, addresses, etc.). Similarly, the particular data owner may have provided a secure representation for its customers' data, including social security numbers, names, addresses, etc. If at least a threshold number (e.g., 5, 7, or some other appropriate threshold) of the secure representations received from the particular data finder match corresponding portions of the secure representation received from the particular data owner, the breach detection system 110 may determine that the particular data owners' data has been breached as the particular data finder has found a sufficient amount of the data owners' data. This use of a threshold prevents a user for simply submitting secure representations of the user's own data in an attempt to collect compensation or a reward …).”
Regarding Claim 16. This claim contains all the same or similar limitations as claim 1, and is hence rejected similarly to claim 1.
*** Note: Nachenberg also discloses the system (Nachenberg: FIG.1, Para [0015]).
Regarding Claim 19. This claim contains all the same or similar limitations as claim 1, and is hence rejected similarly to claim 1.
*** Note: Nachenberg also discloses the non-transitory computer readable storage medium (Nachenberg: Para [0081]).
Claims 8-11, 17-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pub. No.: US 20180268135 A1 to Nachenberg et al. (hereinafter “Nachenberg”) in view of Pub. No.: US 20220198059 A1 to Hatcher et al. (hereinafter “Hatcher”), as applied to claim 1 above, and further in view of Pub. No.: US 20240362345 A1 to Babani (hereinafter “Babani”).
Regarding Claim 8. The combination of Nachenberg-Hatcher discloses the method according to claim 1, “including the step of remediating the owner data (Nachenberg, Para [0042]: … the breach detection server 114 may also notify the people affected by the breach. For example, a data owner may provide contact data (e.g., e-mail address, mobile phone numbers, etc.) for its customers whose data is represented by a secure representation. If the breach detection server 114 determines that the data represented by the secure representation was breached, the breach detection server 114 may send notifications to the affected people using the contact data. These notifications may specify that their data may have been compromised and include instructions for remedying the situation …) [to remove one or more compromises].”
However, the combination of Nachenberg-Hatcher does not explicitly teach, but Babani, from the same or a similar field of endeavor, teaches, “ … to remove one or more compromises (Babani, Para [0079]: … In some embodiments, management console 504 may enable data provider 506 and/or data access network 510 to revoke access to a data recipient on-the-fly, e.g., upon determining the particular data recipient has been compromised. In some embodiments, management console 504 may enable data provider 506 to specify, or otherwise instruct data access network 510, that data (or requests to access user data stored at data provider 506) should not be sent to data provider 506 (or data recipient 512) at certain times, or only certain data should be sent at certain times. For example, a particular data provider may prefer not to be flooded with data traffic from the hours of 9 AM EST-10 AM EST of a business day, since many users may log into data provider 506 at this time to manage their stock portfolio, and the data provider may communicate this preference to data access network 510 …)”.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Babani into the combined teachings of Nachenberg-Hatcher, because it discloses that, “features may enable an intermediary entity (e.g., sitting between the data provider and the data recipient) to enhance the security, scalability and reliability of the intermediary entity's network, e.g., to allow the intermediary entity to be less dependent on the demands of a single data provider (and/or single data recipient). Such features may enable a distributed network to be established in which each data provider has its own dedicated resources (e.g., routing path and/or data flow), independent of other data providers, to reduce or eliminate the impact of a particular data provider (or data recipient) on other data providers and/or data recipients in the network. For example, in a cloud network, each data provider may be associated with a different network or instance than other data providers, whether in the same or different data center. Moreover, such an arrangement may enable cloud computing logs of network activity associated with (and/or resources expended on) each data provider to be separated natively in the cloud (Babani, Para [0012])”.
Regarding Claim 9. The combination of Nachenberg-Hatcher discloses the method according to claim 1; however, it does not explicitly teach, but Babani, from the same or a similar field of endeavor, teaches, “wherein the step of identifying predetermined data types includes the steps of defining existing data log identification routines and determining if an application has been previously defined, wherein the identification routines include assigned telemetry data (Babani, Para [0027, 0091, 0097]: … In some embodiments, the systems, methods and apparatuses provided herein may be further configured to generate a first log associated with requests received at the first network resource from one or more data recipients, and generate a second log associated with requests received at the second network resource from one or more data recipients … The transfer of user data 804 from data provider 806 to data access network 810 may be caused in response to receiving an indication from a user desiring to share his or her data (e.g., stored in association with data provider 806) with data recipient 812. Based on such request, a data access API, which may be externally facing for use by data recipients to access user data, may communicate with data provider 806 to facilitate user data elements for a particular product (e.g., checking account or VISA account) to be sent to collector 726 of FIG. 7, where the data and its format may vary based on bank and product type. Such user data elements may be filtered by data access network 810 based on data directive 808, prior to being provided to data recipient 812. In some embodiments, data access network 810 may define and configure the attributes and fields of various accounts and account types from the various data providers 806, based on information received from data provider 806, in order to implement rules and entitlements of each data provider when providing data to data recipient 812. 
In some embodiments, data 804 may be stored in any suitable structured or semi-structured data format (e.g., JSON, XML) … Data 823 may correspond to the remaining data fields that may be present after filtering is performed. In some embodiments, once the filtered data is obtained, data access network 810 may perform a check to ensure that no other data directives have been received from data provider 806, and may perform filtering if an intervening data directive is detected. Data access network 810 may convert data 823 into a format (e.g., JSON or XML) that is suitable for data recipient 812. In some embodiments, data access network 810 may consume data in any format, perform any suitable filtering on the data, and provide data in a suitable format to data recipient 812 on the downstream side. In some embodiments, data of byte stream 811 may flow encrypted into data access network 810, and data may be provided to data recipient 812 in an encrypted manner, where data may be encrypted and decrypted by respective parties using any suitable method (e.g., using private-public key pairs). In some embodiments, a header portion of data may indicate a type of data included in the payload …).”
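For illustration only, the directive-based field filtering quoted above (data access network 810 filtering user data elements per data directive 808 before passing them downstream) might be sketched as follows. The function name, directive shape, and field names are assumptions made for this sketch, not details disclosed by Babani:

```python
def apply_directive(record: dict, directive: dict) -> dict:
    """Keep only the fields the data provider's directive allows, so that
    non-compliant fields are never passed downstream to the data recipient.

    The directive format ({"allowed_fields": [...]}) is hypothetical.
    """
    allowed = set(directive.get("allowed_fields", []))
    return {field: value for field, value in record.items() if field in allowed}
```

Under this sketch, a checking-account record containing an account identifier and a social security number, filtered under a directive allowing only the account identifier, would reach the data recipient with the social security number omitted.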
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Babani into the combined teachings of Nachenberg-Hatcher, because it discloses that, “features may enable an intermediary entity (e.g., sitting between the data provider and the data recipient) to enhance the security, scalability and reliability of the intermediary entity's network, e.g., to allow the intermediary entity to be less dependent on the demands of a single data provider (and/or single data recipient). Such features may enable a distributed network to be established in which each data provider has its own dedicated resources (e.g., routing path and/or data flow), independent of other data providers, to reduce or eliminate the impact of a particular data provider (or data recipient) on other data providers and/or data recipients in the network. For example, in a cloud network, each data provider may be associated with a different network or instance than other data providers, whether in the same or different data center. Moreover, such an arrangement may enable cloud computing logs of network activity associated with (and/or resources expended on) each data provider to be separated natively in the cloud (Babani, Para [0012])”.
Regarding Claim 10. The combination of Nachenberg-Hatcher-Babani discloses the method according to claim 9, Babani further discloses, “including the step of converting the owner data into an object wherein the original data is an element (Babani, Para [0091, 0094, 0099]: … The transfer of user data 804 from data provider 806 to data access network 810 may be caused in response to receiving an indication from a user desiring to share his or her data (e.g., stored in association with data provider 806) with data recipient 812. Based on such request, a data access API, which may be externally facing for use by data recipients to access user data, may communicate with data provider 806 to facilitate user data elements for a particular product (e.g., checking account or VISA account) to be sent to collector 726 of FIG. 7 … Key-value map 816 may temporarily store user data elements in a hierarchical manner to store certain data fields at predefined memory locations (e.g., corresponding to a memory address of buffer memory 729) such that a memory location of certain data fields may be known and indexed. Thus, user data that complies with data directive 808, and user data that does not comply with data directive 808, may be identified using index key identifiers in key-value map 816 without having to process the entirety of the received chunk of user data, thereby reducing the time and processing power required to perform filtering of data … data provider 806 may instruct data access network (e.g., by way of data directive 808) to mask certain data elements, e.g., based on the identity of data recipient 812. For example, a predefined number of digits or characters of an account number may be masked (e.g., replaced with an asterisk or star character) such that only a subset of the digits are readable. In some embodiments, masking may be performed on data elements received in a particular format (e.g., JSON) to mask private information (e.g., social security numbers of a user). Such masking operation may be used by data access network 810 to perform the filtering operation. For example, any suitable syntax (*.accountId) may be used to specify that a particular data element (e.g., accountID) should be returned regardless of a parent element in which it is contained (e.g., InvestmentAccount) …).”
The motivation to further combine Babani remains the same as in claim 9.
Regarding Claim 11. The combination of Nachenberg-Hatcher-Babani discloses the method according to claim 10, Babani further discloses, “wherein the step of redacting owner specific data includes the steps of predefining one or more redaction routines according to the object data wherein object data requiring redaction is identified and replaced with predetermined redacted data (Babani, Para [0091, 0094, 0099]: …in some embodiments, data access network 810 may be configured to perform one or more of a variety of masking operations on data elements 804 received from data provider 806. For example, data provider 806 may instruct data access network (e.g., by way of data directive 808) to mask certain data elements, e.g., based on the identity of data recipient 812. For example, a predefined number of digits or characters of an account number may be masked (e.g., replaced with an asterisk or star character) such that only a subset of the digits are readable …).”
The motivation to further combine Babani remains the same as in claim 10.
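For illustration only, the masking-based redaction routine quoted above (replacing all but a subset of an account number's digits with an asterisk) might be sketched as follows. The function names and the choice of which fields to mask are assumptions made for this sketch, not details disclosed by Babani:

```python
def mask_value(value: str, visible: int = 4, mask_char: str = "*") -> str:
    """Replace all but the last `visible` characters with a mask character,
    so only a subset of the digits remains readable."""
    if len(value) <= visible:
        return value
    return mask_char * (len(value) - visible) + value[-visible:]

def redact_record(record: dict, fields_to_mask: set) -> dict:
    """Apply a predefined redaction routine: identified fields are replaced
    with predetermined redacted (masked) data; other fields pass through."""
    return {key: (mask_value(val) if key in fields_to_mask else val)
            for key, val in record.items()}
```

Under this sketch, a 16-digit account number would be returned as twelve asterisks followed by its last four digits, while unmasked fields in the same record pass through unchanged.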
Regarding Claim 17. This claim contains all the same or similar limitations as claim 8, and is hence rejected similarly to claim 8.
Regarding Claim 18. This claim contains all the same or similar limitations as claim 9, and is hence rejected similarly to claim 9.
Regarding Claim 20. This claim contains all the same or similar limitations as claim 9, and is hence rejected similarly to claim 9.
Allowable Subject Matter
Claims 12-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
As allowable subject matter has been indicated, applicant's reply must either comply with all formal requirements or specifically traverse each requirement not complied with. See 37 CFR 1.111(b) and MPEP § 707.07(a).
Examiner further notes that, should the Applicant amend the claims as directed above, all of the independent claims should be made similar in scope.
Reasons for allowance will be furnished upon allowance.
Pertinent Prior Art
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20220067207 A1; Lindsay: Lindsay discloses generation of multiple types of tokens that are utilized in a highly structured document with freeform text. For example, a tokenization system may receive a request for tokenizing a document with a first portion having structured content and a second portion having unstructured or semi-structured content. In response, the tokenization system identifies sensitive information in the first portion of the document, generates format-preserving tokens for the sensitive information in the first portion of the document, identifies sensitive information in the second portion of the document, and generates self-describing tokens for the sensitive information in the second portion of the document. The self-describing tokens reference the sensitive information in the first portion of the document. The tokenization system may then communicate the format-preserving tokens and the self-describing tokens to the first client computing system or to a second client computing system.
The objective of Lindsay’s invention is to protect sensitive data values, such as credit card numbers. To that end, the industry has evolved a “tokenization” strategy, which entails providing a surrogate value, called a “token,” to be used in place of the actual value. That is, in data security, tokens are surrogate values which are substitutes for the actual data (e.g., credit card number, social security number, account number, etc.), while the actual data is encrypted and stored elsewhere (e.g., in a secure data vault).
A tokenization operation takes as input a sensitive data value such as a credit card number, creates a randomized token, connects or associates the token with the original value, and returns the token, so that the application and any downstream processing can use the token in place of the original sensitive value without risking security breaches. The token-value pair is stored in a secure data vault, which is protected using strong encryption. The token can be used in all other systems outside the tokenization system that generated it. This minimizes the footprint of sensitive data in the computing environment (e.g., an enterprise computer network) where processing of the data takes place. As will be discussed below further, the original value can be restored if and when needed.
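For illustration only, the tokenization operation described above (creating a randomized token, associating it with the original value in a vault, and restoring the original when needed) might be sketched as follows. The class and method names are assumptions made for this sketch, not Lindsay's implementation, and a real vault would encrypt its stored token-value pairs:

```python
import secrets

class TokenVault:
    """Toy token vault: issues random surrogate tokens and stores the
    token-value pair so the original can be restored when needed.

    In practice the vault contents would be protected with strong
    encryption; this sketch stores them in plain memory for clarity.
    """

    def __init__(self):
        self._vault = {}  # token -> original sensitive value

    def tokenize(self, value: str) -> str:
        """Create a randomized token and associate it with the value."""
        token = secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Restore the original value for an issued token."""
        return self._vault[token]
```

Downstream systems would then carry only the token, minimizing the footprint of sensitive data in the computing environment.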
US 20090048997 A1; Manickam et al.: Manickam discloses an approach for de-personalizing data. Content from a data source is retrieved in response to a request by a user. A rule for masking data (e.g., web data) is determined, wherein the rule is specified in a profile associated with the user. A search, within the content, for data that satisfy the rule is performed. The data that satisfy the rule is masked. The content is then modified with the masked data for delivery to the user.
US 12316610 B1; Muth et al.: Muth discloses a privacy network and unified trust model using privacy algorithms that can completely obfuscate any data, rendering the data opaque and meaningless so it can be freely aggregated and shared without risk of a security or privacy breach. The obfuscated algorithms can be a