Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Allowable Subject Matter
Claims 6-7 and 15-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 102/103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
MPEP 2112 Section III.
Where applicant claims a composition in terms of a function, property or characteristic and the composition of the prior art is the same as that of the claim but the function is not explicitly disclosed by the reference, the examiner may make a rejection under both 35 U.S.C. 102 and 103, expressed as a 102/103 rejection. "There is nothing inconsistent in concurrent rejections for obviousness under 35 U.S.C. 103 and for anticipation under 35 U.S.C. 102." In re Best, 562 F.2d 1252, 1255 n.4, 195 USPQ 430, 433 n.4 (CCPA 1977). This same rationale should also apply to product, apparatus, and process claims claimed in terms of function, property or characteristic. Therefore, a 35 U.S.C. 102/103 rejection is appropriate for these types of claims as well as for composition claims.
Claims 1-5, 8-14, and 17-18 are rejected under 35 U.S.C. 102/103 as being unpatentable over Davey (US Pub. No. 20220200969 A1).
Per claim 1, Davey (US Pub. No. 20220200969 A1) suggests a system comprising a computerized method of processing protected information comprising: processing (reads on receiving first user data, see Davey Figure 10 block 1002 and para 0170) a first set of protected information (reads on the totality of confidential first user data that includes identifiers of one or more users, see Davey Figure 10 block 1002 and para 0072, 0079, 0170) in a first secure enclave (reads on a data exchange platform corresponding to healthcare platforms of the first party, see Davey para 0082 and 0146) to generate a first output (reads on the user data desired to be provided to the second party, see Davey para 0079 and 0146); encrypting identifiers (reads on obtaining first user identifiers corresponding to the first set of one or more users known to the first party and encrypting them using a homomorphic encryption algorithm, see Davey para 0147-0148. The Examiner notes Applicant’s disclosure teaches this is the same as encrypting an identifier as a first hash) of the first output (reads on the user data desired to be provided to the second party, see Davey para 0079 and 0146) as a first hash (reads on the encrypted first user identifiers that may be transmitted from the first party to the second party, see Davey para 0153 and Figure 8); encrypting the entire (reads on encrypting first user data that may include one or more identifiers of users in the first set of users, see Davey para 0170 and Figure 10 block 1002) first output (reads on the user data desired to be provided to the second party, see Davey para 0079 and 0146) as a first encrypted payload (reads on the data encrypted and transmitted to the second party, where the transmission may also include the encrypted first user identifiers, see Davey para 0172 and Figure 8 block 808 and Figure 10 block 1004); transfer the (reads on transmit the encrypted first user data, see Davey Figure 10 block 1004) first encrypted payload (reads on the data encrypted and transmitted to the second party, where the transmission may also include the encrypted first user identifiers, see Davey para 0172 and Figure 8 block 808 and Figure 10 block 1004) to a second secure enclave (reads on to a second party, see Davey para 0079, 0082, 0146, 0153, 0168); process (reads on receiving second user data, see Davey Figure 10 block 1002 and para 0170 and Figure 4 blocks 422 and 430 and para 0121 and 0173) a second set of protected information (reads on the totality of confidential second user data that includes identifiers of one or more users, see Davey Figure 10 block 1002 and para 0072, 0079, 0121, 0170 and Figure 4 blocks 430 and 432) in the second secure enclave (reads on a data exchange platform corresponding to healthcare platforms of the second party, see Davey para 0082 and 0146 and Figure 4 block 422) to generate a second output (reads on the user data desired to be provided to the first party, see Davey para 0079 and 0146); encrypting identifiers of the second output as a second hash (reads on encrypting the second user identifiers, see Davey Figure 8 block 814); match hashes between the first hash and the second hash (reads on determine numerical differences between the encrypted first user identifiers and the second user identifiers, see Davey Figure 8 block 816 and para 0155-0158); identify candidate data based on the matches (reads on determine common users known to each party/intersection data based on the overlap in the encrypted identifiers, see Davey para 0080 and 0158).
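The hash-and-match steps of claim 1 as mapped above can be sketched as follows. This is an illustrative stand-in only: a salted SHA-256 digest models "encrypting identifiers ... as a first hash," whereas Davey encrypts the identifiers homomorphically, and the identifiers and salt below are hypothetical.

```python
import hashlib

def hash_identifiers(ids, salt=b"shared-salt"):
    # "encrypting identifiers of the ... output as a ... hash"
    return {hashlib.sha256(salt + i.encode()).hexdigest(): i for i in ids}

first_hashes = hash_identifiers(["alice", "bob", "carol"])   # first enclave's output
second_hashes = hash_identifiers(["bob", "carol", "dave"])   # second enclave's output

# "match hashes between the first hash and the second hash"
matched = first_hashes.keys() & second_hashes.keys()

# "identify candidate data based on the matches"
candidates = sorted(first_hashes[h] for h in matched)
print(candidates)  # ['bob', 'carol']
```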
[0071] As used herein, a party may include an individual, organization, company and/or corporation, for example. In some implementations, a party owns, operates or is otherwise associated with a device, system and/or computing platform that provides products and/or services to users. By way of example, the e-commerce platform 100 may be considered a computing platform that is owned and operated by a particular party.
[0072] In some embodiments, data that is exchanged using the data exchange engine 300 may be or include user data. This user data may be stored in the data facility 134, for example. The data exchange engine 300 may control the exchange of user data to help avoid the unnecessary disclosure of user data to another party, which may help protect user privacy. For example, privacy restrictions implemented by the e-commerce platform 100 may limit what user data can be shared and/or which parties user data can be shared with. Privacy restrictions may be imposed or defined by users (for example, based on the privacy settings users select through their account on the e-commerce platform 100), by the e-commerce platform 100 (for example, based on the privacy policies of the e-commerce platform 100) and/or by government regulations. As discussed in further detail elsewhere herein, the data exchange engine 300 may implement encryption to control the exchange of user data.
[0073] Although the data exchange engine 300 is illustrated as a distinct component of the e-commerce platform 100 in FIG. 3, this is only an example. A data exchange engine could also or instead be provided by another component of the e-commerce platform 100 or be offered as a stand-alone component or service that is external to the e-commerce platform 100. The e-commerce platform 100 could include multiple data exchange engines that are provided by one or more parties. The multiple data exchange engines could be implemented in the same way, in similar ways and/or in distinct ways. In addition, at least a portion of a data exchange engine could be implemented on a user device. For example, the merchant device 102 could store and run a data exchange engine locally as a software application to help control the exchange of data.
[0079] Some embodiments of the present disclosure provide systems and methods for sharing user data between two parties with confidential user data sets. Each party may encrypt their user data set before sending it to the other party. The user data sets may include identifiers of one or more users, which allows the common users known to each party to be determined. The parties are not able to directly decrypt each other's user data sets, and therefore the identifiers included in the user data sets remain confidential. However, one or both of the parties may use the encrypted user data sets to identify the common users that are known to each party. Once the common users are identified, user data pertaining to the common users may be exchanged between the two parties.
[0080] Encrypted data that is used to identify common users known to multiple parties may be referred to as “intersection data”. As used herein, intersection data is based on encrypted user data obtained from multiple parties. No single party can decrypt the intersection data and read the user data obtained from another party. However, the intersection data does allow at least one party to determine the commonalities or overlap between the user data obtained from two or more parties. This overlap may provide the identity of common users known to the parties, while the identity of users that are not known to a party remain confidential. In this way, intersection data may indicate the commonalities between multiple user data sets while maintaining the confidentiality of the user data sets.
[0082] The data exchange engines 402, 422 may be devices that are associated with respective parties interested in exchanging data with each other and/or with other parties. Computing platforms owned and/or operated by the respective parties may host, implement and/or use the data exchange engines 402, 422 to facilitate this exchange of data. In one example, the data exchange engines 402, 422 correspond to an online store and a social media platform, respectively, that are interested in exchanging digital advertising data. In another example, the data exchange engines 402, 422 correspond to two health care platforms that are interested in exchanging medical data. In yet another example, the data exchange engines 402, 422 correspond to a banking platform and a government agency, respectively, that are interested in exchanging financial data. The data exchange engines 402, 422 may implement one or more of the methods disclosed herein to help control the exchange of user data in a manner that limits the loss of user privacy.
[0083] The network 420 may be a computer network implementing wired and/or wireless connections between two or more devices, including but not limited to the data exchange engines 402, 422. The network 420 may implement any communication protocol known in the art. Non-limiting examples of communication protocols include a local area network (LAN), a wireless LAN, an internet protocol (IP) network, and a cellular network.
[0084] In FIG. 4, two data exchange engines are shown by way of example. More than two data exchange engines may be in communication via the network 420 to exchange data.
[0085] The data exchange engine 402 includes a processor 404, memory 406, a network interface 408, an encrypter 416 and a decrypter 418. The memory 406 stores user data 410 including user identifiers 412 and activity data 414. Similarly, the data exchange engine 422 includes a processor 424, memory 426, a network interface 428, an encrypter 436 and a decrypter 438. The memory 426 stores user data 430 including user identifiers 432 and activity data 434. The data exchange engine 402 will be described by way of example below. However, it should be noted that the data exchange engine 422 may be implemented in a similar manner.
[0146] FIG. 8 is a flow diagram illustrating a process 800 for exchanging user data between a first party 850 and a second party 852, according to another embodiment. Homomorphic encryption is implemented in the process 800 to help limit the exchange of user data to users that are known to both the first party 850 and the second party 852. In some implementations, the process 800 is performed at least in part by the system 400 of FIG. 4. For example, when performing the process 800, the first party 850 may use the data exchange engine 402 and the second party 852 may use the data exchange engine 422.
[0147] In step 802, the first party 850 obtains first user identifiers corresponding to a first set of one or more users known to the first party 850. The first party 850 may want to exchange information with the second party 852 pertaining to any users in the first set of users that the second party 852 already knows about. Further details regarding obtaining first user identifiers for a first set of users are provided above with reference to step 502 of the process 500.
[0148] Step 804 includes encrypting the first user identifiers using a homomorphic encryption algorithm. This algorithm allows mathematical operations that are performed on the encrypted first user identifiers to be maintained once the encryption is removed. In some implementations, step 804 is performed using the encrypter 416 in the data exchange engine 402.
[0149] In some implementations, the homomorphic encryption algorithm used in step 804 is an asymmetric algorithm that encrypts data using a first key and decrypts data using a second key. The first key may be shared with the second party 852 and the second key may be kept confidential. In this way, the first key may be considered a public key and the second key may be considered a private key.
[0150] Optionally, the homomorphic encryption applied in step 804 is non-repeatable. This may help maintain the privacy of the first user identifiers. For example, even if another party has the encrypted first user data and the first key, that party would not be able to determine the first user data by guessing user identifiers, encrypting those guesses with the first key and comparing the encrypted guesses to the encrypted first user data.
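Paragraphs [0148]-[0150] describe an asymmetric, non-repeatable, homomorphic scheme. A toy Paillier-style construction (one well-known additively homomorphic algorithm; Davey does not name a specific one, and the primes below are insecure demonstration values) illustrates both properties: the random factor r makes each encryption non-repeatable, and multiplying ciphertexts adds the underlying plaintexts.

```python
import random
from math import gcd

# Toy Paillier-style parameters; insecure, for illustration only
p, q = 293, 433
n = p * q                                      # public modulus ("first key" material)
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), kept private
mu = pow(lam, -1, n)                           # private ("second key" material)

def encrypt(m):
    # The random r makes encryption non-repeatable (para 0150): encrypting
    # the same identifier twice yields different ciphertexts.
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Homomorphic property: multiplying ciphertexts adds plaintexts
a, b = 17, 25
assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b
```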
[0151] Step 806 is an optional step that includes obtaining short identifiers for one or more of the first user identifiers. Further details regarding obtaining short identifiers are provided above with reference to step 506 of the process 500.
[0152] FIG. 6 provides an example of first user identifiers, short identifiers and encrypted first user identifiers that could be obtained and/or generated by the first party 850 following steps 802, 804, 806.
[0153] The encrypted first user identifiers obtained in step 804 and, optionally, the short identifiers obtained in step 806 are transmitted by the first party 850 in step 808 and are received by the second party 852 in step 810. Further, if the second party 852 does not already know the first key that is used to perform the encryption in step 804, then the first key may also be transmitted by the first party 850 in step 808 and received by the second party 852 in step 810. The second party 852 does not know the second key needed to decrypt the encrypted first user identifiers, and therefore the second party 852 is generally unable to decipher or read the first user identifiers. However, the second party 852 may be able to decipher or read the short identifiers.
[0158] In step 818, the numerical differences calculated in step 816, which are optionally multiplied by a random number, are transmitted from the second party 852 to the first party 850. The first party 850 receives the numerical differences in step 820. The numerical differences provide an example of intersection data that may be used by the first party 850 to determine user identifiers that are included in both the first user identifiers and the second user identifiers.
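Steps 816-820 rest on a simple arithmetic fact: a difference multiplied by a random blinding factor is zero exactly when the two values are equal, and otherwise reveals nothing useful about either value. In Davey this computation is carried out on homomorphically encrypted identifiers; the plaintext sketch below, with hypothetical identifier values, shows only the blinding idea.

```python
import random

def blinded_difference(a, b):
    # Steps 816/818: compute the difference and multiply it by a random
    # blinding factor; only equality (a zero result) survives the blinding.
    r = random.randrange(1, 2**32)
    return (a - b) * r

# Step 824: a zero result identifies a common identifier; a nonzero
# result is blinded and does not expose the underlying values.
print(blinded_difference(1234, 1234))        # 0
print(blinded_difference(1234, 5678) != 0)   # True
```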
[0168] FIG. 10 is a flow diagram illustrating a method 1000 for exchanging user data, according to an embodiment. The method 1000 may be performed by the first party 550 during the process 500 of FIG. 5 and/or by the first party 850 during the process 800 of FIG. 8. In this way, the method 1000 may be considered a partial generalization of the processes 500, 800. However, it should be noted that the method 1000 is in no way limited to the processes 500, 800.
[0169] The method 1000 will be described as being performed by the data exchange engine 402 of FIG. 4 to help facilitate data exchange with the data exchange engine 422, but this is only an example. The method 1000 could more generally be performed by other systems and/or devices. Further, the method 1000 may be performed to help facilitate data exchange with systems and/or devices other than the data exchange engine 422.
[0170] Step 1002 includes the encrypter 416 encrypting first user data with a first key to obtain encrypted first user data. The first user data and the encrypted first user data may correspond to a first set of one or more users. For example, the first user data may include one or more identifiers of users in the first set of users. These identifiers may be obtained from the user identifiers 412 stored in the memory 406 and/or from another computer readable medium, for example. The first user data may also include padding data. Examples of first user data are described above with reference to step 502 of the process 500 and to step 802 of the process 800.
[0171] In some implementations, step 1002 includes commutatively encrypting the first user data with the first key to obtain the encrypted first user data, as in step 504 of the process 500, for example. Alternatively or additionally, step 1002 may include homomorphically encrypting the first user data with the first key to obtain the encrypted first user data, as in step 804 of the process 800, for example.
[0172] Step 1004 includes the processor 404 transmitting the encrypted first user data to the data exchange engine 422 and/or to another device. Step 1006 is an optional step that includes the processor 404 transmitting at least one short identifier of the first user data to the data exchange engine 422 and/or to another device. Step 508 of the process 500 and step 808 of the process 800 provide examples of steps 1004, 1006.
[0173] Step 1008 includes the processor 404 receiving intersection data from the data exchange engine 422 and/or from another device. The intersection data is based on the encrypted first user data and second user data corresponding to a second set of users. The second user data may include one or more identifiers of users in the second set of users and/or padding data. In some implementations, the second user data is associated with the at least one short identifier optionally transmitted in step 1006. For example, the second set of users may have been selected, at least in part, based on the at least one short identifier.
[0174] In some implementations, step 1008 may be similar to step 520 of the process 500. For example, the intersection data may include or may otherwise be based on double-encrypted first user data and encrypted second user data. The double-encrypted first user data may correspond to the encrypted first user data being further encrypted with a third key by the data exchange engine 422, for example. The encrypted second user data may also correspond to the second user data encrypted with the third key by the data exchange engine 422, for example. The third key may be different from the first key used in step 1002.
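The double-encryption variant in paragraph [0174] relies on commutative encryption: applying the first party's key and then the other engine's third key yields the same ciphertext as the reverse order, so double-encrypted identifiers can be compared directly. A minimal sketch using Pohlig-Hellman-style exponentiation (one well-known commutative construction; Davey does not specify the algorithm, and the keys and identifiers below are hypothetical):

```python
# Commutative encryption sketch: E_k(x) = x^k mod p, so encrypting with one
# party's key and then the other's gives the same result in either order.
p = 2**127 - 1                        # a prime modulus (toy choice)
first_key, third_key = 7919, 104729   # secret exponents of the two engines

def enc(x, k):
    return pow(x, k, p)

first_ids = {101, 202, 303}     # identifiers known to the first party
second_ids = {202, 303, 404}    # identifiers known to the second party

# Double-encrypted first user data (para 0174): first key, then third key
double_first = {enc(enc(x, first_key), third_key) for x in first_ids}

# Second user data encrypted with the third key, then re-encrypted by the
# first party with its own key; commutativity makes the results comparable
double_second = {enc(enc(x, third_key), first_key) for x in second_ids}

overlap = double_first & double_second
print(len(overlap))  # 2 common users (identifiers 202 and 303)
```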
[0175] In some implementations, step 1008 may be similar to step 820 of the process 800. For example, the intersection data may be based on a difference between the encrypted first user data and encrypted second user data. The encrypted second user data may correspond to the second user data encrypted with the first key by the data exchange engine 422, for example. Optionally, the intersection data is further based on or includes the numerical difference multiplied by a random number.
[0176] Step 1010 includes the decrypter 418 decrypting at least some of the intersection data with a second key to obtain decrypted intersection data. The decryption using the second key may remove the encryption applied using the first key in step 1002. However, if a symmetric encryption algorithm is used in step 1002, then the first key and the second key may be identical.
[0177] In some implementations, step 1010 further includes decrypting double-encrypted first user data with the second key to obtain the decrypted intersection data, as in step 522 of the process 500, for example. Alternatively or additionally, step 1010 may include decrypting the numerical differences using the second key, as in step 822 of the process 800, for example.
[0178] Step 1012 includes the processor 404 determining, based on the decrypted intersection data, an overlap between at least some of the first user data and at least some of the second user data. This overlap could correspond to one or more users that are in both of the first set of users and the second set of users. For example, step 1012 could include determining that an identifier of a user is in both the first user data and the second user data.
[0179] In some implementations, step 1012 may include determining an overlap between the decrypted intersection data and the encrypted second user data, similar to step 524 of the process 500, for example. Alternatively or additionally, step 1012 may include determining that at least one decrypted numerical difference equals zero, similar to step 824 of the process 800, for example.
[0180] Step 1014 includes the processor 404 exchanging activity data with the data exchange engine 422 and/or another device. The activity data may correspond to the users that are in both of the first and second sets of users. Step 1014 may include transmitting and/or receiving the activity data. Optionally, the activity data includes personal information corresponding to a user that is in both the first set of users and the second set of users. An example of activity data is a record of digital advertising presented to the user. In some implementations, the activity data is obtained from the activity data 414 stored in the memory 406 and/or from another computer readable medium. Step 526 of the process 500 and step 826 of the process 800 are two possible implementations of step 1014.
[media_image1.png: greyscale PNG, 620 × 846]
[media_image2.png: greyscale PNG, 618 × 844]
[media_image3.png: greyscale PNG, 700 × 500]
Per claim 2, the prior art of record further suggests wherein the first secure enclave is located within a first data steward infrastructure, and the second secure enclave is located within a second data steward infrastructure (reads on each data exchange engine has its own memory and processor and is associated with a respective party and is owned/operated by that party, see Davey para 0079, 0082 and 0085).
Per claim 3, the prior art of record further suggests wherein the first set of protected information and the second set of protected information is protected healthcare information (reads on two healthcare platforms exchanging medical data, see Davey para 0082).
Per claim 4, the prior art of record further suggests wherein the candidate data is a patient record (The Examiner asserts this is an obvious limitation of the disclosure of Davey because the disclosure teaches healthcare information being protected and amongst that data being health card numbers and one of ordinary skill in the art would know the conventional reason for storing health card numbers is to associate those numbers with particular patient records, see Davey para 0082 and 0091).
Per claim 5, the prior art of record further suggests further comprising performing a medical procedure on patients identified by the patient record (The Examiner construes this to be extra-solution activity because one of ordinary skill in the art would consider the act of performing a medical procedure on patients to be the conventional activity of a healthcare provider and would also consider it immaterial to the process of transferring encrypted data between distinct and separate healthcare platforms, see Davey para 0082, 0091 and 0097).
Per claim 8, the prior art of record further suggests wherein the processing the first set of protected information includes running a first algorithm on the first set of protected information (reads on running an encryption algorithm, see Davey para 0103 and Figure 5 block 504 and Figure 8 block 804 and Figure 10 block 1002 and claim 1).
Per claim 9, the prior art of record further suggests wherein the processing the second set of protected information includes running a second algorithm on the second set of protected information (reads on running an encryption algorithm, see Davey para 0103 and Figure 5 block 504 and Figure 8 block 804 and Figure 10 block 1002 and claim 8).
Claim 10, directed to a system comprising a processor unit (see Davey para 0191-0192), is analyzed with respect to claim 1.
Claim 11 is analyzed with respect to claim 2.
Claim 12 is analyzed with respect to claim 3.
Claim 13 is analyzed with respect to claim 4.
Claim 14 is analyzed with respect to claim 5.
Claim 17 is analyzed with respect to claim 8.
Claim 18 is analyzed with respect to claim 9.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brian Shaw whose telephone number is (571)270-5191. The examiner can normally be reached on Mon-Thurs from 6:00 AM-3:30 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeff Nickerson can be reached on (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 703-872-9306.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRIAN F SHAW/
Primary Examiner, Art Unit 2432