Detailed Action
This non-final Office action is in response to the amendment filed on 12/08/2025. Claims 1, 13, and 20 have been amended; no claims have been canceled. Claims 1-21 remain pending in the application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/23/2025 has been entered.
Response to Amendment
The amendment filed on 12/08/2025 has been entered. See lines 1-3 above in this Office action.
Response to Arguments
Applicant's remarks filed 10/08/2025 regarding the rejections under 35 U.S.C. § 103, together with the amendments to claims 1, 13, and 20, have been carefully considered and are persuasive. However, upon further consideration, the arguments are moot in view of the new grounds of rejection.
With respect to Applicant's arguments regarding the remaining dependent claims 2-12, 14-19, and 21 on pages 10-15 of the Remarks, Applicant relies on the newly added amendments to independent claims 1, 13, and 20. Please see the Examiner's response above and the details of the rejections below.
Applicant argues (Remarks, pages 10-11):
“Notably, Kacsmar does not disclose parallel or distributed private set intersection. As acknowledged by the Examiner, who relied on other references (such as Ghazaleh and Pinkas)
to reject claims presenting limitations related to parallel computation. See e.g., the rejection of claim 1 on page 5 of the Final Office Action:
[media_image1.png: excerpt reproduced from the rejection of claim 1 in the Final Office Action]
... and the rejection of claim 4 on page 17 of the Final Office Action:
[media_image2.png: excerpt reproduced from the rejection of claim 4 in the Final Office Action]
As Kacsmar does not disclose parallel or distributed PSI, Kacsmar cannot disclose various parallel or distributed PSI-related limitations present in amended claims 1, 13, and 20.”
The Examiner respectfully disagrees; the argument is not persuasive because there is no recitation of the limitation “parallel or distributed private set intersection” in claims 1, 13, and 20. Moreover, Kacsmar does disclose private set intersection between a first party and a second party (see the rejection below).
Applicant further argues (Remarks, pages 10-11):
“the Examiner, who relied on other references (such as Ghazaleh and Pinkas) to reject claims presenting limitations related to parallel computation. See e.g., the rejection of
claim 1 on page 5 of the Final Office Action:
[media_image1.png: excerpt reproduced from the rejection of claim 1 in the Final Office Action]
... and the rejection of claim 4 on page 17 of the Final Office Action:
[media_image2.png: excerpt reproduced from the rejection of claim 4 in the Final Office Action]
The Examiner respectfully disagrees; the argument is not persuasive. The Examiner relied on the Ghazaleh reference for claim 4 because it implicitly discloses the term first party driver node (emphasis added).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 6-9, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over B. Kacsmar et al. ("Differentially Private Two-Party Set Operations," hereafter Kacsmar) in view of McFall et al. (US-11698990-B2, hereafter McFall).
Regarding claim 1, Kacsmar discloses a method comprising performing, by a first party computing system:
tokenizing a first party set of data records, thereby generating a tokenized first party set, wherein the tokenized first party set comprises a plurality of first party tokens; (Kacsmar Fig. 1 and section 6.1: “Our DiPSI protocols use (leveled) homomorphic encryption and proceed in three steps. First, the client encrypts its elements and sends them to the server.”; note: the encrypted elements can be considered tokens),
generating a plurality of tokenized first party subsets by assigning each first party token of the plurality of first party tokens to a tokenized first party subset of the plurality of tokenized first party subsets using an assignment function; (Kacsmar Fig. 1 and section 6.1: “Second, the server uses the evaluation function of homomorphic encryption to evaluate a circuit that computes the DiPSI functionality (either MRR-SI or MLAP-CA) and returns the result to the client. We employ a number of further optimization techniques. As we explained in Section 4, we use hashing-to-bins techniques to place elements into bins before matching them.” and Section 4.4: “One solution to this problem is to organize the set elements into ‘bins’”),
combining the plurality of intersected token subsets, thereby generating an intersected token set; (Kacsmar section 6.4: “comparison is achieved by first computing the vectors for all j ∈ [_] and then performing an AND operation over all of them. The result, denoted Mi_,i, is an encryption of a vector that contains ones in those positions k where xi,k was found in the i_th server batch, and zeroes otherwise. Then, the server homomorphically adds Mi_,i over all the server batches i_ ∈ [μ], such that the resulting vector Mi contains (encrypted) ones where xi,k was found in the server database. Note that all these operations are performed in the encrypted domain, so the server never learns any information about the client database or the intersection X ∩ Y.”),
detokenizing the intersected token set, thereby generating an intersected set of data records. (Kacsmar Fig 1 and section 6.1: “The client decrypts the result”).
Kacsmar appears to be silent regarding the following limitations; however, McFall teaches:
assigning each tokenized first party subset to a corresponding first party worker node of a plurality of first party worker nodes, wherein each first party worker node comprises a different first party computing device; (see MacFall Col.41 lines 34-45: “Each node now generates a candidate token for each of the input values that remain to be tokenised. This converts the set of input values into a map where, as with the vault, the key is the raw input value and the value is the token (or, in this case, candidate token since the assignation is not yet finalised). The input value and vault value maps are then ‘inverted’, so that they are keyed by token rather than value (i.e. the keys are swapped with the values, and the maps go from input→token to token→input). The token→input maps are now shuffled around the cluster to group them by token (this is achieved by partitioning both maps by the token”);
generating a plurality of intersected token subsets by (see MacFall Col.42 lines 35-37: “The input values are joined with the vault value keys after ensuring that the vault entry for a given input resides on the same node as that input value.”):
for each tokenized first party subset of the plurality of tokenized first party
subsets, causing the corresponding first party worker node to (see MacFall Col.41 lines 43-46: “The token→input maps are now shuffled around the cluster to group them by token (this is achieved by partitioning both maps by the token). Each token is then processed using the following logic:”):
[[perform a private set intersection protocol]] with a corresponding second party worker node of a plurality of second party workers nodes from a second party computer system, thereby [[privately determining an intersection between the tokenized first party subset and a tokenized second party subset]] assigned to the corresponding second party worker node, wherein each second party worker node comprises a different second party computing device, the intersection providing an intersected token subset of the plurality of intersected token subsets (see MacFall Col.42 lines 24-45 and lines 51-60: “Each block of the file is loaded by the node on which it resides. The values for each of the columns that are to be tokenised by the rule are combined to form the set of input values for tokenisation. Duplicate values within this set for each block are discarded since each input should only be tokenised once. Each value is then tagged with the ID of the block within which it resides. As previously seen, the vault of token assignations created in the token generation step is also split across nodes, and each node loads its block(s) into memory, as a map of input value (key)→token (value). The input values are joined with the vault value keys after ensuring that the vault entry for a given input resides on the same node as that input value. This involves partitioning (see previous section) both the input dataset and the vault entries and then shuffling the data across the cluster. Note that, since the values within each input data partition were made distinct in step 1, there will be at most a single occurrence of each value within each partition and therefore the maximum number of occurrences of any single input value is equal to the number of input data partitions….The joined tuples of input value, block ID and tokenised value are now partitioned across the nodes of the cluster, this time by the block ID. 
This involves a further shuffle of data across the cluster, returning the input value entries back to their originating node. Now the input→token mapping for every input value exists on the same node as that input, and the original file may be processed and the tokenised value looked up for each input value. This process may be done independently on each node.”, Col. lines 61-67: “Privitar enables this safe pooling by enabling a central
aggregator to collect the data from each party and then make the pooled data available to each party in a privacy-preserving manner. Where each party's data contains information about common individuals or entities, SecureLink oblivious matching may be used to join these records without revealing the sensitive identifier.”);
McFall does not disclose “perform a private set intersection protocol” or “privately determining an intersection between the tokenized first party subset and a tokenized second party subset”; however, Kacsmar discloses performing a private set intersection protocol, privately determining an intersection between the tokenized first party subset and a tokenized second party subset (see Kacsmar Fig. 3 and section 6.4: “the server creates the table Ts by hashing each element y ∈ Y using both H1 and H2, and appending y to both bins Ts[H1(y)] and Ts[H2(y)]. Then, the algorithm proceeds independently for each client batch i ∈ [γ], using a fresh copy T_s of Ts. We describe the matching process for a single client batch below. the server encodes T_s following the same approach as the client (first party subset) (lines 16-19). This encoding generates the plaintext vectors veci_,j (i_ ∈ [μ], j ∈ [_]), where each vector contains the jth bit of all the m elements in the i_th server batch. The algorithm then proceeds to the matching phase. First, the server (second party subset) compares the ith client batch with every server batch i_ ∈ [μ]. This comparison is achieved by first computing the vectors Mi_,i,j = ci,j ⊕ ¬veci_,j for all j ∈ [_] and then performing an AND operation over all of them. The result, denoted Mi_,i, is an encryption of a vector that contains ones in those positions k where xi,k was found in the i_th server batch, and zeroes otherwise. Then, the server homomorphically adds Mi_,i over all the server batches i_ ∈ [μ], such that the resulting vector Mi contains (encrypted) ones where xi,k was found in the server database”).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Kacsmar, teaching “Instead of encrypting each client element x ∈ X individually, we first encode the client’s data by spreading the bits of each client element over multiple ciphertext vectors. This greatly simplifies the computational cost of element comparison on the server side, since it avoids rotation operations” (see Kacsmar section 6.1), with McFall, teaching “This involves partitioning (see previous section) both the input dataset and the vault entries and then shuffling the data across the cluster. Note that, since the values within each input data partition were made distinct in step 1, there will be at most a single occurrence of each value within each partition and therefore the maximum number of occurrences of any single input value is equal to the number of input data partitions.” (see McFall Col. 42, lines 37-45). The motivation would have been to modify Kacsmar's teaching to incorporate the teaching of McFall: “This avoids a common problem with such input data shuffling schemes where the presence of a single massively over-represented value can result in almost all of the data being sent to a single partition during the shuffle and causing that node to run out of memory (the so-called ‘hot value’ problem.”, (see McFall Col. 42, lines 45-50).
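For illustration only (not part of the claim mapping above), the overall flow recited in claim 1 (tokenize, assign tokens to subsets, intersect each pair of corresponding subsets, combine, detokenize) can be sketched as follows. Plain set intersection stands in for the private set intersection protocol, and all names, the subset count, and the hash choice are hypothetical:

```python
import hashlib

def tokenize(records):
    # Hash-based tokenization; SHA-256 stands in for the tokenization scheme.
    return {hashlib.sha256(r.encode()).hexdigest(): r for r in records}

def assign(token_map, n):
    # Assignment function: deterministically map each token to one of n subsets.
    subsets = [set() for _ in range(n)]
    for t in token_map:
        subsets[int(t, 16) % n].add(t)
    return subsets

first = tokenize(["alice", "bob", "carol"])   # first party records
second = tokenize(["bob", "carol", "dave"])   # second party records
n = 4
# Per-subset intersection (plain intersection stands in for the PSI protocol,
# which would run on paired first/second party worker nodes).
intersected = [a & b for a, b in zip(assign(first, n), assign(second, n))]
token_set = set().union(*intersected)         # combine the intersected subsets
result = {first[t] for t in token_set}        # detokenize via first party's map
assert result == {"bob", "carol"}
```

Because both parties apply the same deterministic tokenization and assignment function, matching records always land in subsets with the same index, which is what allows the per-subset intersections to be computed independently.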
Regarding claim 20: claim 20 is a computer system claim that recites limitations similar to those of method claim 1 and is rejected based on the same rationale as claim 1 (see Kacsmar abstract: “protocol produces differentially private output for set intersection and set intersection cardinality that is optimal in terms of communication and computation complexity.”; Introduction: “we improve over those protocols in communication complexity and memory consumption. For large circuits the memory consumption is commonly the bottleneck. our communication cost when comparing m = 4 096 client elements to n = 10^6 server elements is only 232MB whereas other circuit-based approaches.”; section 8.1: “Our experiments were executed on a single threaded process on an Intel E5-2650 clocked at 2.00 GHz with access to 256GB of main memory”).
Regarding claim 3, Kacsmar in view of McFall discloses the method of claim 1. Kacsmar further discloses, for each tokenized first party subset of the plurality of tokenized first party subsets:
determining a padding value, the padding value comprising a difference between a size of the tokenized first party subset and a target value; (see Kacsmar section 4.5.3: “dual hashing, consists of two hash functions. A table contains m bins and unlike cuckoo hashing each bin contains an agreed upon γ number of bin elements where γ ≥ 1. For any element x, compute H1(x) and H2(x). Append the value x to whichever bin, T[H1(x)] or T[H2(x)], contains the fewer number of elements.”);
generating a plurality of random dummy tokens, the plurality of random dummy tokens comprising a number of random dummy tokens equal to the padding value; (see Kacsmar section 4.5.3 : “After hashing all of the inputs, the client appends dummy elements for each bin in T until all bins hold γ elements so as not to leak information about the data set in this manner.”);and
assigning the plurality of random dummy tokens to the tokenized first party subset. (see Kacsmar Section 6.3.1: “Recall that if a bin does not hold γ elements at the conclusion of the initial hashing, dummy elements are added to that bin, and any
other such bin, until all bins do contain γ elements.”).
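For illustration only (not part of the record), the dual-hashing-and-padding technique quoted above from Kacsmar sections 4.5.3 and 6.3.1 can be sketched as below; the bin count m, the target γ, and the hash construction are hypothetical:

```python
import hashlib
import secrets

def h(x: str, salt: str, m: int) -> int:
    # Illustrative hash H_salt mapping an element to one of m bins.
    return int(hashlib.sha256((salt + x).encode()).hexdigest(), 16) % m

def dual_hash_with_padding(elements, m=8, gamma=4):
    # Place each element in the less-full of its two candidate bins
    # (dual hashing), then pad every bin with random dummy tokens until
    # all bins hold exactly gamma elements, hiding the true bin loads.
    bins = [[] for _ in range(m)]
    for x in elements:
        b1, b2 = h(x, "H1", m), h(x, "H2", m)
        target = b1 if len(bins[b1]) <= len(bins[b2]) else b2
        bins[target].append(x)
    for b in bins:
        while len(b) < gamma:
            b.append("dummy_" + secrets.token_hex(8))  # random dummy token
    return bins

bins = dual_hash_with_padding(["alice", "bob", "carol"])
assert all(len(b) == 4 for b in bins)  # every bin padded to gamma elements
```

The padding value for each bin is the difference between its current size and the target γ, matching the claim 3 limitation of generating a number of random dummy tokens equal to that difference.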
Regarding claim 6, Kacsmar in view of McFall discloses the method of claim 1. Kacsmar further discloses retrieving the first party set from a first party database (see Kacsmar Section 5: “It achieves differential privacy by using a secure implementation of the randomized response mechanism over the result of X ∩ Y. which is run by the server, takes an encrypted version of the client database and compares it to a plaintext version of the server database.”).
Regarding claim 7, Kacsmar in view of McFall discloses the method of claim 1. Kacsmar further discloses wherein the first party set comprises a plurality of first party elements, wherein the plurality of first party tokens comprise a plurality of hash values, and wherein tokenizing the first party set comprises generating the plurality of hash values by hashing each first party element using a hash function (see Kacsmar: “The client hashes each element x in its set X using the process described in Section 4.5.3 Dual Hash Function: A table contains m bins and unlike cuckoo hashing each bin contains an agreed upon γ number of bin elements where γ ≥ 1. For any element x, compute H1(x) and H2(x). Append the value x to whichever bin, After the hashing process, the client has a table Tc containing m bins that each hold γ elements”).
Regarding claim 8, Kacsmar in view of McFall discloses the method of claim 1. Kacsmar further discloses wherein the first party set and the second party set comprise an equal number of elements (see Kacsmar Section 4.4: “organize the set elements into “bins” within a table, and then perform the comparison only between elements within the same bins. Such solutions are called hashing-to-bins techniques, since they use hash functions to assign elements to bins within a table…. consists of bins and each bin contains an agreed upon γ number of bin elements. If the client and the server agree upon appropriate hash functions in advance, then it is only necessary to compare inputs that are mapped to the same bins. Note that for both the server and client it is necessary to always send the γ number of bin elements.”).
Regarding claim 9, Kacsmar in view of McFall discloses the method of claim 1. Kacsmar further discloses wherein the intersected token set comprises a union of the plurality of intersected token subsets, and wherein combining the plurality of intersected token subsets comprises determining the union of the plurality of intersected token subsets (see Kacsmar Section 4.4: “set elements into “bins” within a table, and then perform the comparison only between elements within the same bins. Such solutions are called hashing-to-bins techniques, since they use hash functions to assign elements to bins within a table…the techniques we use for hashing to bins exclusively consist of employing two hash functions to place inputs in one of two tables or in a secondary storage location called the stash. Each table consists of bins and each bin contains an agreed upon γ number of bin elements.”).
Regarding claim 12, Kacsmar in view of McFall discloses the method of claim 1. Kacsmar further discloses wherein the plurality of tokenized first party subsets and a plurality of tokenized second party subsets comprise a predetermined number of subsets, wherein each tokenized first party subset is associated with a numeric identifier between one and the predetermined number of subsets, inclusive (see Kacsmar section 4.4: “hashing-to-bins techniques, since they use hash functions to assign elements to bins within a table exclusively consist of employing two hash functions to place inputs in one of two tables or in a secondary storage location called the stash. Each table consists of bins and each bin contains an agreed upon γ number of bin elements. If the client and the server agree upon appropriate hash functions in advance, then it is only necessary to compare inputs that are mapped to the same bins. Note that for both the server and client it is necessary to always send the γ number of bin elements.”); the Examiner interprets that the predetermined number of subsets is based on the agreement between both parties, in this case the client and the server, on the number of bins, and that the numeric identifier is the γ number of bin elements; wherein the assignment function is a hash function that produces hash values between one and the predetermined number of subsets, inclusive (see Kacsmar section 6.1: “use hashing-to bins techniques to place elements into bins before matching them we first encode the client’s data by spreading the bits of each client element over multiple ciphertext vectors.”); and wherein assigning each first party token of the plurality of first party tokens to one of the plurality of tokenized first party subsets comprises:
for each first party token of the plurality of first party tokens:
generating a hash value using the first party token as an input to the hash function; (Kacsmar section 4.5.3: “A table contains m bins and unlike cuckoo
hashing each bin contains an agreed upon γ number of bin elements where γ ≥ 1. For any element x, compute H1(x) and H2(x). Append the value x to whichever bin, T[H1(x)] or T[H2(x)], contains the fewer number of elements. The client hashes each element x in its set X using the process described in Section 4. use hashing-to bins techniques to place elements into bins before matching them.”),and
assigning the first party token to the tokenized first party subset with the numeric identifier equal to the hash value. (See Kacsmar section 4.5.3: “After hashing all of the inputs, the client appends dummy elements for each bin in T until all bins hold γ elements so as not to leak information about the data set in this manner.”).
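For illustration only, the assignment function recited in claim 12 (a hash function producing a numeric identifier between one and the predetermined number of subsets, inclusive) can be sketched as follows; NUM_SUBSETS and the choice of SHA-256 are hypothetical:

```python
import hashlib

NUM_SUBSETS = 16  # hypothetical predetermined number of subsets agreed by both parties

def assign_subset(token: str) -> int:
    # Hash the token and reduce to a value between 1 and NUM_SUBSETS, inclusive.
    digest = int(hashlib.sha256(token.encode()).hexdigest(), 16)
    return (digest % NUM_SUBSETS) + 1

# Each token is assigned to the subset whose numeric identifier equals its hash value.
subsets = {i: [] for i in range(1, NUM_SUBSETS + 1)}
for tok in ["tok_a", "tok_b", "tok_c"]:
    subsets[assign_subset(tok)].append(tok)
assert all(1 <= assign_subset(t) <= NUM_SUBSETS for t in ["tok_a", "tok_b", "tok_c"])
```

Because the function is deterministic, both parties applying it to equal tokens obtain the same subset identifier, so matching tokens meet in the same per-subset comparison.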
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over B. Kacsmar et al. ("Differentially Private Two-Party Set Operations," hereafter Kacsmar), in view of McFall et al. (US-11698990-B2 hereafter McFall), in further view of Consens et al. (US-6507846-B1 hereafter Consens).
Regarding claim 2, Kacsmar in view of McFall discloses the method of claim 1. Kacsmar in view of McFall does not explicitly teach, however Consens teaches, wherein the assignment function comprises a lexicographical ordering function, and wherein generating the plurality of tokenized first party subsets by assigning each first party token of the plurality of first party tokens to one of the plurality of tokenized first party subsets using the assignment function (see Figs. 3-4) comprises:
assigning each first party token to a corresponding tokenized first party subset based on a lexicographical ordering of the plurality of first party tokens.
(See Consens col. 3 Lines 47-50: “The index generator system of the preferred embodiment converts a table from a relational database into a token stream by requesting all the tuples of the tables in the data sources.”. col. 4 lines 57-58: “the description of the data structures will be applicable for the entire token stream, or for a subset.”, Col. 1 lines 39-60: “According to a further aspect of the present invention, there is provided an indexing system for structured or semi-structured source data comprising a tokenizer for accepting source data and generating tokens representing the source data, the tokens from the tokenization representing the source data in a relational view, where for tokens representing a subset of the source data, the system generates tokens identifying the table and column of the subset of the data in the relational view of the source data, and an index builder for building index structures based on the tokens generated by the tokenizer, the index builder creating indexes which comprise a set of positional indexes for indicating the position of token data in the source data, a set of lexicographical indexes for indicating the lexicographical ordering of all tokens, the set of lexicographical indexes comprising a sort vector index and a join bit index, associated with the sort vector index, a set of data structures mapping between the lexicographical indexes and the positional indexes, comprising a lexicographic permutation data structure, the index builder creating a temporary sort vector data structure for generating the lexicographic permutation data structure and the sort vector index.”).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Kacsmar in view of McFall's teaching of claim 1 with Consens's teaching “the tokens from the tokenization representing the source data in a relational view, where for tokens representing a subset of the source data, the system generates tokens identifying the table and column of the subset of the data in the relational view of the source data, and building index structures based on the tokens generated by the tokenizer, the step of building index structures further comprising the steps of building a set of positional indexes for indicating the position of token data in the source data, building a set of lexicographical indexes for indicating the lexicographical ordering of all tokens, the set of lexicographical indexes comprising a sort vector index and a join bit index,” (see Consens col. 2, lines 11-22).
Claims 4 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over B. Kacsmar et al. ("Differentially Private Two-Party Set Operations," hereafter Kacsmar), in view of McFall et al. (US-11698990-B2, hereafter McFall), in further view of Ghazaleh et al. (US20200327130, hereafter Ghazaleh).
Regarding claim 4, Kacsmar in view of McFall discloses the method of claim 1, but does not explicitly teach, however Ghazaleh teaches, wherein the first party computing system comprises a first party server and a first party computing cluster, the first party computing cluster comprising a first party driver node and the plurality of first party worker nodes.
(See Ghazaleh Fig. 15, par. [0187]: “A server, computing cluster comprising of computing node. For example, the server 1390 can control executing by a computing cluster 1550. computing nodes 1560 for partitioning and grouping data onto the computing nodes (e.g., computing nodes of computing system 1380”); par. [0197]: “a driver container 1600 and one or more executor containers 1650 can be allocated in a computing cluster 1550”; par. [0204]: “a computing cluster (e.g., computing cluster 1550) is used for a Spark® server running a SAS Embedded Process consisting of a Spark® driver program that runs in the Spark® application master container (e.g., driver container 1600) and a set of specialized functions that run in the Spark executors' containers (e.g., executor containers 1650)”.)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Kacsmar in view of McFall's teaching of claim 1 with Ghazaleh's teaching of a server and a computing cluster comprising a Spark® driver program and a plurality of executor containers. The server's “Embedded Process Launcher 1520 launches a driver program which runs in a driver container (e.g., at a given computing node 1560 of the computing cluster 1550). One or more embodiments, are useful for keeping data secure because the data does not need to leave the computing cluster, but instead computer programs can be executed on the data within a computing cluster (e.g., computing cluster 1550)”, (see Ghazaleh par. [0194]).
Regarding claim 21, Kacsmar in view of McFall discloses the limitations of claim 20. Kacsmar in view of McFall does not explicitly teach, however Ghazaleh teaches, further comprising a computing cluster comprising a driver node and the plurality of first party worker nodes, wherein the first processor and the first non-transitory computer readable medium correspond to a first party server (see Ghazaleh Fig. 15, par. [0187]: “A server, computing cluster comprising of computing node. For example, the server 1390 can control executing by a computing cluster 1550. The computing cluster 1550 can comprise various computing nodes 1560 for partitioning and grouping data onto the computing nodes”), wherein the driver node comprises a second processor and a second non-transitory computer readable medium coupled to the second processor (see Ghazaleh par. [0197]: “a driver container 1600 and one or more executor containers 1650 can be allocated in a computing cluster 1550.”), wherein each first party worker node of the plurality of first party worker nodes comprises a third processor of a plurality of third processors and a third non-transitory computer readable medium of a plurality of third non-transitory computer readable mediums, each third non-transitory computer readable medium coupled to a corresponding third processor (see Ghazaleh par. [0204]: “a computing cluster (e.g., computing cluster 1550) is used for a Spark® server running a SAS Embedded Process consisting of a Spark® driver program that runs in the Spark® application master container (e.g., driver container 1600) and a set of specialized functions that run in the Spark executors' containers (e.g., executor containers 1650).”; par. [0209]: “Executor Container 1650 for input partition data 1700A grouped from stored data 1322. There could be other partitioned data (e.g., 1700B and 1700N) processed in other executor containers. Each executor container is executing a given task or unit of work. There could be several tasks within an application. For instance, there could be one or more stages in a given task that are a set of tasks that depend on each other.”).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Kacsmar in view of McFall's teaching of claim 20 with Ghazaleh's teaching “by bringing together data storage (e.g., of big data), processing power of servers and databases like Hadoop® and Spark®, and the intelligence of SAS® products, a user can achieve greater storage capacity, greater parallel processing capabilities, and/or faster data growth and processing time”, (see Ghazaleh par. [0190]).
Claims 5 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over B. Kacsmar et al. ("Differentially Private Two-Party Set Operations," hereafter Kacsmar), in view of McFall et al. (US-11698990-B2, hereafter McFall), in further view of Meng et al. (US 10929402B2, hereafter Meng).
Regarding claim 5, Kacsmar in view of McFall discloses the method of claim 1, but fails to teach wherein the first party set corresponds to a set of join keys corresponding to a private database table join query, and wherein the method further comprises:
receiving one or more filtered second party database tables from the second party computing system;
filtering one or more first party database tables using the intersected set, thereby generating one or more filtered first party database tables;
joining the one or more filtered first party database tables and the one or more filtered second party database tables, thereby generating a joined table.
However, Meng teaches wherein the first party set corresponds to a set of join keys corresponding to a private database table join query, and wherein the method further comprises: (see Meng col.15 line 41-44: “The process begins at operation 610, where a query is received from a client specifying a join of two encrypted tables in a database. In some embodiments, the query may be presented as a query token that indicates the join columns.”),
receiving one or more filtered second party database tables from the second party computing system; (see Meng col. 6 lines 56-67: "the query parser 150 may provide the column identifiers, for example, the join columns for tables A and B, to the encrypted join executor 152, the encrypted join executor 152 may implement or instruct a row pair randomizer 154 to combine in memory random pairs of rows from the two tables 142 and 146"). The examiner interprets that the encrypted join executor 152 receives a filtered second party database table, in this case table 146.
filtering one or more first party database tables using the intersected set, thereby generating one or more filtered first party database tables; (see Meng col. 15 lines 36-48: "first server is caused to obtain two respective rows from the two tables, in order to evaluate the two rows for the join." and lines 52-65: "the operation may be repeated for all combination of respective rows in the two tables. First server is caused to perform a homomorphic join operation on contents of the two rows to generate an encrypted join indicator. the first server may compare the two join attributes in encrypted form, using the homomorphic join operation, without decrypting the contents of the rows. The encrypted representation of the join attributes may be an EHL data structure"). The examiner construes that filtering of the database tables is performed by the first server, which obtains two respective rows from the tables in order to evaluate them for the join; the examiner interprets this as the intersected set. The operation is repeated for all combinations of respective rows in the tables, and the result is an encrypted hash list.
joining the one or more filtered first party database tables and the one or more filtered second party database tables, thereby generating a joined table. (See Meng col. 6 lines 56-67: “The query parser 150 may provide the column identifiers, for example, the join columns for tables A and B, to the encrypted join executor 152. The encrypted join executor 152 may implement or instruct a row pair randomizer 154 to combine in memory random pairs of rows from the two tables 142 and 146 to be joined. The row pair randomizer may obtain, for each row pair, the respective encrypted representations 144 and 148 of the join attributes of the rows. For example, if tables A and B are to be joined based on a column BUYER ID in table A and a column COMPANY ID in table B, the respective encrypted representations of these two attributes are obtained for all combinations of row pairs from the two tables.”), col. 16 lines 18-26: “first server is caused to perform the join of the two rows using the decrypted join indicator, which was decrypted by the second server. the decrypted join indicator may indicate whether the join attributes of the two rows are equal if the join attributes of the two rows are indicated to be equal ( e.g., if the indicator has a value of zero), the two rows are joined. If not, the two rows are excluded from the result set of joined rows.”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Kacsmar in view of McFall teaching of claim 1 with Meng's teaching "because neither server has both the encrypted content and the decryption key, a compromise of one individual server does not compromise the entire encrypted database, and each server is less likely to be targeted for attacks." (see Meng col. 16 lines 40-43). The motivation would have been to modify Kacsmar's teaching to incorporate the teaching of Meng: "the MPC protocol may be used in addition to the query token 440 to further secure the query communication. Under an MPC communication protocol, the server may never learn the mapping used by the client, and the client may never learn the mapping used by the server during the MPC protocol communications. However, both parties are able to collaborate to securely communicate the query columns." (see Meng col. 13 lines 5-10).
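For orientation only, the sequence recited in claim 5 (filter each party's tables by the intersected key set, then join the filtered tables) can be sketched in plaintext. Ordinary set operations stand in for the private protocol here; all names and the sample rows are hypothetical, not taken from any cited reference.

```python
def filter_rows(table, key_col, intersected_keys):
    """Keep only rows whose join key appears in the intersected set."""
    return [row for row in table if row[key_col] in intersected_keys]

def join_tables(table_a, table_b, key_col):
    """Inner-join two already-filtered tables on the shared key column."""
    index = {}
    for row in table_b:
        index.setdefault(row[key_col], []).append(row)
    return [{**a, **b} for a in table_a for b in index.get(a[key_col], [])]

# First party filters its table with the intersected set, then joins it
# with the filtered table received from the second party.
first = [{"k": 1, "x": "a"}, {"k": 2, "x": "b"}]
second = [{"k": 2, "y": "c"}, {"k": 3, "y": "d"}]
intersected = {2}
joined = join_tables(filter_rows(first, "k", intersected),
                     filter_rows(second, "k", intersected), "k")
```

Only rows whose key survives the intersection reach the join, mirroring the claimed filter-then-join order.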
Regarding claim 10, Kacsmar in view of McFall disclose the method of claim 1, but fail to teach further comprising:
prior to tokenizing the first party set:
receiving a request message from an orchestrator computer, the request message indicating the first party set, and retrieving the first party set from a first party database; and
transmitting, to the orchestrator computer, the intersected set, wherein the orchestrator computer transmits the intersected set to a client computer.
However, Meng teaches further comprising:
prior to tokenizing the first party set:
receiving a request message from an orchestrator computer, the request message indicating the first party set, and retrieving the first party set from a first party database; (see Meng col. 12 lines 39-56: "implement a query translator 432. The query translator 432 may be employed to translate a query against tables with unpermuted columns, such as the first query in the figure, to a query with permuted column identifiers, such as the second query … the query token may be parsed by a query parser 416 … may extract the permuted column identifiers from the query token 440, and process the query using the permuted column identifiers").
and
transmitting, to the orchestrator computer, the intersected set, wherein the orchestrator computer transmits the intersected set to a client computer. (See Meng Fig. 4 and col. 12 lines 39-56: "After the query is processed by the database server, the query results 442 are returned to the client 430.").
[Image: media_image3.png (Greyscale)]
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Kacsmar in view of McFall teaching of claim 1 with Meng's teaching "the MPC protocol may be used in addition to the query token 440 to further secure the query communication. Under an MPC communication protocol, the server may never learn the mapping used by the client, and the client may never learn the mapping used by the server during the MPC protocol communications. However, both parties are able to collaborate to securely communicate the query columns." (see Meng col. 12 lines 62-64 and col. 13 lines 5-10).
Claims 13, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Meng et al. (US 10929402 B2, hereafter Meng), in view of B. Kacsmar et al. ("Differentially Private Two-Party Set Operations," hereafter Kacsmar), in further view of McFall et al. (US-11698990-B2, hereafter McFall).
Regarding claim 13, Meng discloses a method comprising performing, by a first party computing system:
receiving a private database table join query, the private database table join query identifying one or more first database tables and one or more attributes; (see Meng col. 15 lines 41-45: “The process begins at operation 610, where a query is received from a client specifying a join of two encrypted tables in a database. In some embodiments, the query may be presented as a query token that indicates the join columns of the two encrypted tables.”).
retrieving the one or more first database tables from a first party database; (Meng col. 15 lines 46-48: “first server is caused to obtain two respective rows from the two tables, in order to evaluate the two rows for the join. the operation may be repeated for all combination of respective rows in the two tables.”).
determining a plurality of first party join keys based on the one or more first database tables and the one or more attributes; (see Meng col. 5 lines 63-65: "In some embodiments, the individual data items or attributes in the tables may be encrypted via an encryption scheme."; col. 15 lines 41-44: "The process begins at operation 610, where a query is received from a client specifying a join of two encrypted tables in a database. In some embodiments, the query may be presented as a query token that indicates the join columns."; and lines 59-63: "the rows may contain, for its respective join attributes, an encrypted representation of those join attributes. In some embodiments, the first server may compare the two join attributes in encrypted form, using the homomorphic join operation, without decrypting the contents of the rows").
filtering the one or more first database tables using the intersected join key set, thereby generating one or more filtered first database tables; (see Meng col. 15 lines 46-48: “first server is caused to obtain two respective rows from the two tables, in order to evaluate the two rows for the join.”);
receiving, from the second party computing system, one or more filtered second database tables; (see Meng col. 16 lines 4-6: “second server is caused to decrypt the encrypted join indicator and provide the decrypted join indicator to the first server.”),
and
combining the one or more filtered first database tables and the one or more filtered second database tables, thereby generating a joined database table. (See Meng col. 6 lines 56-67: “The query parser 150 may provide the column identifiers, for example, the join columns for tables A and B, to the encrypted join executor 152. The encrypted join executor 152 may implement or instruct a row pair randomizer 154 to combine in memory random pairs of rows from the two tables 142 and 146 to be joined. The row pair randomizer may obtain, for each row pair, the respective encrypted representations 144 and 148 of the join attributes of the rows. For example, if tables A and B are to be joined based on a column BUYER ID in table A and a column COMPANY ID in table B, the respective encrypted representations of these two attributes are obtained for all combinations of row pairs from the two tables.”), col. 16 lines 18-26: “first server is caused to perform the join of the two rows using the decrypted join indicator, which was decrypted by the second server. the decrypted join indicator may indicate whether the join attributes of the two rows are equal if the join attributes of the two rows are indicated to be equal ( e.g., if the indicator has a value of zero), the two rows are joined. If not, the two rows are excluded from the result set of joined rows.”).
Meng appears to be silent; however, Kacsmar teaches:
tokenizing the plurality of first party join keys, thereby generating a tokenized first party join key set, wherein the tokenized first party join key set comprises a plurality of first party tokens; (see Kacsmar Fig. 1 and section 6.1: "Our DiPSI protocols use (leveled) homomorphic encryption and proceed in three steps. First, the client encrypts its elements and sends them to the server."; note: the encrypted elements can be considered tokens),
generating a plurality of tokenized first party subsets by assigning each first party token of the plurality of first party tokens to a tokenized first party subset of the plurality of tokenized first party subsets using an assignment function; (see Kacsmar Fig. 1 and section 6.1: "Second, the server uses the evaluation function of homomorphic encryption to evaluate a circuit that computes the DiPSI functionality (either MRR-SI or MLAP-CA) and returns the result to the client. We employ a number of further optimization techniques. As we explained in Section 4, we use hashing-to-bins techniques to place elements into bins before matching them." and section 4.4: "One solution to this problem is to organize the set elements into bins"),
combining the plurality of intersected token subsets, thereby generating an intersected token set; (see Kacsmar section 6.4: "comparison is achieved by first computing the vectors for all j ∈ [_] and then performing an AND operation over all of them. The result, denoted Mi_,i, is an encryption of a vector that contains ones in those positions k where xi,k was found in the i_th server batch, and zeroes otherwise. Then, the server homomorphically adds Mi_,i over all the server batches i_ ∈ [μ], such that the resulting vector Mi contains (encrypted) ones where xi,k was found in the server database.");
detokenizing the intersected token set, thereby generating an intersected join key set; (Kacsmar Fig 1 and section 6.1: “The client decrypts the result”).
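As an illustrative aside, the hashing-to-bins optimization quoted from Kacsmar (the server appends each element y to both bins Ts[H1(y)] and Ts[H2(y)], so each client element need only be compared within its two candidate bins) can be sketched in plaintext. The real protocol performs these comparisons homomorphically over ciphertexts; the hash construction and bin count below are illustrative assumptions only.

```python
import hashlib

NUM_BINS = 8  # illustrative; the protocol parameterizes this

def _bin(salt, x):
    """Deterministic hash of element x into one of NUM_BINS bins."""
    digest = hashlib.sha256((salt + x).encode()).hexdigest()
    return int(digest, 16) % NUM_BINS

def build_server_bins(server_set):
    """Server appends each element y to BOTH bins H1(y) and H2(y)."""
    bins = [[] for _ in range(NUM_BINS)]
    for y in server_set:
        b1, b2 = _bin("h1", y), _bin("h2", y)
        bins[b1].append(y)
        if b2 != b1:
            bins[b2].append(y)
    return bins

def match(client_set, server_bins):
    """Client compares each element only against its two candidate bins,
    instead of against the whole server set."""
    found = set()
    for x in client_set:
        if x in server_bins[_bin("h1", x)] or x in server_bins[_bin("h2", x)]:
            found.add(x)
    return found
```

Because the server inserts into both bins, a shared element is always present in whichever bin the client checks, so no matches are missed while each comparison is confined to a small bin.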
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Meng's teaching "two servers to divide the work of the join operation, so that neither server retains significant knowledge of the data values used to perform the joins. The division of labor is accomplished using a data structure that semantically encrypts a representation of the join attributes in the tables." (see Meng col. 3 lines 1-5) with Kacsmar's teaching "Instead of encrypting each client element x ∈ X individually, we first encode the client's data by spreading the bits of each client element over multiple ciphertext vectors. This greatly simplifies the computational cost of element comparison on the server side, since it avoids rotation operations" (see Kacsmar section 6.1). The motivation would have been to modify Meng's teaching to incorporate the teaching of Kacsmar: "define differentially private set intersection and we propose new protocols using (leveled) homomorphic encryption where the result is differentially private. Our circuit-based approach has an adaptability that allows us to achieve differential privacy, as well as to compute predicates over the intersection such as cardinality. Furthermore, our protocol produces differentially private output for set intersection and set intersection cardinality that is optimal in terms of communication and computation complexity" (see Kacsmar abstract).
Meng in view of Kacsmar appears to be silent; however, McFall teaches:
assigning each tokenized first party subset to a corresponding first party worker node of a plurality of first party worker nodes, wherein each first party worker node comprises a different first party computing device; (see McFall Col. 41 lines 34-45: "Each node now generates a candidate token for each of the input values that remain to be tokenised. This converts the set of input values into a map where, as with the vault, the key is the raw input value and the value is the token (or, in this case, candidate token since the assignation is not yet finalised). The input value and vault value maps are then 'inverted', so that they are keyed by token rather than value (i.e. the keys are swapped with the values, and the maps go from input→token to token→input). The token→input maps are now shuffled around the cluster to group them by token (this is achieved by partitioning both maps by the token");
generating a plurality of intersected token subsets by (see McFall Col. 42 lines 35-37: "The input values are joined with the vault value keys after ensuring that the vault entry for a given input resides on the same node as that input value."):
for each tokenized first party subset of the plurality of tokenized first party subsets, causing the corresponding first party worker node to (see McFall Col. 41 lines 43-46: "The token→input maps are now shuffled around the cluster to group them by token (this is achieved by partitioning both maps by the token). Each token is then processed using the following logic:"):
[[perform a private set intersection protocol]] with a corresponding second party worker node of a plurality of second party worker nodes from a second party computer system, thereby [[privately determining an intersection between the tokenized first party subset and a tokenized second party subset]] assigned to the corresponding second party worker node, wherein each second party worker node comprises a different second party computing device, the intersection providing an intersected token subset of the plurality of intersected token subsets (see McFall Col. 42 lines 24-45 and lines 51-60: "Each block of the file is loaded by the node on which it resides. The values for each of the columns that are to be tokenised by the rule are combined to form the set of input values for tokenisation. Duplicate values within this set for each block are discarded since each input should only be tokenised once. Each value is then tagged with the ID of the block within which it resides. As previously seen, the vault of token assignations created in the token generation step is also split across nodes, and each node loads its block(s) into memory, as a map of input value (key)→token (value). The input values are joined with the vault value keys after ensuring that the vault entry for a given input resides on the same node as that input value. This involves partitioning (see previous section) both the input dataset and the vault entries and then shuffling the data across the cluster. Note that, since the values within each input data partition were made distinct in step 1, there will be at most a single occurrence of each value within each partition and therefore the maximum number of occurrences of any single input value is equal to the number of input data partitions…. The joined tuples of input value, block ID and tokenised value are now partitioned across the nodes of the cluster, this time by the block ID. This involves a further shuffle of data across the cluster, returning the input value entries back to their originating node. Now the input→token mapping for every input value exists on the same node as that input, and the original file may be processed and the tokenised value looked up for each input value. This process may be done independently on each node." and Col. lines 61-67: "Privitar enables this safe pooling by enabling a central aggregator to collect the data from each party and then make the pooled data available to each party in a privacy-preserving manner. Where each party's data contains information about common individuals or entities, SecureLink oblivious matching may be used to join these records without revealing the sensitive identifier.");
McFall does not disclose performing a private set intersection protocol or privately determining an intersection between the tokenized first party subset and a tokenized second party subset; however, Kacsmar discloses performing a private set intersection protocol, privately determining an intersection between the tokenized first party subset and a tokenized second party subset (see Kacsmar Fig. 3 and section 6.4: "the server creates the table Ts by hashing each element y ∈ Y using both H1 and H2, and appending y to both bins Ts[H1(y)] and Ts[H2(y)]. Then, the algorithm proceeds independently for each client batch i ∈ [γ], using a fresh copy T_s of Ts. We describe the matching process for a single client batch below. the server encodes T_s following the same approach as the client (first party subset) (lines 16-19). This encoding generates the plaintext vectors veci_,j (i_ ∈ [μ], j ∈ [_]), where each vector contains the jth bit of all the m elements in the i_th server batch. The algorithm then proceeds to the matching phase. First, the server (second party subset) compares the ith client batch with every server batch i_ ∈ [μ]. This comparison is achieved by first computing the vectors Mi_,i,j = ci,j ⊕ ¬veci_,j for all j ∈ [_] and then performing an AND operation over all of them. The result, denoted Mi_,i, is an encryption of a vector that contains ones in those positions k where xi,k was found in the i_th server batch, and zeroes otherwise. Then, the server homomorphically adds Mi_,i over all the server batches i_ ∈ [μ], such that the resulting vector Mi contains (encrypted) ones where xi,k was found in the server database").
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Meng's teaching, along with Kacsmar's teaching described above, with McFall's teaching "This involves partitioning (see previous section) both the input dataset and the vault entries and then shuffling the data across the cluster. Note that, since the values within each input data partition were made distinct in step 1, there will be at most a single occurrence of each value within each partition and therefore the maximum number of occurrences of any single input value is equal to the number of input data partitions." (see McFall Col. 42 lines 37-45). The motivation would have been to modify Kacsmar's teaching to incorporate the teaching of McFall: "This avoids a common problem with such input data shuffling schemes where the presence of a single massively over-represented value can result in almost all of the data being sent to a single partition during the shuffle and causing that node to run out of memory (the so-called 'hot value' problem)." (see McFall Col. 42 lines 45-50).
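For illustration only, the partition-and-shuffle step quoted from McFall (grouping token→input maps across the cluster by token, so all entries for a given token reach the same node) can be sketched as follows. The partitioner and data shapes are hypothetical assumptions; real cluster shuffles move data over the network.

```python
import hashlib

def node_for(token, num_nodes):
    """Deterministic partitioner: every occurrence of a token, from any
    map, is routed to the same node."""
    return int(hashlib.sha256(token.encode()).hexdigest(), 16) % num_nodes

def shuffle_by_token(token_to_input_maps, num_nodes):
    """Group the entries of several token->input maps by token across
    nodes, mirroring the shuffle step quoted from McFall."""
    nodes = [{} for _ in range(num_nodes)]
    for mapping in token_to_input_maps:
        for token, value in mapping.items():
            nodes[node_for(token, num_nodes)].setdefault(token, []).append(value)
    return nodes
```

Because the partitioner is a pure function of the token, candidate assignments and vault entries for the same token always co-locate, which is what lets each node then resolve its tokens independently.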
Regarding claim 16, Meng in view of Kacsmar and McFall teach the method of claim 13. Meng further teaches transmitting the one or more filtered first database tables to the second party computing system, wherein the second party computing system combines the one or more filtered first database tables and the one or more filtered second database tables, thereby generating the joined database table. (See Meng col. 16 lines 18-26: "first server is caused to perform the join of the two rows using the decrypted join indicator, which was decrypted by the second server. the decrypted join indicator may indicate whether the join attributes of the two rows are equal. if the join attributes of the two rows are indicated to be equal (e.g., if the indicator has a value of zero), the two rows are joined. If not, the two rows are excluded from the result set of joined rows.").
Regarding claim 17, Meng in view of Kacsmar and McFall teach the method of claim 13. Meng further teaches wherein the private database table join query is received from an orchestrator computer, and wherein the method further comprises transmitting the joined database table to the orchestrator computer, wherein the orchestrator computer transmits the joined database table to a client computer. (See Meng col. 12 lines 39-56: "implement a query translator 432. The query translator 432 may be employed to translate a query against tables with unpermuted columns, such as the first query in the figure, to a query with permuted column identifiers, such as the second query … After the query is processed by the database server, the query results 442 are returned to the client 430.").
Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Meng et al. (US 10929402 B2, hereafter Meng), in view of B. Kacsmar et al. ("Differentially Private Two-Party Set Operations," hereafter Kacsmar), in view of McFall et al. (US-11698990-B2, hereafter McFall), in further view of Chavan et al. (US-20170031976-A1, hereafter Chavan).
Regarding claim 14, Meng in view of Kacsmar and McFall teach the method of claim 13. Meng in view of Kacsmar and McFall fail to teach; however, Chavan teaches wherein the private database table join query additionally comprises a "where" clause, and wherein the method further comprises, prior to determining the plurality of first party join keys:
pre-filtering the one or more first database tables based on the where clause.
(See Chavan par. [0115]: “The optimizer may pick a hash join to evaluate the query. When two tables are joined via a hash join, the dimension table is scanned and rows satisfying the where clause predicates for that table are used to create a hash table, based on the join key, in memory.”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Meng in view of Kacsmar and McFall teaching of claim 13 with Chavan's teaching "A bloom filter is a space-efficient probabilistic data structure that can be used to test whether an element is a member of a set or not. If a match is found in the bloom vector, then that row is sent to the hash join operator... If no match is found then the row is discarded... This results in a significant speed-up for joins that have a dominant bloom filter evaluation cost because the cost of computing." (see Chavan par. 0115).
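As an illustrative aside, the hash-join build side quoted from Chavan (scan the dimension table, keep only rows satisfying the WHERE-clause predicates, and build an in-memory hash table on the join key) can be sketched as follows. The data shapes and names are hypothetical.

```python
def build_hash_table(dimension_rows, key_col, where):
    """Scan the dimension table, keep only rows satisfying the
    WHERE-clause predicate, and hash the survivors on the join key."""
    table = {}
    for row in dimension_rows:
        if where(row):
            table.setdefault(row[key_col], []).append(row)
    return table

def probe(fact_rows, key_col, hash_table):
    """Probe the hash table with each fact row's join key."""
    return [{**f, **d}
            for f in fact_rows
            for d in hash_table.get(f[key_col], [])]
```

Pre-filtering before the build step means the hash table holds only qualifying rows, so the probe side never sees rows excluded by the WHERE clause.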
Regarding claim 15, Meng in view of Kacsmar and McFall teach the method of claim 13. Meng further teaches wherein each first party join key of the plurality of first party join keys corresponds to an attribute of the one or more attributes, (see Meng col. 5 lines 63-65: "In some embodiments, the individual data items or attributes in the tables may be encrypted via an encryption scheme."; col. 15 lines 41-44: "The process begins at operation 610, where a query is received from a client specifying a join of two encrypted tables in a database. In some embodiments, the query may be presented as a query token that indicates the join columns."; and lines 59-63: "the rows may contain, for its respective join attributes, an encrypted representation of those join attributes. In some embodiments, the first server may compare the two join attributes in encrypted form, using the homomorphic join operation, without decrypting the contents of the rows").
Meng in view of Kacsmar and McFall fail to teach; however, Chavan teaches: and
wherein tokenizing the plurality of first party join keys comprises:
for each attribute of the one or more attributes, concatenating each first party join key corresponding to the attribute, thereby generating a plurality of concatenated first party join keys; (see Chavan Q5 and par. 0115: "The optimizer may pick a hash join to evaluate the query. When two tables are joined via a hash join, the dimension table is scanned and rows satisfying the where clause predicates for that table are used to create a hash table, based on the join key … the same hashing algorithm is performed on the join column. The join operation then probes the previously built hash table for each value").
and
hashing each concatenated first party join key of the plurality of concatenated first party join keys, thereby generating a plurality of hash values, wherein the tokenized join key set comprises the plurality of hash values. (See Chavan Q5 and par. 0115: "During the hash table creation for the dimension table, a bloom filter is also created based on the join column-the "p_partkey" column in the case of Q5. The bloom filter is then sent as an additional predicate to the scan of the "lineitem" table. After the "where" clause predicates have been applied to the "lineitem" table, the resultant row-set is further pruned by having the join column "(l_partkey)" hashed and probed in the bloom filter. If a match is found in the bloom vector, then that row is sent to the hash join operator. If no match is found then the row is discarded.").
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Meng in view of Kacsmar and McFall teaching of claim 13 with Chavan's teaching "A bloom filter is a space-efficient probabilistic data structure that can be used to test whether an element is a member of a set or not. If a match is found in the bloom vector, then that row is sent to the hash join operator... Based on the foregoing, there is ample opportunity for hashing the values of the distinct dictionary entries of the join key columns and reusing them to improve join performance. For the rows that pass the predicates on the fact table "lineitem", a look-up is performed on the dictionary index of the join key column "l_partkey" to directly obtain the hash value from the materialized stream and use it to probe the bloom filter." (see Chavan par. 0115).
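For illustration only, the bloom-filter pruning step quoted from Chavan (probe each fact-table join key against a bloom filter built from the dimension-table keys; discard rows that cannot match) can be sketched with a minimal bloom filter. The sizes and hash counts are illustrative assumptions; as with any bloom filter, false positives are possible but false negatives are not.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over a bit array stored as
    a Python integer. Membership tests may report false positives but
    never false negatives."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

def prefilter(fact_keys, bloom):
    """Discard fact-table join keys that cannot be on the build side;
    surviving keys proceed to the hash-join probe."""
    return [k for k in fact_keys if bloom.might_contain(k)]
```

Rows discarded here never reach the hash-join operator, which is the speed-up Chavan describes; true matches are always retained.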
Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Meng et al. (US 10929402 B2, hereafter Meng), in view of B. Kacsmar et al. ("Differentially Private Two-Party Set Operations," hereafter Kacsmar), in view of McFall et al. (US-11698990-B2, hereafter McFall), in further view of Roitman et al. (US-20200151173-A1, hereafter Roitman).
Regarding claim 18, Meng in view of Kacsmar and McFall teach the method of claim 13. Meng in view of Kacsmar and McFall fail to teach; however, Roitman teaches: further comprising:
generating a token column comprising tokenized first party join keys of the tokenized first party join key set; (see Roitman par. 0027-0029: “A table 310 includes a plurality of columns, 350-1 through 350-N. Each column includes a plurality of keys (not shown), which are the unique values of data in that particular column. A key may be an integer, a string, and the like. [0028] Each key is used as an input for a hash function 320, which outputs a digest 330. The hash function 320 maps data of arbitrary size to data of fixed size. The digests are then used to generate a hashed table 340, which includes a plurality of columns 360-1 through 360-M, by replacing the original keys which were used as inputs for the hash function 320 with the corresponding digests.”). and appending the token column to the one or more first database tables. (See Roitman par. 0034: “At S410, a digest is determined for each unique key in a first column of a first table. Each digest is an index value determined using a hash function which maps data of arbitrary size to data of a fixed size. The hash function is applied to each unique key in order to determine the corresponding digest for that unique key.”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Meng in view of Kacsmar and McFall teaching of claim 13 with Roitman's teaching "By performing the hash, the memory usage may be reduced from an arbitrary size to a fixed size which is controlled by the user or administrator of the system. Essentially, the size of the digest determines the amount of memory used when performing a JOIN operation… This process also allows for faster creation of new relations because the hash function would not change the actual keys. As a result, defining a new relation in the data model service could be computed with less resources and therefore, new relations may be defined more often." (see Roitman par. 0032).
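As an illustrative aside, the digest-column technique quoted from Roitman (map each arbitrary-size key to a fixed-size digest and substitute the digests for the keys) can be sketched as follows. The digest length and column names are illustrative assumptions, not drawn from Roitman.

```python
import hashlib

def tokenize_key(key, digest_bytes=8):
    """Map an arbitrary-size join key to a fixed-size hex digest."""
    return hashlib.sha256(str(key).encode()).hexdigest()[: digest_bytes * 2]

def append_token_column(rows, key_col, token_col="token"):
    """Append a column holding the digest (token) of each row's join key,
    as in the claimed token column appended to the first database tables."""
    return [{**row, token_col: tokenize_key(row[key_col])} for row in rows]
```

Because equal keys always produce equal digests, joins can be evaluated over the fixed-size token column, which is the memory reduction Roitman describes.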
Regarding claim 19, Meng in view of Kacsmar, McFall, and Roitman teach the method of claim 18. Meng further teaches wherein filtering the one or more first database tables using the intersected join key set comprises:
removing one or more rows from the one or more first database tables based on the token column, the one or more rows corresponding to one or more tokenized first party join keys that are not in the intersected join key set. (See Meng col. 15 lines 46-48: "first server is caused to obtain two respective rows from the two tables, in order to evaluate the two rows for the join." and lines 56-58: "the first server is caused to perform a homomorphic join operation on contents of the two rows to generate an encrypted join indicator.").
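For illustration only, the row-removal step recited in claim 19 (drop rows whose token-column value is absent from the intersected set) can be sketched in plaintext; the row layout and names are hypothetical.

```python
def filter_by_intersected_tokens(rows, token_col, intersected_tokens):
    """Remove rows whose token is not in the intersected set, keeping
    only rows whose tokenized join key survived the intersection."""
    return [row for row in rows if row[token_col] in intersected_tokens]

rows = [{"id": 1, "token": "aa"}, {"id": 2, "token": "bb"}]
kept = filter_by_intersected_tokens(rows, "token", {"bb"})
```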
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Bodziony et al. (US-11080280-B2): The encryption function is determined to be homomorphic to sorting operators. A decryption function that decrypts the encrypted keys is determined to be homomorphic to sorting operators. Responsive to the encryption and decryption functions being determined to be homomorphic, a merge join operation is selected. The merge join operation operates on the first database table and a second database table and includes the decryption function in a joining condition. Using the merge join operation, an execution of a query is optimized. The query accesses one or more data items in the first or second database table.
Mattson et al. (US-20140177825-A1): Each secure computer system can perform one or more encoding operations, such as tokenization operations, encryption operations, or data modifications, and encodes data using one or more encoding components (such as token tables, encryption keys, initialization vectors, and the like) that are inaccessible to the other secure computer systems.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUILIO MUNGUIA whose telephone number is (571) 270-5277. The examiner can normally be reached M-F 9:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A Shiferaw can be reached on (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUILIO MUNGUIA/ Examiner, Art Unit 2497 /ELENI A SHIFERAW/Supervisory Patent Examiner, Art Unit 2497