DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In response to the Non-final Office Action mailed on 6/6/2025, Applicant filed an amendment on 9/8/2025. In this reply, Applicant has amended independent claims 1, 21, and 22 to further specify that the word segment information is reported to a server for a data mining operation and comprises a target vector determined based on a target hash value in a method of satisfying differential privacy, among other minor amendments.
Applicant has also argued that the prior art of record fails to teach a prefix tree in which the nodes both represent the word segments and record a frequency of the word segments, because Wang only discloses how to find longer prefixes and iterate until the prefix group gives the set of frequent values (Remarks, Pages 11-12). These arguments have been fully considered but are not found to be persuasive for the reasons noted in the Response to Arguments section below.
Applicant indicates that the present amendments correct the previously noted antecedent basis issues under 35 U.S.C. 112(b) and requests withdrawal of the rejection (Remarks, Pages 7-8). In response, because the amended claims resolve the previously noted antecedent basis issues, the 35 U.S.C. 112(b) rejection has been withdrawn.
In response to the patent subject matter eligibility rejection under 35 U.S.C. 101, Applicant argues that differential privacy processing performed at a network server is a technical field and that Applicant's invention leads to an improvement in this field by allowing data mining access to protected data securely at a server (Remarks, Pages 8-9).
These arguments have been fully considered, and the specification confirms that Applicant's amended invention does lead to the recited improvement (see Specification, Paragraphs 0001-0002). Accordingly, independent claims 1, 21, and 22 and their associated dependent claims are found to be patent eligible under Step 2A, Prong Two, and the 35 U.S.C. 101 rejection has been withdrawn.
Response to Arguments
Applicant's arguments have been fully considered but they are not persuasive:
With respect to the rejection of independent claim 1 under 35 U.S.C. 102(a)(1) as being anticipated by Wang, et al. ("Locally Differentially Private Heavy Hitter Identification," 2017), Applicant first attempts to draw a distinction between Wang and the claimed invention by arguing that Wang fails to teach a prefix tree in which the nodes both represent the word segments and record a frequency of the word segments, because Wang only discloses how to find longer prefixes and iterate until the prefix group gives the set of frequent values (Remarks, Pages 11-12).
In response, and as noted in the Non-final Action (see Pages 9-10), the examiner understands the point raised by Applicant, because the terms "tree" and "trie" do not appear in Wang, but respectfully disagrees with Applicant's position. While Wang does not specifically use the term "trie" or "prefix trie," Section IV.A., Pages 5-6 of Wang describes a first layer of shorter prefixes in which the identified "frequent values" make up a set C1. Each of the prefixes selected at this first (1) layer is then extended at, or connected to, a second layer (2), where extended prefixes are considered and a set of the most frequent extended prefixes is selected at the nodes of that second layer (i.e., a candidate prefix in C1 is extended to a prefix node in C2 at layer 2). Importantly, Wang also describes that the prefixes of each layer are used to "construct" the candidate set of the next, extended layer. In this manner, Wang describes a process that selects the most frequent entries (nodes) at each layer, recording those frequent values/heavy hitters to form a set C1 comprising prefix word segments, and then extends each of those prefixes at a second layer, where the most frequent extensions are again selected to construct C2, in an iterating process. Accordingly, this layered, connected structure in Wang constitutes a tree/trie structure in which frequency values of the prefix word segments are considered and recorded at the entries/nodes of each layer of the structure in order to find frequent prefixes in a heavy hitter statistical analysis (see also Section I, Page 1).
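To illustrate the examiner's reading of Wang's layered construction as a prefix tree/trie, the following sketch outlines a layered prefix-extension process of the kind described in Section IV.A. of Wang. The sketch is illustrative only; the function and variable names (e.g., build_prefix_layers, estimate_frequencies) are the examiner's own illustrative assumptions and are not drawn from Wang or the claims, and the simple counting-based stand-in for Wang's frequency oracle is a simplification.

from collections import Counter

def estimate_frequencies(candidates, reports):
    # Stand-in for Wang's frequency oracle: for illustration, the reports are
    # treated as plaintext strings and simply counted by prefix. A real LDP
    # frequency oracle would instead produce debiased estimates from the
    # locally perturbed reports of one user group.
    k = len(candidates[0])
    counts = Counter(report[:k] for report in reports)
    return {c: counts.get(c, 0) for c in candidates}

def build_prefix_layers(groups, alphabet, top_k):
    # Layer-by-layer construction of the kind described in Wang: C1, C2, ..., Cg.
    candidates = [""]                  # conceptual root (empty prefix)
    layers = []
    for group_reports in groups:       # one user group per layer
        # extend every surviving prefix by one more symbol/word unit
        extended = [p + s for p in candidates for s in alphabet]
        freqs = estimate_frequencies(extended, group_reports)
        # keep only the most frequent candidates; each kept entry acts as a
        # node that represents the prefix and records its estimated frequency
        kept = sorted(freqs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
        layers.append(dict(kept))
        candidates = [p for p, _ in kept]
    return layers                      # final layer holds full-length heavy hitters

Under these simplifying assumptions, build_prefix_layers(groups, alphabet=["0", "1"], top_k=8) would return one dictionary of prefix-to-frequency entries per layer, with each layer's kept prefixes extending those kept at the previous layer.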
Applicant next argues that Wang does not disclose "generating, layer by layer based on the each group of estimated data, each layer of nodes of a prefix tree used to record a word segment frequency, wherein generating an nth layer of nodes comprises: obtaining each (n-1)-tuple word segment represented by each node at an (n-1)th layer, wherein an (n-1)-tuple word segment represented by any node at the (n-1)th layer is formed by sequentially arranging word units corresponding to a root node to the any node." Applicant then provides an illustration of the concept and appears to link the alleged lack of sequential construction/arrangement to the earlier argument regarding a lack of recording word segment frequency (see Remarks, Pages 11-12).
In response, it is noted that this further-argued limitation was addressed in the remarks above directed towards the first argument regarding Claim 1. In particular, the citations of Wang listed above explain how a tree structure is disclosed despite the absence of the term "tree" or "trie": word segment prefix entries/nodes at each layer are sequentially considered and have recorded frequency values used to find a set of heavy hitters, which leads into a next sequential layer that forms longer prefix word segments also having assigned frequency values used to find the heavy hitters at the n-1/C2 layer in an iterative process. In this manner, the examiner maintains that Wang teaches the second argued limitation of Claim 1.
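To further illustrate the examiner's understanding of the "sequentially arranging word units corresponding to a root node to the any node" language, the following sketch shows how the word segment represented by a node can be recovered by collecting word units along the path from the root to that node. The Node structure and its field names are hypothetical and are offered for illustration only; they are not taken from Wang or the claims.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    unit: str                          # word unit represented by this node
    freq: float = 0.0                  # recorded frequency of the represented segment
    parent: Optional["Node"] = None    # None only for the root

def segment_for_node(node: Node) -> list:
    # Walk from the node back to the root, collecting word units, then reverse
    # so the units appear in root-to-node order (the claimed sequential arrangement).
    units = []
    while node.parent is not None:     # the root itself carries no word unit
        units.append(node.unit)
        node = node.parent
    return list(reversed(units))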
Accordingly, Applicant's arguments directed towards claim 1 are not found to be persuasive, and the 35 U.S.C. 102(a)(1) rejection has been maintained.
The prior art rejections of the remaining independent and dependent claims have been traversed for reasons similar to those presented for independent claim 1 (see Remarks, Pages 13-14). With regard to such arguments, see the response directed towards independent claim 1 above.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 4, and 8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang, et al. ("Locally Differentially Private Heavy Hitter Identification," 2017).
With respect to Claim 1, Wang discloses:
A method for estimating a word segment frequency in differential privacy protection data (“we propose an LDP protocol for identifying heavy hitters,” Abstract, wherein “heavy hitters” pertains to the most frequent strings in a database and LDP is an acronym that represents local differential privacy), applied to a server (preambular limitation indicating intended use environment for method practice that does not place patentable limits on the body of the claim), wherein the method comprises:
obtaining each piece of word segment information that is reported to the server by a terminal device for a data mining operation at the server and that is subject to local differential privacy processing (LDP processing, Section II.A., Page 2 wherein inputs to the heavy hitter processing take the form of keywords that are provided by a user device to a remote service requiring a central computer that gathers data from user devices (e.g., website browsing), Section I, Page 1 and Section VI.A.2., Page 10; and the inputs are obtained in segments from user groups, Section 1, Pages 1-2), wherein any piece of word segment information corresponds to one word segment (inputs to the heavy hitter processing take the form of keywords that are provided by a user device to a remote service (e.g., website browsing), Section VI.A.2., Page 10; and the inputs are obtained in segments from user groups, Section 1, Pages 1-2), and comprises a target vector determined based on a target hash value in a method of satisfying differential privacy (the input is associated with values/vector that is subjected to local differential privacy processing that results in perturbed data that protects a user’s privacy at an aggregator, Section II.A., Page 2; see also hashing using a hash function in Section II.B., Page 3 ) and a target quantity that represents a quantity of word units comprised in the word segment, and the target quantity is less than or equal to a predetermined value N ("An important parameter in this process is the segment length η," Section 1, Page 1; see also Section IV.A., Pages 5-6- "divide it into equal-length segments"; and Section VI, Page 10- section regarding the number of groups/segment length);
obtaining through division N groups of word segment information, so each piece of word segment information of a same group of the N groups corresponds to a respective one of the same target quantity ("An important parameter in this process is the segment length η," Section 1, Page 1; see also Section IV.A., Pages 5-6- "divide it into equal-length segments"; and Section VI, Page 10- section regarding the number of groups/segment length; Section IV.A., Pages 5-6- "user is randomly assigned into one of g groups" and "continues until the last step where Cg gives the set of frequent values" thus the groups are "g" in Wang mapped to the claimed N groups);
determining each group of estimated data that is corresponding to each group of word segment information and that represents unbiased word segment frequency estimation (Section 1, Page 1 regarding the heavy hitter analysis- “The aggregator uses reports from the first group to finds C1, the set of frequent prefixes, and then uses reports from the second group to find C2, considering candidates that have prefixes in C1. The aggregator iterates this process until finding the set of frequent values;” Section III.A., Pages 3-4- “a top-k heavy hitter if its frequency fx…is ranked among top k frequencies of all possible values;” see also discussion of frequent values for each segment through a prefix trie, Section IV.A., Pages 5-6; Section II.B.1., Pages 2-3 discussing frequency oracles that determine the frequency of each value);
generating, layer by layer based on the each group of estimated data, each layer of nodes of a prefix tree used to record a word segment frequency (Section 1, Page 1 regarding the heavy hitter analysis- “The aggregator uses reports from the first group to finds C1, the set of frequent prefixes, and then uses reports from the second group to find C2, considering candidates that have prefixes in C1. The aggregator iterates this process until finding the set of frequent values;” see also discussion of frequent values for each segment through a prefix trie, Section IV.A., Pages 5-6; note that Wang regards determining heavy hitter frequencies at a first level C1 followed by C2 where the prefixes of this second layer depend upon/have a prefix in the prior layer. Wang also refers to prefix processing. Prefix processing as well as the dependencies between layers/levels is indicative of a tree-structure processing until heavy hitters of an entire word string is determined), wherein generating an nth layer of nodes comprises:
obtaining each (n-1)-tuple word segment represented by each node at an (n-1)th layer, wherein an (n-1)-tuple word segment represented by any node at the (n-1)th layer is formed by sequentially arranging word units corresponding to a root node to the any node (Section 1, Page 1 regarding the heavy hitter analysis- “The aggregator uses reports from the first group to finds C1, the set of frequent prefixes, and then uses reports from the second group to find C2, considering candidates that have prefixes in C1. The aggregator iterates this process until finding the set of frequent values;” see also discussion of frequent values for each segment through a prefix trie, Section IV.A., Pages 5-6; note that Wang regards determining heavy hitter frequencies at a first level C1 (i.e., n-1 layer) followed by C2 where the prefixes of this second layer depend upon/have a prefix in the prior layer corresponding to the claimed “sequentially arranging” limitation. Wang also refers to prefix processing. Prefix processing as well as the dependencies between layers/levels is indicative of a tree-structure processing until heavy hitters of an entire word string is determined);
determining a plurality of candidate n-tuple word segments for the nth layer of nodes based on the each (n-1)-tuple word segment (Section IV, Pages 5-6- “Let D1 = {0, 1}γ+η, the aggregator uses the first group’s reports to identify which values in D1 are frequent prefixes. Let C1 be the result. It then constructs D2 = C1 × {0, 1}η, which are candidates for longer frequent prefixes, and uses the second group’s reports to identify the frequent ones in D2 as C2. This continues until the last step where Cg gives the set of frequent values.” Note that since Wang relies upon a prefix tree structure to identify frequent values, the n-1 or prior layer nodes are relied upon until the final g layer is reached and overall frequency/heavy hitter word results are identified. See Section VI.A.2., Page 10);
calculating frequency salient distribution information of each of the plurality of candidate n-tuple word segments based on an nth group of estimated data corresponding to a target quantity n (Section IV, Pages 5-6- “Let D1 = {0, 1}γ+η, the aggregator uses the first group’s reports to identify which values in D1 are frequent prefixes. Let C1 be the result. It then constructs D2 = C1 × {0, 1}η, which are candidates for longer frequent prefixes, and uses the second group’s reports to identify the frequent ones in D2 as C2. This continues until the last step where Cg gives the set of frequent values.” See also Section II.B.1., Pages 2-3, and Section V., Page 8, discussing frequency oracles that determine the frequency of each value; note that heavy hitters are identified based upon statistical frequency); and
selecting, based on the frequency salient distribution information, several of the plurality of candidate n-tuple word segments as n-tuple word segments represented by the nth layer of nodes, and recording, by using each node at the nth layer, a frequency of an n-tuple word segment represented by the nth layer of nodes, wherein 1<n<N (heavy hitters at each level of the prefix trie are selected until the final layer is reached, which gives an overall frequency score indicative of a full word string heavy hitter- Section 1, Page 1 regarding the heavy hitter analysis- “The aggregator uses reports from the first group to finds C1, the set of frequent prefixes, and then uses reports from the second group to find C2, considering candidates that have prefixes in C1. The aggregator iterates this process until finding the set of frequent values;” see also discussion of frequent values for each segment through a prefix trie, Section IV.A., Pages 5-6; Section III.A., Page 4 discussing heavy hitter selection of the top k values).
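Regarding the "unbiased word segment frequency estimation" limitation mapped above and the frequency-oracle discussion cited from Section II.B.1 of Wang, the following is a minimal sketch of the generic debiasing step that randomized-response-style LDP frequency oracles perform. The constants p and q depend on the particular protocol and privacy budget; the function name and its parameters are illustrative assumptions, not a quotation of Wang or of the claims.

def unbiased_frequency_estimate(support_count, n, p, q):
    # support_count: reports in the group that "support" a candidate value
    # n:             total number of reports in the group
    # p:             probability that a true holder of the value reports support
    # q:             probability that a non-holder nevertheless reports support
    # The returned value is an unbiased estimate of the candidate's true frequency.
    return (support_count / n - q) / (p - q)

For example, with p = 0.75 and q = 0.25, a group of 1,000 perturbed reports in which 400 support a candidate yields an estimated frequency of (0.4 - 0.25) / 0.5 = 0.3.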
With respect to Claim 4, Wang further discloses:
calculating each frequency of each candidate n-tuple word segment based on the nth group of estimated data (Section 1, Page 1 regarding the heavy hitter analysis- “The aggregator uses reports from the first group to finds C1, the set of frequent prefixes, and then uses reports from the second group to find C2, considering candidates that have prefixes in C1. The aggregator iterates this process until finding the set of frequent values;” Section III.A., Pages 3-4- “a top-k heavy hitter if its frequency fx…is ranked among top k frequencies of all possible values;”);
calculating each variance corresponding to the each candidate n-tuple word segment based on the each frequency (calculation of variance of the frequency estimation, Section II.B. Page 3 and Section VI.B.6., Page 11); and
calculating the frequency salient distribution information of the candidate n-tuple word segment based on the each variance (probability density function relying on variance that is used to determine heavy vs. non-heavy hitters as salience/importance, Section IV.B.3., Page 7).
With respect to Claim 8, Wang further discloses:
The method according to claim 1, wherein the target vector represents the word segment, and the target vector is subject to the local differential privacy processing (the input is associated with values/vector that is subjected to local differential privacy processing that results in perturbed data that protects a user’s privacy at an aggregator, Section II.A., Page 2).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-3, 9, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, et al. ("Locally Differentially Private Heavy Hitter Identification," 2017) in view of Nissim Kobliner, et al. (U.S. PG Publication: 2018/0336357 A1).
With respect to Claim 2, Wang teaches the heavy hitter determination for keywords using a prefix trie structure as applied to claim 1. Although the trie structure steps iteratively from a start node to an end layer at which the entire word sequence has been considered for frequency determination, Wang does not specifically describe a root node being an empty character at a 0-th node layer as set forth in claim 2. Nissim Kobliner, however, discloses that a prefix trie begins with an empty j=0 layer (Paragraph 0034; see also Fig. 3, Element 302).
Wang and Nissim Kobliner are analogous art because they are from a similar field of endeavor in heavy hitter analysis using local differential privacy. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date, to add the empty node taught by Nissim Kobliner to the prefix trie analysis taught by Wang in order to provide a predictable result in the form of organizing trie analysis with a uniform starting placeholder for all iterations.
With respect to Claim 3, Wang teaches the heavy hitter determination for keywords using a prefix trie structure as applied to claim 1. While Wang does teach determining, as the plurality of candidate n-tuple word segments, the plurality of n-tuple word segments formed by using the each (n-1)-tuple word segment as a prefix (Section IV, Pages 5-6- “Let D1 = {0, 1}γ+η, the aggregator uses the first group’s reports to identify which values in D1 are frequent prefixes. Let C1 be the result. It then constructs D2 = C1 × {0, 1}η, which are candidates for longer frequent prefixes, and uses the second group’s reports to identify the frequent ones in D2 as C2. This continues until the last step where Cg gives the set of frequent values.” Note that since Wang relies upon a prefix tree structure to identify frequent values, the n-1 or prior layer nodes are relied upon until the final g layer is reached and overall frequency/heavy hitter word results are identified. See Section VI.A.2., Page 10), Wang does not teach that the determined keywords are predetermined word units in a predetermined dictionary. Nissim Kobliner, however, discloses the use of a dictionary in determining word strings (Paragraphs 0013 and 0031; Fig. 3, Element 300).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date, to utilize the dictionary taught by Nissim Kobliner in the heavy hitter determination taught by Wang in order to provide a predictable result in the form of implementing words of interest that can be used to track specific trends such as noted in Wang (Section VI.A.2., Page 10).
With respect to Claim 9, Wang discloses the heavy hitter word analysis using local differential privacy processing (LDP) as applied to Claim 8. Wang does not specifically detail the hash value processing for LDP as recited in Claim 9. Nissim Kobliner, however, discloses:
selecting one hash function from a plurality of predetermined hash functions as a target hash function (selected hash function amongst "one or more hash functions," Paragraphs 0081-0082); and calculating the target hash value of the word segment by using the target hash function (input is mapped "onto a particular hash value," Paragraph 0082; segments related to words, Paragraph 0031).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to utilize the hash value processing taught by Nissim Kobliner in the LDP processing taught by Wang to provide a predictable result in the form of bringing the number of elements to be considered in a heavy hitter algorithm down to a more reasonable number (Nissim Kobliner, Paragraph 0080) for more efficient processing.
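By way of illustration of the claim 9 limitations mapped above, the following sketch shows one way a target hash function could be selected from a predetermined plurality of hash functions and used to calculate the target hash value of a word segment. The salted-digest construction and all names are assumptions made for illustration only; they are not drawn from Wang or Nissim Kobliner.

import hashlib
import random

def make_hash_family(num_functions, num_buckets):
    # A predetermined plurality of hash functions, realized here by salting a
    # standard digest and reducing it modulo a bucket count.
    def make_h(salt):
        def h(word_segment):
            digest = hashlib.sha256(f"{salt}:{word_segment}".encode()).hexdigest()
            return int(digest, 16) % num_buckets
        return h
    return [make_h(i) for i in range(num_functions)]

# Select one hash function from the predetermined plurality as the target hash
# function, then calculate the target hash value of a word segment.
family = make_hash_family(num_functions=16, num_buckets=128)
target_hash_function = random.choice(family)
target_hash_value = target_hash_function("example word segment")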
Claim 21 contains subject matter similar to Claim 1, and thus, is rejected under similar rationale. Also, although Wang addresses the functionality of claim 21 and deals specifically with processing that is computer based, Wang fails to explicitly disclose computer hardware in the form of a memory storing processor-executable instructions and a processor for executing those instructions. Nissim Kobliner, however, discloses heavy hitter determination (See abstract) implemented using a computer processor and memory storing instructions (Paragraph 0091). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the method taught by Wang using the computer hardware taught by Nissim Kobliner in order to provide a predictable result in the form of being able to practice the heavy hitter determination method on general purpose computing hardware.
Claim 22 contains subject matter similar to Claim 1, and thus, is rejected under similar rationale. Also, although Wang addresses the functionality of claim 22 and deals specifically with processing that is computer based, Wang fails to explicitly disclose implementation of the method as a program stored on a non-transitory computer-readable storage medium. Nissim Kobliner, however, discloses heavy hitter determination (see Abstract) implemented using a program stored on a non-transitory computer-readable storage medium (Paragraphs 0091 and 0093). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the method taught by Wang using the non-transitory computer-readable medium embodiment taught by Nissim Kobliner in order to provide a predictable result in the form of being able to practice the heavy hitter determination method on general purpose computing hardware.
Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, et al. in view of Frank, et al. (U.S. PG Publication: 2016/0300252 A1).
With respect to Claim 5, Wang teaches the heavy hitter determination for keywords using a prefix trie structure that includes the determination of frequency, variance, and probability distributions as applied to claim 4. Wang does not teach the determination of z and p values that are then used to select candidate segments as set forth in claim 5. Frank, however, discloses:
calculating each z value corresponding to the each candidate n-tuple word segment based on the each variance (normalization of measured values to z values based on "a variance," Paragraph 0985); and
calculating each p value corresponding to the each candidate n-tuple word segment based on the each z value, as the frequency salient distribution information of the candidate n-tuple word segment (values (normalized above according to a z-score) are used to determine a p-score, Paragraphs 0642 and 1278; note that word segments were taught with respect to claim 1 by Wang);
wherein the selecting, based on the frequency salient distribution information, several candidate n-tuple word segments as n-tuple word segments represented by the nth layer of nodes comprises: selecting, based on the each p value, the several of the plurality of candidate n-tuple word segments as the n-tuple word segments represented by the nth layer of nodes (Wang teaches that frequency significance is used to select heavy hitters, Section IV.B.2., Page 7, while significance is measured by a p-value as taught by Frank, Paragraph 0642).
Wang and Frank are analogous art because they are from a similar field of endeavor in data analysis for determining trends using differential privacy. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to utilize the statistical information taught by Frank in the word analysis of heavy hitters taught by Wang in order to provide a predictable result of better identifying valid trends in statistics by considering a significance value.
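To illustrate the z value and p value calculations mapped above for claims 4 and 5, the following sketch standardizes each candidate segment's estimated frequency using its variance and converts the result to a two-sided p value under a normal approximation. The baseline frequency, the significance level, and all names are illustrative assumptions made by the examiner; they are not drawn from Wang or Frank.

import math

def z_value(freq_estimate, baseline_freq, variance):
    # Standardize a candidate segment's estimated frequency against a baseline
    # (e.g., the frequency expected by chance) using the estimate's variance.
    return (freq_estimate - baseline_freq) / math.sqrt(variance)

def p_value(z):
    # Two-sided p value under a normal approximation: 2 * (1 - Phi(|z|)).
    return math.erfc(abs(z) / math.sqrt(2.0))

def select_significant(candidates, alpha=0.05):
    # candidates: iterable of (segment, freq_estimate, baseline_freq, variance)
    # Keep segments whose estimated frequency is unlikely to be noise.
    kept = []
    for segment, f_hat, f0, var in candidates:
        if p_value(z_value(f_hat, f0, var)) < alpha:
            kept.append(segment)
    return kept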
With respect to Claim 7, Wang further discloses:
using the each node at the nth layer to record a variance and a p value of the n-tuple word segment represented by the node (each candidate prefix node at each layer (1, 2, ..., g) is assessed to assign/record a frequency significance used to determine heavy hitters, including a calculation of variance, Section IV.A., Pages 5-6, Section IV.B.3., Page 7, and Section VI.B.6., Page 11; wherein Frank teaches that significance can be expressed with a p-value).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Wang, et al. in view of Frank, et al., and further in view of Dwork, et al. ("Private False Discovery Rate Control," 2015).
With respect to Claim 6, Wang in view of Frank teaches the heavy hitter determination for keywords using a prefix trie structure that includes the determination of p and z values. Wang in view of Frank does not teach the selection algorithm for p values that are less than a target p value as set forth in claim 6. Dwork, however, discloses:
arranging a p value in ascending order ("sort the p-values in increasing order," Section 1.1., Algorithm 1, page 3);
selecting a maximum p value that satisfies a predetermined condition as a target p value (determining a maximum p value below a predetermined condition threshold qj/m to cease the algorithm and reject other hypotheses, Section 1.1., Algorithm 1, page 3),
wherein any p value that satisfies the predetermined condition is less than or equal to a target result corresponding to the p value, and the target result is a result obtained by dividing a product of a sequence number of the p value in the arrangement and a predetermined threshold set for the nth layer by a quantity of candidate n-tuple word segments (see the equation in Algorithm 1, (qj/m), where j is the sequence number starting at 1 and ending at m, q is a minimum false discovery rate threshold by which j is multiplied, and m, the divisor, is the quantity of segments); and
selecting candidate n-tuple word segments corresponding to the p value that is less than the target p value as the n-tuple word segments represented by the nth layer of nodes (true hypotheses remain for selection after the algorithm is run and false/random chance hypotheses are rejected, Section 1.1., page 3, wherein Wang teaches the word segments at various levels of a prefix trie as applied to claim 1).
Wang, Frank, and Dwork are analogous art because they are from a similar field of endeavor in data analysis for determining trends using differential privacy. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to include the hypothesis management algorithms taught by Dwork into the combination of Wang and Frank in order to provide a predictable result in the form of better controlling false positives in the heavy hitter algorithm taught by Wang.
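To illustrate the ordering and threshold comparison mapped above from Algorithm 1 of Dwork, the following is a minimal sketch of a Benjamini-Hochberg-style step-up selection: the p values are sorted in ascending order, the largest p value that is at most its rank multiplied by q and divided by the number of candidates is taken as the target p value, and the candidate segments at or below that target are kept. The function and variable names are illustrative assumptions and are not drawn from Dwork or the claims.

def select_by_target_p_value(candidates, q):
    # candidates: list of (segment, p_value) pairs for one layer of the tree
    # q:          predetermined false discovery threshold set for the layer
    m = len(candidates)
    ordered = sorted(candidates, key=lambda item: item[1])   # ascending p values
    target = None
    for j, (_, p) in enumerate(ordered, start=1):            # j is the 1-based rank
        if p <= j * q / m:
            target = p                                       # the largest such p wins
    if target is None:
        return []                                            # nothing is significant
    return [segment for segment, p in ordered if p <= target]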
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Fawaz, et al. (U.S. PG Publication: 2013/0212690 A1)- teaches the utilization of differential privacy with a tree structure (Paragraph 0028).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES S WOZNIAK whose telephone number is (571)272-7632. The examiner can normally be reached 7-3, off alternate Fridays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant may use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders can be reached at (571)272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JAMES S. WOZNIAK
Primary Examiner
Art Unit 2655
/JAMES S WOZNIAK/Primary Examiner, Art Unit 2655