Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 05/07/2025 has been entered.
Detailed Action
The following action is in response to the communication(s) received on 03/27/2026.
As of the claims filed 03/27/2026:
Claims 1, 8, and 15 have been amended.
Claims 1-20 are pending.
Claims 1, 8, and 15 are independent claims.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/27/2026 was filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s arguments filed 03/27/2026 have been fully considered, but are not fully persuasive.
With respect to the art rejections under 35 USC § 103:
Applicant asserts that Gorman does not teach “wherein the at least one meaning is based on metadata of the synthetic data.” This is unpersuasive, as Chen, not Gorman, teaches the detail that the identified meaning is based on the contextual semantics of the column (Chen [p.1 right ¶2]). Since the contextual semantics are not an explicit part of the dataset, and synthetic data emulates the contextual semantics of the original data, the contextual semantics correspond to the metadata of the synthetic data.
Applicant further asserts that Chen, according to its abstract, teaches away from using metadata. Examiner respectfully disagrees. The broadest reasonable interpretation of "metadata" is not illuminated by the Specification; thus, it is merely data that describes the data, and Chen teaches this limitation (Chen [p.1 right ¶2]), where the contextual semantics of the column are not an explicit part of the dataset and thus correspond to the metadata of the synthetic data. Although Chen does not use one particular type of metadata, as stated in its abstract, this does not mean that the contextual semantics, which are used to aid the abbreviation expansion, are not metadata.
Applicant further asserts that Chen merely teaches the entity mentions and not amended claim 1, which includes the metadata. This is unpersuasive, as Chen teaches modeling the contextual semantics (i.e., class) of a column, such as "company" for entities "Google", "Amazon", and "Apple Inc." (Chen [p.1 right ¶2]). The contextual semantics, which correspond to the metadata and the pre-existing column descriptions, are used to predict the entity mention. Thus, Chen does teach the required element of column descriptions.
Applicant further asserts that Gorman does not teach the amended limitation, since Gorman teaches the error rates in relation to human-generated results. This is unpersuasive, as the synthetic data is further taught by Chen [p.1 right ¶2]; Gorman teaches the way to obtain the error score between the generated expansion (detailed by Chen) and the ground truth.
Claims 2-7 remain rejected by virtue of their dependency on their respective parent claims.
Claims 8 and 15 remain rejected by virtue of reciting limitations substantially similar to those of claim 1.
Claims 9-14 remain rejected by virtue of their dependency on their respective parent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al., "Learning Semantic Annotations for Tabular Data" (hereinafter Chen), further in view of Xu et al., "Modeling Tabular Data using Conditional GAN" (hereinafter Xu), further in view of Gorman et al., "Structured Abbreviation Expansion in Context" (hereinafter Gorman), using Ratinov et al., "Abbreviation Expansion in Schema Matching and Web Integration" (hereinafter Ratinov) as a reference to provide a motivation rationale.
Regarding Claim 1, Chen teaches:
A method for refining column mappings, the method comprising: configuring a processing unit, the processing unit executing a plurality of computer instructions stored in a memory for: (Chen [p.1 right ¶2] In this study, we focus on semantic type (i.e., class) prediction for columns that are composed of phrases (i.e., entity mentions). For example, a column composed of “Google”, “Amazon” and “Apple Inc.” can be annotated by the class Company. To this end, we first develop a Hybrid Neural Network (HNN) to model the contextual semantics of a column. It embeds the phrase within a cell with a bidirectional Recurrent Neural Network and an attention layer (Att-BiRNN), and learns (i) column features (i.e., intra-column cell correlation) and (ii) row features (i.e., intra-row cell correlation) with a Convolutional Neural Network (CNN)…
In summary, this study contributes a new column type prediction method combining HNN for feature learning and KB lookup and reasoning for feature extraction.) (Note: using a neural network corresponds to configuring a processing unit)
receiving… the… data and a plurality of input column names, each of the plurality of column names being a group of one or more bytes; (Chen [p.2 left ¶1] The column includes ordered cells, each of which is a sequence of words (text phrase), known as an entity mention.
[media_image1.png, 284×496, Greyscale]
) (Note: the column features correspond to the plurality of input column names)
deploying the… data to train a deep learning (DL) model…; for thereby providing refined meanings for a given column name and obtain corresponding mapping prediction output. (Chen [p.1 right ¶2] In this study, we focus on semantic type (i.e., class) prediction for columns that are composed of phrases (i.e., entity mentions). For example, a column composed of “Google”, “Amazon” and “Apple Inc.” can be annotated by the class Company. To this end, we first develop a Hybrid Neural Network (HNN) to model the contextual semantics of a column. It embeds the phrase within a cell with a bidirectional Recurrent Neural Network and an attention layer (Att-BiRNN), and learns (i) column features (i.e., intra-column cell correlation) and (ii) row features (i.e., intra-row cell correlation) with a Convolutional Neural Network (CNN).)
Chen does not teach, but Xu further teaches:
configuring a synthetic data generator for generating synthetic data based on pre-existing mapping data; (Xu [Abst] We design CTGAN, which uses a conditional generator to address these challenges. To aid in a fair and thorough comparison, we design a benchmark with 7 simulated and 8 real datasets and several Bayesian network baselines. CTGAN outperforms Bayesian methods on most of the real datasets whereas other deep learning methods could not.) (Note: this shows a dataset could be synthesized by a synthetic data generator)
Xu and Chen are analogous to the present invention because both are from the same field of endeavor of deep learning related to datasets. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the data synthesis method from Xu into Chen's column classification method. The motivation would be to address "the need to simultaneously model discrete and continuous columns, the multi-modal non-Gaussian values within each continuous column, and the severe imbalance of categorical columns" (Xu p.1 last ¶).
While Chen/Xu teaches receiving data in some manner, they do not explicitly teach receiving data in an encoder…; generating, by the encoder, an encoded data for each of the one or more bytes; …generated encoded data … having a word level auto regressive decoder for identifying at least one meaning for each byte of each of the received plurality of input column names; and configuring a mapping output generator. However, Gorman further teaches:
in an encoder…; generating, by the encoder, an encoded data for each of the one or more bytes; (Gorman [p.7 left ¶1] Abbreviation model The abbreviation model is a pair n-gram language model over input/output character pairs, encoded as a weighted transducer)
Gorman and Chen/Xu are analogous to the present invention because both are from the same field of endeavor of matching semantics of data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the abbreviation expansion method from Gorman into Chen/Xu's column classification method. The motivation would be to address the problem that "Most of the existing prototypes use schema level lexical information for schema matching. However, most of them perform rather poorly on real-world problems due to the abundance of abbreviations in real-world schemas… we propose a method for abbreviation expansion in schemas that facilitates lexical schema matching" (Ratinov [Abst]).
Gorman, via Chen/Xu/Gorman, further teaches:
generated encoded data … having a word level auto regressive decoder for identifying at least one meaning for each byte of each of the received plurality of input column names… (Gorman [p.7 left ¶2] 5.2 Neural implementation Expansion model The expansion model consists of an embedding layer of dimensionality 512 and two LSTM (Hochreiter and Schmidhuber, 1997) layers, each with 512 hidden units. Each sentence is padded with reserved start and end symbols. The model, implemented in TensorFlow (Abadi et al., 2016), is trained in batches of 256 until convergence using the Adam optimizer (Kingma and Ba, 2015) with α = .001.)
…configuring a mapping output generator…; (Gorman [p.4 right ¶1]
[media_image2.png, 363×315, Greyscale]
)
Chen, via Chen/Xu/Gorman, further teaches:
…wherein the at least one meaning is based on metadata of the synthetic data…(Chen [p.1 right ¶2] In this study, we focus on semantic type (i.e., class) prediction for columns that are composed of phrases (i.e., entity mentions). For example, a column composed of “Google”, “Amazon” and “Apple Inc.” can be annotated by the class Company. To this end, we first develop a Hybrid Neural Network (HNN) to model the contextual semantics of a column. It embeds the phrase within a cell with a bidirectional Recurrent Neural Network and an attention layer (Att-BiRNN), and learns (i) column features (i.e., intra-column cell correlation) and (ii) row features (i.e., intra-row cell correlation) with a Convolutional Neural Network (CNN).) (Note: the contextual semantics of the column is not an explicit part of the dataset and thus correspond to the metadata of the synthetic data)
…based on a plurality of pre-existing column descriptions (Chen [p.2 left ¶1] The column includes ordered cells, each of which is a sequence of words (text phrase), known as an entity mention.
[media_image1.png, 284×496, Greyscale]
) (Note: the column features correspond to the plurality of input column names)
While Chen/Xu teaches receiving the plurality of column descriptions and the metadata (Chen [p.2 left ¶1], see above), they do not explicitly teach using the mapping output generator to determine whether or not the identified at least one meaning matches with at least one description of the plurality of column descriptions, and thereby obtain an error score.
However, Gorman, via Chen/Xu/Gorman, further teaches:
using the mapping output generator to determine whether or not the identified at least one meaning matches with at least one description of the plurality of column descriptions, and thereby obtain an error score, wherein the error score indicates when the at least one meaning is correctly matched; (Gorman [p.7 bottom left] The primary metric used for system comparison is word error rate (WER), the percentage of incorrect words in the expansion. We also compute more specific statistics: overexpansion rate (OER), the percentage of words in the hypothesis expansion which were expanded but did not require expansion, underexpansion rate (UER), the percentage of words which required expansion but were not expanded, and incorrect expansion rate (IER), the percentage of words which both required and received expansion but which were expanded incorrectly. As Roark and Sproat (2014) argue, an ideal abbreviation expansion system should be “Hippocratic” in the sense that it does no harm to human interpretability, so it is particularly important to minimize OER and IER errors) (Note: the IER corresponds to the error score; since IER is a percentage of incorrectly expanded words, it also indicates the rate of correctly expanded words, thus corresponding to at least one meaning correctly matching)
and using the error score to fine tune the DL model… (Gorman [p.6 right ¶3] Language models are trained using the concatenation of the training set with 2.7m additional sentences from Wikipedia as described in subsection 2.1. The development set was used to tune the Markov order of the finite-state components, and to ablate the subsequence model heuristics.
[p.7 bottom left] As Roark and Sproat (2014) argue, an ideal abbreviation expansion system should be “Hippocratic” in the sense that it does no harm to human interpretability, so it is particularly important to minimize OER and IER errors) (Note: the IER corresponds to the error score; tuning the components with the development set while minimizing IER corresponds to fine tuning the DL model)
Regarding Claim 2, Chen/Xu/Gorman respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Chen/Xu/Gorman further teaches, via the encoder of Gorman (already incorporated into the combination):
the encoder is a byte level encoder. (Gorman [p.7 left ¶1] Abbreviation model The abbreviation model is a pair n-gram language model over input/output character pairs, encoded as a weighted transducer) (Note: the model using character pairs corresponds to being at byte (character) level)
Regarding Claim 3, Chen/Xu/Gorman respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Chen/Xu/Gorman further teaches, via the encoder of Gorman (already incorporated into the combination):
the encoder is a byte pair encoder. (Gorman [p.7 left ¶1] Abbreviation model The abbreviation model is a pair n-gram language model over input/output character pairs, encoded as a weighted transducer) (Note: the model using character pairs corresponds to being at byte (character) pair)
Regarding Claim 4, Chen/Xu/Gorman respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Chen, via Chen/Xu/Gorman, further teaches:
the DL model is an attention based neural model. (Chen [p.1 right ¶2] It embeds the phrase within a cell with a bidirectional Recurrent Neural Network and an attention layer (Att-BiRNN), and learns (i) column features (i.e., intra-column cell correlation) and (ii) row features (i.e., intra-row cell correlation) with a Convolutional Neural Network (CNN).)
Regarding Claim 5, Chen/Xu/Gorman respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Chen, via Chen/Xu/Gorman, further teaches:
The method of claim 1, wherein the pre-existing mapping data includes one or more sample data received from one or more data sources. (Chen [p.4 left ¶2] In the evaluation conducted in this paper we rely on DBpedia and three web table sets: T2Dv22 from the general Web, Limaye… and Efthymiou… from the Wikipedia encyclopedia.) (Note: each web table set corresponds to sample data; the general Web and Wikipedia encyclopedia correspond to the data sources)
Regarding Claim 6, Chen/Xu/Gorman respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Chen, via Chen/Xu/Gorman, further teaches:
The method of claim 1, wherein the DL model identifies the context of all columns of a given source table while mapping a current column name. (Chen [p.2 left ¶3]
[media_image3.png, 283×524, Greyscale]
) (Note: extracting every microtable by iterating through the sliding window corresponds to identifying the context of all columns)
Regarding Claim 7, Chen/Xu/Gorman respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Chen, via Chen/Xu/Gorman, further teaches:
The method of claim 1, further comprising performing a quality check on the obtained mapping prediction output. (Chen
[media_image4.png, 239×518, Greyscale]
) (Note: calculating accuracy corresponds to performing a quality check)
Independent Claim 8 recites A system for refining column mappings, the system comprising: a processing unit executing a plurality of computer instructions stored in a memory to (Chen [p.1 right ¶2] we first develop a Hybrid Neural Network (HNN) to model the contextual semantics of a column. It embeds the phrase within a cell with a bidirectional Recurrent Neural Network and an attention layer (Att-BiRNN), and learns… with a Convolutional Neural Network (CNN)…) (Note: using a neural network corresponds to configuring a processing unit) to perform precisely the methods of Claim 1. Thus, Claim 8 is rejected for reasons set forth in Claim 1.
Claims 9-14, dependent on Claim 8, also recite the system configured to perform precisely the methods of Claims 2-7, respectively. Thus, Claims 9-14 are rejected for reasons set forth in Claims 2-7, respectively.
Regarding Claim 15, Chen teaches:
A method for refining column mappings, the method comprising: configuring a processing unit, the processing unit executing a plurality of computer instructions stored in a memory for: (Chen [p.1 right ¶2] In this study, we focus on semantic type (i.e., class) prediction for columns that are composed of phrases (i.e., entity mentions). For example, a column composed of “Google”, “Amazon” and “Apple Inc.” can be annotated by the class Company. To this end, we first develop a Hybrid Neural Network (HNN) to model the contextual semantics of a column. It embeds the phrase within a cell with a bidirectional Recurrent Neural Network and an attention layer (Att-BiRNN), and learns (i) column features (i.e., intra-column cell correlation) and (ii) row features (i.e., intra-row cell correlation) with a Convolutional Neural Network (CNN)…
In summary, this study contributes a new column type prediction method combining HNN for feature learning and KB lookup and reasoning for feature extraction.) (Note: using a neural network corresponds to configuring a processing unit)
receiving… the… data and a plurality of input column names, each of the plurality of column names being a group of one or more bytes; (Chen [p.2 left ¶1] The column includes ordered cells, each of which is a sequence of words (text phrase), known as an entity mention.
[media_image1.png, 284×496, Greyscale]
) (Note: the column features correspond to the plurality of input column names)
deploying the… data to train a deep learning (DL) model…; for thereby providing refined meanings for a given column name and obtain corresponding mapping prediction output. (Chen [p.1 right ¶2] In this study, we focus on semantic type (i.e., class) prediction for columns that are composed of phrases (i.e., entity mentions). For example, a column composed of “Google”, “Amazon” and “Apple Inc.” can be annotated by the class Company. To this end, we first develop a Hybrid Neural Network (HNN) to model the contextual semantics of a column. It embeds the phrase within a cell with a bidirectional Recurrent Neural Network and an attention layer (Att-BiRNN), and learns (i) column features (i.e., intra-column cell correlation) and (ii) row features (i.e., intra-row cell correlation) with a Convolutional Neural Network (CNN).)
Chen does not teach, but Xu further teaches:
configuring a synthetic data generator for generating synthetic data based on pre-existing mapping data; (Xu [Abst] We design CTGAN, which uses a conditional generator to address these challenges. To aid in a fair and thorough comparison, we design a benchmark with 7 simulated and 8 real datasets and several Bayesian network baselines. CTGAN outperforms Bayesian methods on most of the real datasets whereas other deep learning methods could not.) (Note: this shows a dataset could be synthesized by a synthetic data generator)
Xu and Chen are analogous to the present invention because both are from the same field of endeavor of deep learning related to datasets. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the data synthesis method from Xu into Chen's column classification method. The motivation would be to address "the need to simultaneously model discrete and continuous columns, the multi-modal non-Gaussian values within each continuous column, and the severe imbalance of categorical columns" (Xu p.1 last ¶).
While Chen/Xu teaches receiving data in some manner, they do not explicitly teach receiving data in an encoder. However, Gorman further teaches:
in an encoder…; generating, by the encoder, an encoded data for each of the one or more bytes; (Gorman [p.7 left ¶1] Abbreviation model The abbreviation model is a pair n-gram language model over input/output character pairs, encoded as a weighted transducer)
Gorman and Chen/Xu are analogous to the present invention because both are from the same field of endeavor of matching semantics of data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the abbreviation expansion method from Gorman into Chen/Xu's column classification method. The motivation would be to address the problem that "Most of the existing prototypes use schema level lexical information for schema matching. However, most of them perform rather poorly on real-world problems due to the abundance of abbreviations in real-world schemas… we propose a method for abbreviation expansion in schemas that facilitates lexical schema matching" (Ratinov [Abst]).
Gorman further teaches:
generated encoded data … having a word level auto regressive decoder for identifying at least one meaning for each byte of each of the received plurality of input column names, (Gorman [p.7 left ¶2] 5.2 Neural implementation Expansion model The expansion model consists of an embedding layer of dimensionality 512 and two LSTM (Hochreiter and Schmidhuber, 1997) layers, each with 512 hidden units. Each sentence is padded with reserved start and end symbols. The model, implemented in TensorFlow (Abadi et al., 2016), is trained in batches of 256 until convergence using the Adam optimizer (Kingma and Ba, 2015) with α = .001.)
wherein the word level auto regressive decoder further decodes an output of a first word to predict the meaning of a second word; (Gorman [p.2 right ¶1] We assume the following task definition. Let A = [a0, a1, ..., an] be a sequence of possibly-abbreviated words and let E be a sequence of expanded words [e0, e1, …, en], both of length n. If ei is an element of E, then the corresponding element of A, ai, must either be identical to ei (in the case that it is not abbreviated), or a proper, non-null subsequence of ei (in the case that it is an abbreviation of ei). At inference time, the system is presented with an abbreviated A sequence of length n and is asked to propose a single hypothesis expansion of length n, denoted by Eˆ . [p.4 right ¶1]
[media_image2.png, 363×315, Greyscale]
) (Note: the hypothesized expanded second word in E^ corresponds to the predicted meaning of the second word)
configuring a mapping output generator…; (Gorman [p.4 right ¶1]
[media_image2.png, 363×315, Greyscale]
)
Chen, via Chen/Xu/Gorman, further teaches:
…based on a plurality of pre-existing column descriptions (Chen [p.2 left ¶1] The column includes ordered cells, each of which is a sequence of words (text phrase), known as an entity mention.
[media_image1.png, 284×496, Greyscale]
) (Note: the column features correspond to the plurality of input column names)
While Chen/Xu teaches receiving the plurality of column descriptions (Chen [p.2 left ¶1], see above), they do not explicitly teach using the mapping output generator to determine whether or not the identified at least one meaning matches with at least one description of the plurality of column descriptions, and thereby obtain an error score.
However, Gorman, via Chen/Xu/Gorman, further teaches:
using the mapping output generator to determine whether or not the identified at least one meaning matches with at least one description of the plurality of column descriptions, and thereby obtain an error score; (Gorman [p.7 bottom left] The primary metric used for system comparison is word error rate (WER), the percentage of incorrect words in the expansion. We also compute more specific statistics: overexpansion rate (OER), the percentage of words in the hypothesis expansion which were expanded but did not require expansion, underexpansion rate (UER), the percentage of words which required expansion but were not expanded, and incorrect expansion rate (IER), the percentage of words which both required and received expansion but which were expanded incorrectly. As Roark and Sproat (2014) argue, an ideal abbreviation expansion system should be “Hippocratic” in the sense that it does no harm to human interpretability, so it is particularly important to minimize OER and IER errors) (Note: the IER corresponds to the error score; since IER is a percentage of incorrectly expanded words, it also indicates the rate of correctly expanded words, thus corresponding to at least one meaning correctly matching)
and using the error score to fine tune the DL model… (Gorman [p.6 right ¶3] Language models are trained using the concatenation of the training set with 2.7m additional sentences from Wikipedia as described in subsection 2.1. The development set was used to tune the Markov order of the finite-state components, and to ablate the subsequence model heuristics.
[p.7 bottom left] As Roark and Sproat (2014) argue, an ideal abbreviation expansion system should be “Hippocratic” in the sense that it does no harm to human interpretability, so it is particularly important to minimize OER and IER errors) (Note: the IER corresponds to the error score; tuning the components with the development set while minimizing IER corresponds to fine tuning the DL model)
Regarding Claim 16, Chen/Xu/Gorman respectively teaches and incorporates the claimed limitations and rejections of Claim 15. Chen, via Chen/Xu/Gorman, further teaches:
the one or more data sources comprise one or more user devices to provide input data for data mapping and analytics (Chen [p.4 left ¶2] In the evaluation conducted in this paper we rely on DBpedia and three web table sets: T2Dv22 from the general Web, Limaye… and Efthymiou… from the Wikipedia encyclopedia.) (Note: each web table set corresponds to sample data; the general Web and Wikipedia encyclopedia correspond to the data sources)
Claims 17-19 and 20, dependent on Claim 15, also recite precisely the methods of Claims 2-4 and 6, respectively. Thus, Claims 17-19 and 20 are rejected for reasons set forth in Claims 2-4 and 6, respectively.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEP HAN whose telephone number is (703)756-1346. The examiner can normally be reached Mon-Fri 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached on (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.H./Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122