DETAILED ACTION
1. This Office action is in response to the submission filed on 11/18/2025 in Application No. 18/890,788. Claims 1-30 are presented for examination and are currently pending.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
3. A request for continued examination under 37 CFR 1.114, including the fee set
forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this
application is eligible for continued examination under 37 CFR 1.114, and the fee set
forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action
has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on
11/18/2025 has been entered.
Response to Arguments
4. The Examiner is withdrawing the rejections in the previous Office action because Applicant’s amendment necessitated the new grounds of rejection presented in this Office action.
In response to the arguments on pages 11-12 regarding the dependent claims, the Examiner notes that dependent claims 2-15 and 17-30, which depend directly or indirectly from claims 1 and 16, are not allowable because of the new grounds of rejection regarding the independent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Umuroglu et al. ("Finn: A framework for fast, scalable binarized neural network inference." Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2017) in view of Karabutov et al. (US 2023/0106778).
Regarding claim 1, Umuroglu teaches a system for processing data (In this paper, we present Finn, a framework for building fast and flexible FPGA accelerators using a flexible heterogeneous streaming architecture … On a ZC706 embedded FPGA platform drawing less than 25 W total system power, we demonstrate up to 12.3 million image classifications per second with 0.31 µs latency on the MNIST dataset with 95.8% accuracy (abstract); As this is a streaming system, the classification throughput FPS will be approximately… the clock frequency, pg. 71, left col., third para.) using an all-binary neural network (By utilizing a novel set of optimizations that enable efficient mapping of binarized neural networks to hardware, abstract),
comprising: a computing device comprising at least a memory and a processor (All prototypes have been implemented on the Xilinx Zynq7000 All Programmable SoC ZC706 Evaluation Kit running Ubuntu 15.04. The board contains a Zynq Z7045 SoC with dual ARM Cortex-A9 cores and FPGA fabric with 218600 LUTs and 545 BRAMs, pg. 71, right col., second to the last para.):
an all-binary core (We consider three aspects of binarization for neural network layers: binary input activations, binary synapse weights and binary output activations. If all three components are binary, we refer to this as full binarization, pg. 66, right col., first para.) comprising a first plurality of programming instructions stored in the memory and operable on the processor, wherein the first plurality of programming instructions, when operating on the processor, cause the computing device to (The host code runs on the CortexA9 cores of the Zynq, pg. 71, right col., second to the last para.):
binarize input data (We also binarize the input images for the BNN as our experiments show that input binarization works well for MNIST, pg. 67, right col., last para.);
process the binary data through a plurality of binary neural network layers (We assume that the methodology … is used for training all BNNs in this paper, where all BNN layers have the following properties, pg. 68, right col., second para.),
wherein all weights and all activations in all layers are represented as single-bit binary values (Using 1-bit values for all input activations, weights and output activations (full binarization), where an unset bit represents -1 and a set bit represents +1, pg. 68, right col., second para.);
generate a final output (BNNs in which some or all the arithmetic involved in computing the outputs are constrained to single-bit values, pg. 66, right col., first para.);
and maintain single-bit binary representations (Using 1-bit values for all input activations, weights and output activations (full binarization), pg. 66, right col., first para.) and exclusive binary operations (The dot product computation itself consists of an XNOR of the vectors, pg. 69, right col., last para.) throughout all layers of the neural network (We consider three aspects of binarization for neural network layers: binary input activations, binary synapse weights and binary output activations, pg. 66, right col., first para.).
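The Examiner offers, purely as an illustrative sketch and not as material drawn from the cited references, the following rendering of the XNOR-based dot product quoted above, with an unset bit representing -1 and a set bit representing +1 as in Umuroglu; the function name and bit-packing convention are hypothetical:

```python
def binary_dot(a_bits, w_bits, n):
    """Dot product of two length-n {-1, +1} vectors packed into integers,
    where an unset bit represents -1 and a set bit represents +1.
    XNOR marks agreeing positions; popcount counts them; the signed
    result is (agreements) - (disagreements) = 2*popcount - n."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # 1 wherever the bits agree
    popcount = bin(xnor).count("1")
    return 2 * popcount - n

# Example: a = [+1, -1, +1, +1] packs to 0b1011; w = [+1, +1, -1, +1] to 0b1101.
result = binary_dot(0b1011, 0b1101, 4)
```

The elementwise product of two values in {-1, +1} is +1 exactly where the packed bits agree, which is what XNOR computes; the signed sum therefore reduces to 2·popcount − n, consistent with Umuroglu's statement that the dot product consists of an XNOR of the vectors followed by popcount.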
Umuroglu does not explicitly teach: encode input data into discrete binary codewords using a codebook that maps input values to fixed binary representations; process the discrete binary data through a plurality of binary neural network layers; and generate a final output using the processed binary codewords.
Karabutov teaches encode input data into discrete binary codewords using a codebook (a method is provided for encoding data for a neural network into a bitstream, … and a step of including codewords representing the quantized data into the bitstream [0026]) that maps input values to fixed binary representations (Binary symbols may be binary words (codewords) which have the same (fixed length) [0068]; After all input values xm, m∈{0, 1, . . . M−1} are quantized, in step 290, the quantized values are assigned the corresponding symbol sn … Symbols sn may be indexes or symbols of any kind, e.g. fixed or variable-length binary codewords [0072]. The Examiner notes the binary codewords are in a codebook);
process the discrete binary data through a plurality of binary neural network layers, generate a final output using the processed binary codewords (The quantized input values may be transmitted or stored and then further used by the neural network (e.g. by the following layers) [0067]. The Examiner notes the transmitted quantized input values which are used by the neural network would generate a final output);
Since Umuroglu, as the primary reference, teaches binarizing the input images (pg. 67, right col., last para.), and Karabutov, as a secondary reference, discloses a binarizing process [0145] whose input data can be an image [0138], it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Umuroglu to incorporate the teachings of Karabutov for the benefit of an efficient quantization that enables a good tradeoff between the rate necessary for transmission and the neural network accuracy (Karabutov, [0041]).
Regarding claim 16, claim 16 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying.
6. Claims 2, 3, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Umuroglu et al. ("Finn: A framework for fast, scalable binarized neural network inference." Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2017) in view of Karabutov et al. (US 2023/0106778), and further in view of Schindler et al. ("Towards efficient forward propagation on resource-constrained systems." Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2018, Dublin, Ireland, September 10-14, 2018, Proceedings, Part I. Springer, 2019).
Regarding claim 2, Modified Umuroglu teaches the system of claim 1. Modified Umuroglu does not explicitly teach the limitations of claim 2.
Schindler teaches wherein encoding input data into binary codewords comprises using a shared codebook (The principle is to use fewer bits to encode values with a high frequency of appearance. In order to reduce the search space, we use a single codebook that contains the codes for all layers, pg. 434, last para.; the Examiner notes a codebook comprises codewords).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Schindler for the benefit of reducing compute and memory requirements by removing redundancy from neural networks (Schindler, abstract).
Regarding claim 3, Modified Umuroglu teaches the system of claim 2. Modified Umuroglu does not explicitly teach the limitations of claim 3.
Schindler teaches wherein the shared codebook uses Huffman coding (In order to reduce memory requirements, we flatten the transposed weight matrices WTl , store the signs together with their indices, and apply Huffman coding, pg. 434, second to the last para.)
to generate binary codewords for input data values (The principle is to use fewer bits to encode values with a high frequency of appearance. In order to reduce the search space, we use a single codebook that contains the codes for all layers, pg. 434, last para.; the Examiner notes a codebook comprises codewords).
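The Examiner offers, purely as an illustrative sketch and not as material drawn from the cited references, a construction of a shared Huffman codebook of the kind Schindler describes, assigning shorter codewords to values with a higher frequency of appearance; the function name and example frequencies are hypothetical:

```python
import heapq
from itertools import count

def huffman_codebook(freqs):
    """Build a shared codebook mapping each value to a binary codeword,
    giving shorter codewords to values with a higher frequency of
    appearance (the principle cited from Schindler)."""
    tiebreak = count()  # unique tiebreaker so the heap never compares dicts
    heap = [(f, next(tiebreak), {v: ""}) for v, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {v: "0" + code for v, code in c1.items()}
        merged.update({v: "1" + code for v, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

# Hypothetical value frequencies; the most frequent value receives
# the shortest codeword, as in Schindler's single shared codebook.
book = huffman_codebook({"a": 50, "b": 25, "c": 15, "d": 10})
```

Because the codebook is shared across all layers, as in the quoted passage, the search space for the encoding is reduced relative to maintaining a per-layer codebook.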
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Schindler for the benefit of reducing compute and memory requirements by removing redundancy from neural networks (Schindler, abstract).
Regarding claim 17, claim 17 is similar to claim 2 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 18, claim 18 is similar to claim 3 and is rejected in the same manner, with the same reasoning applying.
7. Claims 9-13, 24, 25 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Umuroglu et al. ("Finn: A framework for fast, scalable binarized neural network inference." Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2017) in view of Karabutov et al. (US 2023/0106778), and further in view of Hirtzlin et al. ("Stochastic computing for hardware implementation of binarized neural networks." IEEE Access 7 (2019): 76394-76403; date of publication June 5, 2019; date of current version June 25, 2019).
Regarding claim 9, Modified Umuroglu teaches the system of claim 1. Hirtzlin teaches wherein the plurality of binary neural network layers comprises one or more binary fully connected layers (In a hardware implementation, it can therefore be attractive to binarize only the classifier (fully connected) layers, pg. 76398, right col., first para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Hirtzlin for the benefit of the low memory requirements of BNNs (Binary Neural Networks) (one bit by synapse), as well as the fact that they do not require any multiplication, which makes them extremely adapted for inference hardware (Hirtzlin, pg. 76395, right col., second to the last para.).
Regarding claim 10, Modified Umuroglu teaches the system of claim 9. Hirtzlin teaches wherein the binary fully connected layers (In a hardware implementation, it can therefore be attractive to binarize only the classifier (fully connected) layers, pg. 76398, right col., first para.) comprise:
binary weights; binary activation functions (neuron activation values as well as synaptic weights assume binary values, meaning +1 and −1, pg. 76395, right col., second para.; Second, the binarized weights W are not directly modified during the back propagation, pg. 76401, right col., first para.); and
binary matrix multiplications implemented using XNOR and popcount functions (It includes a 2 kbits memory array to stores weights, as well as XNOR gates and popcount logic, pg. 76399, right col., second para.; The products between weights and neuron activation values in Eq. (1) then simply become logic XNOR operation, pg. 76395, right col., second para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Hirtzlin for the benefit of the low memory requirements of BNNs (Binary Neural Networks) (one bit by synapse), as well as the fact that they do not require any multiplication, which makes them extremely adapted for inference hardware (Hirtzlin, pg. 76395, right col., second to the last para.).
Regarding claim 13, Modified Umuroglu teaches the system of claim 1. Hirtzlin teaches wherein the one or more hardware processors are further configured for training the all-binary neural network (The architecture for hardware implementation of BNN inference is presented in Fig. 6, pg. 76401, right col., last para.) by:
initializing binary weights for the neural network layers (Second, the binarized weights W are not directly modified during the back propagation, pg. 76401, right col., first para.);
forward propagating binary codewords through the network while maintaining binary representations (1. Forward propagation for k = 1 to L do, pg. 76401, right col., Algorithm 3);
computing a loss function based on the network’s binary output (Compute gradient of softmax cross entropy loss, pg. 76401, right col., Algorithm 3);
back-propagating errors through the network using binary approximations of gradients (2. Backward propagation for k = L to 1 do, pg. 76401, right col., Algorithm 3); and
updating binary weights using a binary optimization algorithm (Training is done in the same conditions as the Fashion-MNIST case, using dropout and Adam optimizer, and the pytorch deep learning framework, pg. 76398, left col., second para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Hirtzlin for the benefit of the low memory requirements of BNNs (Binary Neural Networks) (one bit by synapse), as well as the fact that they do not require any multiplication, which makes them extremely adapted for inference hardware (Hirtzlin, pg. 76395, right col., second to the last para.).
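The Examiner offers, purely as an illustrative sketch and not as material drawn from the cited references, a rendering of the mapped training steps using a latent-weight, straight-through-estimator scheme of the kind commonly used to train BNNs; the dimensions, learning rate, and function names are hypothetical:

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1}; the straight-through estimator
    passes gradients through this step to the latent real weights."""
    return np.where(x >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
X = binarize(rng.standard_normal((64, 16)))          # binary inputs
y = binarize(X @ binarize(rng.standard_normal(16)))  # binary targets

w_real = 0.1 * rng.standard_normal(16)  # initialize latent weights -> binary weights
scale, lr = 1.0 / np.sqrt(16), 0.05
for _ in range(200):
    w_bin = binarize(w_real)                         # forward pass uses binary weights
    out = np.tanh(scale * (X @ w_bin))
    err = out - y                                    # squared-error loss gradient
    grad_w = scale * X.T @ (err * (1 - out**2)) / len(X)
    w_real -= lr * grad_w                            # STE: update applied to latent weights
    w_real = np.clip(w_real, -1.0, 1.0)

accuracy = np.mean(binarize(X @ binarize(w_real)) == y)
```

As in the Hirtzlin passage cited above, the binarized weights are not directly modified during back propagation; gradients update the latent real-valued weights, which are re-binarized on each forward pass.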
Regarding claim 24, claim 24 is similar to claim 9 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 25, claim 25 is similar to claim 10 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 28, claim 28 is similar to claim 13 and is rejected in the same manner, with the same reasoning applying.
8. Claims 4 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Umuroglu et al. ("Finn: A framework for fast, scalable binarized neural network inference." Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2017) in view of Karabutov et al. (US 2023/0106778), and further in view of Andri et al. ("ChewBaccaNN: A flexible 223 TOPS/W BNN accelerator." 2021 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2021).
Regarding claim 4, Modified Umuroglu teaches the system of claim 1. Modified Umuroglu does not explicitly teach wherein the plurality of binary neural network layers comprises one or more binary convolutional layers.
Andri teaches wherein the plurality of binary neural network layers comprises one or more binary convolutional layers (This paper presents ChewBaccaNN, a 0.7 mm2 sized binary convolutional neural network (CNN) accelerator designed in GlobalFoundries 22 nm technology, abstract).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Andri for the benefit of a binary convolutional neural network (CNN) accelerator designed in GlobalFoundries 22 nm technology that, during inference of binary CNNs with up to 7×7 kernels, achieves a peak core energy efficiency of 223 TOPS/W (Andri, abstract).
Regarding claim 19, claim 19 is similar to claim 4 and is rejected in the same manner, with the same reasoning applying.
9. Claims 5, 6, 20 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Umuroglu et al. ("Finn: A framework for fast, scalable binarized neural network inference." Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2017) in view of Karabutov et al. (US 2023/0106778), in view of Andri et al. ("ChewBaccaNN: A flexible 223 TOPS/W BNN accelerator." 2021 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2021), and further in view of Hirtzlin et al. ("Stochastic computing for hardware implementation of binarized neural networks." IEEE Access 7 (2019): 76394-76403; date of publication June 5, 2019; date of current version June 25, 2019).
Regarding claim 5, Modified Umuroglu teaches the system of claim 4. Hirtzlin teaches wherein the binary convolutional layers comprise: binary weights; binary activation functions (neuron activation values as well as synaptic weights assume binary values, meaning +1 and −1, pg. 76395, right col., second para.; Second, the binarized weights W are not directly modified during the back propagation, pg. 76401, right col., first para.); and
operations implemented using XNOR and popcount functions (It includes a 2 kbits memory array to stores weights, as well as XNOR gates and popcount logic, pg. 76399, right col., second para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Hirtzlin for the benefit of the low memory requirements of BNNs (Binary Neural Networks) (one bit by synapse), as well as the fact that they do not require any multiplication, which makes them extremely adapted for inference hardware (Hirtzlin, pg. 76395, right col., second to the last para.).
Regarding claim 6, Modified Umuroglu teaches the system of claim 4. Andri teaches further comprising a binary max pooling layer following at least one of the binary convolutional layers (The same BPU array datapath is reused to perform binary maxpooling operation, pg. 2, right col., first para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Andri for the benefit of a binary convolutional neural network (CNN) accelerator designed in GlobalFoundries 22 nm technology that, during inference of binary CNNs with up to 7×7 kernels, achieves a peak core energy efficiency of 223 TOPS/W (Andri, abstract).
Regarding claim 20, claim 20 is similar to claim 5 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 21, claim 21 is similar to claim 6 and is rejected in the same manner, with the same reasoning applying.
10. Claims 7, 8, 11, 22, 23 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Umuroglu et al. ("Finn: A framework for fast, scalable binarized neural network inference." Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2017) in view of Karabutov et al. (US 2023/0106778), and further in view of Mirsalari et al. ("MuBiNN: Multi-level binarized recurrent neural network for EEG signal classification." 2020 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2020).
Regarding claim 7, Modified Umuroglu teaches the system of claim 1. Modified Umuroglu does not explicitly teach wherein the plurality of binary neural network layers comprises one or more binary long short-term memory (LSTM) layers.
Mirsalari teaches wherein the plurality of binary neural network layers comprises one or more binary long short-term memory (LSTM) layers (this work is the first effort to binarize LSTM with remarkable yield in accuracy (pg. 2, right col., last para.); For an M-level binarization, this process is repeated M times. it should be mentioned that sign values {-1,+1} must be encoded to 0 and 1, respectively. In M-level binarization, each layer needs M scaling factors, pg. 3, right col., first para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Mirsalari for the benefit of reducing the computation-intensive MAC operations (Mirsalari, pg. 2, right col., last para.).
Regarding claim 8, Modified Umuroglu teaches the system of claim 7. Mirsalari teaches wherein the binary LSTM layers comprise: binary input, forget, and output gates; a binary cell state (To the best of our knowledge, our proposed method called MuBiNN is the first effort to fully binarize LSTM for EEG classification with remarkable yield in complexity reduction. Our focus is on multi-level binarization of LSTM cells in each time step, for all weights, inputs, internal parameters, and outputs of the activation functions, pg. 1, right col., last para.); and
binary matrix multiplications implemented using XNOR and popcount functions (Based on the binarization of all parameters, we propose an XNOR based multiplier for performing matrix and point-wise multiplications (pg. 1, right col., last para.); By encoding the sign values {−1, +1} to binary vectors {0, 1}, the dot product operations between input and weight can be computed by simple xnor- popcount operations, pg. 3, left col., third para.).
The same motivation to combine as applied to claim 7 applies here.
Regarding claim 11, Modified Umuroglu teaches the system of claim 1. Modified Umuroglu does not explicitly teach wherein the input data comprises multi-source time series data.
Mirsalari teaches wherein the input data comprises multi-source time series data (The proposed method employs RNNs because the EEG waveform is naturally fit to be processed by this type of neural network. RNNs capture the temporal dependencies in sequential data, pg. 1, left col., last para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Mirsalari for the benefit of reducing the computation-intensive MAC operations (Mirsalari, pg. 2, right col., last para.).
Regarding claim 22, claim 22 is similar to claim 7 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 23, claim 23 is similar to claim 8 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 26, claim 26 is similar to claim 11 and is rejected in the same manner, with the same reasoning applying.
11. Claims 12 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Umuroglu et al. ("Finn: A framework for fast, scalable binarized neural network inference." Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2017) in view of Karabutov et al. (US 2023/0106778), and further in view of Mizoguchi et al. ("Unsupervised Retrieval Based Multivariate Time Series Anomaly Detection and Diagnosis with Deep Binary Coding Models." PHM Society Asia-Pacific Conference, Vol. 4, No. 1, 2023).
Regarding claim 12, Modified Umuroglu teaches the system of claim 1. Modified Umuroglu does not explicitly teach wherein the final output comprises a binary anomaly indicator.
Mizoguchi teaches wherein the final output comprises a binary anomaly indicator (we present a layer-by-layer description of our proposed Deep Hashing Network for Retrieval based Anomaly Detection (DHN-RAD) model, pg. 2, left col., section 2; In feature-binary layer, we aim to extract two kinds of binary codes with different length, v1-bits full-length binary codes and v2-bits sub-linear binary codes (v1 > v2) from the output of feature extraction layer, pg. 2, right col., section 2.3; Output: Anomaly score a(Xq), sensor ranking r, pg. 4, Algorithm 2: Anomaly detection and diagnosis).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Mizoguchi for the benefit of presenting Deep Hashing Network for Retrieval based Anomaly Detection (DHN-RAD) to perform unsupervised multivariate time series anomaly detection serving both efficiency and explainability (Mizoguchi, pg. 1, right col., last para.).
Regarding claim 27, claim 27 is similar to claim 12 and is rejected in the same manner, with the same reasoning applying.
12. Claims 14, 15, 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Umuroglu et al. ("Finn: A framework for fast, scalable binarized neural network inference." Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2017) in view of Karabutov et al. (US 2023/0106778), and further in view of Hosseini et al. ("Binary precision neural network manycore accelerator." ACM Journal on Emerging Technologies in Computing Systems (JETC) 17.2 (2021): 1-27).
Regarding claim 14, Modified Umuroglu teaches the system of claim 1. Modified Umuroglu does not explicitly teach wherein the one or more hardware processors are further configured for quantizing floating-point values to binary or n-bit integer representations.
Hosseini teaches wherein the one or more hardware processors are further configured for quantizing floating-point values to binary or n-bit integer representations (We implement the double-precision floating-point models on the GPU and CPU components, and for the BiNMAC, we implement their binarized counterparts, pg. A:18, Section 6, second to the last para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Hosseini for the benefit of a low-power programmable manycore accelerator with special-purpose instructions … and their pertinent special purpose registers, as well as a specific logic that allows transposing memory blocks of 16×16-bit for efficient execution of binarized networks (Hosseini, pg. A:3, first bullet point).
Regarding claim 15, Modified Umuroglu teaches the system of claim 1. Modified Umuroglu does not explicitly teach wherein the all-binary neural network is implemented on an edge computing device with limited computational and memory resources.
Hosseini teaches wherein the all-binary neural network is implemented on an edge computing device with limited computational and memory resources (Two BNN configurations for image classification have been selected and reconfigured to evaluate the maximum performance of BiNMAC and to compare with an edge GPU implementation, and two other BNN configurations for multi-modal time-series data have been selected to evaluate the energy efficiency and low power characteristics of the BiNMAC (pg. A:3, last para.); Due to the limitation in BiNMAC’s on-chip memory, we evaluate the BiNMAC with one testing data instance at a time, pg. A:20, first para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Modified Umuroglu to incorporate the teachings of Hosseini for the benefit of a low-power programmable manycore accelerator with special-purpose instructions … and their pertinent special purpose registers, as well as a specific logic that allows transposing memory blocks of 16×16-bit for efficient execution of binarized networks (Hosseini, pg. A:3, first bullet point).
Regarding claim 29, claim 29 is similar to claim 14 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 30, claim 30 is similar to claim 15 and is rejected in the same manner, with the same reasoning applying.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO whose telephone number is (571)272-8670. The examiner can normally be reached Monday-Friday 8:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T. Bechtold can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.G./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148