Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This communication is in response to applicant’s 11/06/2025 amendment or response in the application of VITTHALADEVUNI et al. for “CONFIGURABLE METRICS FOR CHANNEL STATE COMPRESSION AND FEEDBACK” filed 02/12/202#. The amendment or response to the claims has been entered. No claims have been canceled. No claims have been added. Claims 1-3, 5-6, 10-12, 14-17, 25, 33, 37-56 are now pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 4, 6, 10, 12, 14-16, 25, 33, 37, 39, 41-42, 45, 47-51, 53-54, 56 is/are rejected under 35 U.S.C. 103 as being unpatentable over HAO (US 2020/0358492 A1) in view of WANG et al. (US 2021/0182658 A1), hereinafter WANG.
Regarding claim 1, HAO discloses a method for wireless communication at a user equipment (UE), comprising:
receiving, from a base station, an indication of a level of accuracy for reporting channel state feedback to the base station (the network device 110 may transmit the indication of a required CSI accuracy to each terminal device 120, see ¶ 0044);
receiving downlink data or reference signals from the base station (the network device 110 may require the terminal device 120 to feed the CSI with the higher accuracy, for example, the CSI with an accuracy higher than a threshold accuracy (referred to as a “third threshold accuracy”), see ¶ 0045); and
reporting the channel state feedback to the base station corresponding to the level of accuracy based at least in part on the downlink data or the reference signals (the terminal device 120 may feed the indicated accuracy level of the CSI back to the network device 110, see ¶ 0044).
HAO fails to disclose the UE receiving, from the base station, a loss function for training a neural network pair comprising a first neural network at an encoder for encoding and a second neural network at a decoder for decoding.
In the same field of endeavor, WANG discloses the UE comprising a CRM 212 that includes a neural network table 216 and a UE neural network manager 218 (see figure 2), where the UE neural network manager 218 of the UE 110 includes a downlink processing module 606, and the downlink processing module 606 includes deep neural network(s) 608 (DNNs 608) for processing received downlink communications. In various implementations, the UE neural network manager 218 forms the DNNs 608 using NN formation configurations. In FIG. 6, the DNNs 608 correspond to the DNNs 514 of FIG. 5, where the deep neural network(s) 608 of the UE 110 perform some or all receiver processing functionality for received downlink communications. Accordingly, the DNNs 604 and the DNNs 608 perform complementary processing to one another (e.g., encoding/decoding, modulating/demodulating) (see figure 6 and ¶ 0093). The DNNs are trained based on the configuration received from the base station 120 (see ¶ 0044).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to implement WANG’s teaching in the network taught by HAO because the use of neural networks for encoding and decoding at the user equipment (UE), such as smartphones and IoT devices, optimizes physical layer communications and enables efficient distributed machine learning.
Regarding claim 2, WANG discloses training the neural network pair using the loss function (CRM 262 includes training module 270 and neural network table 272; in implementations, the base station 120 manages and deploys NN formation configurations to the UE 110, see ¶ 0044).
Regarding claim 4, WANG discloses encoding the channel state feedback using the first neural network at the encoder based at least in part on the training; and reporting the encoded channel state feedback (the DNNs 604 and the DNNs 608 perform complementary processing to one another (e.g., encoding/decoding, modulating/demodulating), see ¶ 0093; the DNNs in the UE are trained based on training data, see ¶ 0038; and the UE transmits feedback to the BS, see ¶ 0043).
Regarding claim 6, WANG discloses receiving, from the base station, an indication to train a plurality of neural network pairs based at least in part on a plurality of levels of accuracy, the plurality of neural network pairs comprising the neural network pair; and training each of the plurality of neural network pairs based at least in part on a respective level of accuracy of the plurality of levels of accuracy (CRM 262 includes training module 270 and neural network table 272; in implementations, the base station 120 manages and deploys NN formation configurations to the UE 110, see ¶ 0044; the base station neural network manager 268 transmits training data to the UE for the supervised learning, see ¶ 0110).
Regarding claim 10, HAO discloses the level of accuracy is based at least in part on one or more of a subband, spatial layer, or channel tap to which the channel state feedback corresponds, the method further comprising: receiving data from the base station on the subband or spatial layer or in accordance with the channel tap based at least in part on reporting the channel state feedback corresponding to the level of accuracy (the network device 110 divides the whole operating bandwidth into one or more frequency band ranges. The specific value of the first threshold accuracy may be determined based on the actual requirement of the system performance (such as Signal to Noise Ratio). Any suitable number of frequency band ranges with any bandwidth may be employed, including, for example, the whole frequency band on which the network device 110 operates, a partial frequency band, or a single frequency band (or sub-band), see ¶ 0046).
Regarding claim 12, HAO discloses identifying a number of bits for reporting the channel state feedback based at least in part on the level of accuracy, wherein the number of bits is directly related to the level of accuracy; and reporting the channel state feedback corresponding to the level of accuracy with the identified number of bits (the network device 110 obtains the CSI from a plurality of terminal devices 120 and an indication of a plurality of CSI accuracies corresponding to the CSI. The CSI accuracy may be indicated by an index of 1-3 bits. In this way, the network device 110 may learn the accuracy level of each received CSI, see ¶ 0044).
Regarding claim 14, HAO discloses receiving the indication of the level of accuracy comprises: receiving the indication of the level of accuracy in radio resource control (RRC) signaling or in a media access control (MAC) control element (MAC-CE) (the network device 110 may also transmit the indication to the terminal device 120 in higher layer signaling, such as radio resource control (RRC) layer signaling, for instance, see ¶ 0044).
Regarding claim 15, HAO discloses a method for wireless communication at a base station, comprising:
transmitting, to a user equipment (UE), an indication of a level of accuracy for reporting channel state feedback to the base station (the network device 110 may transmit the indication of a required CSI accuracy to each terminal device 120, see ¶ 0044);
transmitting downlink data or reference signals to the UE (the network device 110 may require the terminal device 120 to feed the CSI with the higher accuracy, for example, the CSI with an accuracy higher than a threshold accuracy (referred to as a “third threshold accuracy”), see ¶ 0045); and
receiving channel state feedback from the UE corresponding to the level of accuracy based at least in part on a transmission of the downlink data or reference signals to the UE (the terminal device 120 may feed the indicated accuracy level of the CSI back to the network device 110, see ¶ 0044).
HAO fails to disclose the UE receiving, from the base station, a loss function for training a neural network pair comprising a first neural network at an encoder for encoding and a second neural network at a decoder for decoding.
In the same field of endeavor, WANG discloses the UE comprising a CRM 212 that includes a neural network table 216 and a UE neural network manager 218 (see figure 2), where the UE neural network manager 218 of the UE 110 includes a downlink processing module 606, and the downlink processing module 606 includes deep neural network(s) 608 (DNNs 608) for processing received downlink communications. In various implementations, the UE neural network manager 218 forms the DNNs 608 using NN formation configurations. In FIG. 6, the DNNs 608 correspond to the DNNs 514 of FIG. 5, where the deep neural network(s) 608 of the UE 110 perform some or all receiver processing functionality for received downlink communications. Accordingly, the DNNs 604 and the DNNs 608 perform complementary processing to one another (e.g., encoding/decoding, modulating/demodulating) (see figure 6 and ¶ 0093). The DNNs are trained based on the configuration received from the base station 120 (see ¶ 0044).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to implement WANG’s teaching in the network taught by HAO because the use of neural networks for encoding and decoding at the user equipment (UE), such as smartphones and IoT devices, optimizes physical layer communications and enables efficient distributed machine learning.
Regarding claim 16, WANG discloses training the neural network pair using the loss function (CRM 262 includes training module 270 and neural network table 272; in implementations, the base station 120 manages and deploys NN formation configurations to the UE 110, see ¶ 0044; the base station 120 communicates multiple neural network formation configurations to the UE 110. For example, the base station transmits a first message that directs the UE to use a first neural network formation configuration for uplink encoding, and a second message that directs the UE to use a second neural network formation configuration for downlink decoding. In some scenarios, the base station 120 communicates multiple neural network formation configurations, and the respective processing assignments, in a single message, see ¶ 0104-0105).
Regarding claim 25, HAO discloses an apparatus for wireless communication at a user equipment (UE), comprising:
means for receiving, from a base station, an indication of a level of accuracy for reporting channel state feedback to the base station (the network device 110 may transmit the indication of a required CSI accuracy to each terminal device 120, see ¶ 0044);
means for receiving downlink data or reference signals from the base station (the network device 110 may require the terminal device 120 to feed the CSI with the higher accuracy, for example, the CSI with an accuracy higher than a threshold accuracy (referred to as a “third threshold accuracy”), see ¶ 0045); and
means for reporting the channel state feedback to the base station corresponding to the level of accuracy based at least in part on the downlink data or the reference signals (the terminal device 120 may feed the indicated accuracy level of the CSI back to the network device 110, see ¶ 0044).
HAO fails to disclose the UE receiving, from the base station, a loss function for training a neural network pair comprising a first neural network at an encoder for encoding and a second neural network at a decoder for decoding.
In the same field of endeavor, WANG discloses the UE comprising a CRM 212 that includes a neural network table 216 and a UE neural network manager 218 (see figure 2), where the UE neural network manager 218 of the UE 110 includes a downlink processing module 606, and the downlink processing module 606 includes deep neural network(s) 608 (DNNs 608) for processing received downlink communications. In various implementations, the UE neural network manager 218 forms the DNNs 608 using NN formation configurations. In FIG. 6, the DNNs 608 correspond to the DNNs 514 of FIG. 5, where the deep neural network(s) 608 of the UE 110 perform some or all receiver processing functionality for received downlink communications. Accordingly, the DNNs 604 and the DNNs 608 perform complementary processing to one another (e.g., encoding/decoding, modulating/demodulating) (see figure 6 and ¶ 0093). The DNNs are trained based on the configuration received from the base station 120 (see ¶ 0044).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to implement WANG’s teaching in the network taught by HAO because the use of neural networks for encoding and decoding at the user equipment (UE), such as smartphones and IoT devices, optimizes physical layer communications and enables efficient distributed machine learning.
Regarding claim 33, HAO discloses an apparatus for wireless communication at a user equipment (UE), comprising: a processor, memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to:
receive, from a base station, an indication of a level of accuracy for reporting channel state feedback to the base station (the network device 110 may transmit the indication of a required CSI accuracy to each terminal device 120, see ¶ 0044);
receive downlink data or reference signals from the base station (the network device 110 may require the terminal device 120 to feed the CSI with the higher accuracy, for example, the CSI with an accuracy higher than a threshold accuracy (referred to as a “third threshold accuracy”), see ¶ 0045); and
report the channel state feedback to the base station corresponding to the level of accuracy based at least in part on the downlink data or the reference signals (the terminal device 120 may feed the indicated accuracy level of the CSI back to the network device 110, see ¶ 0044).
HAO fails to disclose the UE receiving, from the base station, a loss function for training a neural network pair comprising a first neural network at an encoder for encoding and a second neural network at a decoder for decoding.
In the same field of endeavor, WANG discloses the UE comprising a CRM 212 that includes a neural network table 216 and a UE neural network manager 218 (see figure 2), where the UE neural network manager 218 of the UE 110 includes a downlink processing module 606, and the downlink processing module 606 includes deep neural network(s) 608 (DNNs 608) for processing received downlink communications. In various implementations, the UE neural network manager 218 forms the DNNs 608 using NN formation configurations. In FIG. 6, the DNNs 608 correspond to the DNNs 514 of FIG. 5, where the deep neural network(s) 608 of the UE 110 perform some or all receiver processing functionality for received downlink communications. Accordingly, the DNNs 604 and the DNNs 608 perform complementary processing to one another (e.g., encoding/decoding, modulating/demodulating) (see figure 6 and ¶ 0093). The DNNs are trained based on the configuration received from the base station 120 (see ¶ 0044).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to implement WANG’s teaching in the network taught by HAO because the use of neural networks for encoding and decoding at the user equipment (UE), such as smartphones and IoT devices, optimizes physical layer communications and enables efficient distributed machine learning.
Regarding claim 37, WANG discloses training the neural network pair using the loss function (CRM 262 includes training module 270 and neural network table 272; in implementations, the base station 120 manages and deploys NN formation configurations to the UE 110, see ¶ 0044).
Regarding claim 39, WANG discloses encoding the channel state feedback using the first neural network at the encoder based at least in part on the training; and reporting the encoded channel state feedback (the DNNs 604 and the DNNs 608 perform complementary processing to one another (e.g., encoding/decoding, modulating/demodulating), see ¶ 0093; the DNNs in the UE are trained based on training data, see ¶ 0038; and the UE transmits feedback to the BS, see ¶ 0043).
Regarding claim 41, WANG discloses receiving, from the base station, an indication to train a plurality of neural network pairs based at least in part on a plurality of levels of accuracy, the plurality of neural network pairs comprising the neural network pair; and training each of the plurality of neural network pairs based at least in part on a respective level of accuracy of the plurality of levels of accuracy (CRM 262 includes training module 270 and neural network table 272; in implementations, the base station 120 manages and deploys NN formation configurations to the UE 110, see ¶ 0044; the base station neural network manager 268 transmits training data to the UE for the supervised learning, see ¶ 0110).
Regarding claim 42, WANG discloses receive an indication to use the neural network pair of the plurality of neural network pairs for reporting the channel state feedback (the UE transmits the CSI, CQI, or channel condition using an NN formation configuration, see ¶ 0046, 0055, 0170, 0183).
Regarding claim 45, HAO discloses the level of accuracy is based at least in part on one or more of a subband, spatial layer, or channel tap to which the channel state feedback corresponds, the method further comprising: receiving data from the base station on the subband or spatial layer or in accordance with the channel tap based at least in part on reporting the channel state feedback corresponding to the level of accuracy (the network device 110 divides the whole operating bandwidth into one or more frequency band ranges. The specific value of the first threshold accuracy may be determined based on the actual requirement of the system performance (such as Signal to Noise Ratio). Any suitable number of frequency band ranges with any bandwidth may be employed, including, for example, the whole frequency band on which the network device 110 operates, a partial frequency band, or a single frequency band (or sub-band), see ¶ 0046).
Regarding claim 47, HAO discloses identify a number of bits for reporting the channel state feedback based at least in part on the level of accuracy, wherein the number of bits is directly related to the level of accuracy; and report the channel state feedback corresponding to the level of accuracy with the identified number of bits (the network device 110 obtains the CSI from a plurality of terminal devices 120 and an indication of a plurality of CSI accuracies corresponding to the CSI. The CSI accuracy may be indicated by an index of 1-3 bits. In this way, the network device 110 may learn the accuracy level of each received CSI, see ¶ 0044).
Regarding claim 48, HAO discloses receive an indication of the number of bits for reporting the channel state feedback based at least in part on the level of accuracy (The CSI accuracy may be indicated by an index of 1-3 bits. In this way, the network device 110 may learn the accuracy level of each received CSI …the network device 110 may transmit the indication of a required CSI accuracy to each terminal device 120. For example, the network device 110 may transmit the indication to each terminal device 120 in the physical downlink control channel (PDCCH), see ¶ 0044).
Regarding claim 49, HAO discloses receiving the indication of the level of accuracy comprises: receiving the indication of the level of accuracy in radio resource control (RRC) signaling or in a media access control (MAC) control element (MAC-CE) (the network device 110 may also transmit the indication to the terminal device 120 in higher layer signaling, such as radio resource control (RRC) layer signaling, for instance, see ¶ 0044).
Regarding claim 50, HAO discloses an apparatus for wireless communication at a base station, comprising: a processor, memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to:
transmit, to a user equipment (UE), an indication of a level of accuracy for reporting channel state feedback to the base station (the network device 110 may transmit the indication of a required CSI accuracy to each terminal device 120, see ¶ 0044);
transmit downlink data or reference signals to the UE (the network device 110 may require the terminal device 120 to feed the CSI with the higher accuracy, for example, the CSI with an accuracy higher than a threshold accuracy (referred to as a “third threshold accuracy”), see ¶ 0045); and
receive channel state feedback from the UE corresponding to the level of accuracy based at least in part on a transmission of the downlink data or reference signals to the UE (the terminal device 120 may feed the indicated accuracy level of the CSI back to the network device 110, see ¶ 0044).
HAO fails to disclose the UE receiving, from the base station, a loss function for training a neural network pair comprising a first neural network at an encoder for encoding and a second neural network at a decoder for decoding.
In the same field of endeavor, WANG discloses the UE comprising a CRM 212 that includes a neural network table 216 and a UE neural network manager 218 (see figure 2), where the UE neural network manager 218 of the UE 110 includes a downlink processing module 606, and the downlink processing module 606 includes deep neural network(s) 608 (DNNs 608) for processing received downlink communications. In various implementations, the UE neural network manager 218 forms the DNNs 608 using NN formation configurations. In FIG. 6, the DNNs 608 correspond to the DNNs 514 of FIG. 5, where the deep neural network(s) 608 of the UE 110 perform some or all receiver processing functionality for received downlink communications. Accordingly, the DNNs 604 and the DNNs 608 perform complementary processing to one another (e.g., encoding/decoding, modulating/demodulating) (see figure 6 and ¶ 0093). The DNNs are trained based on the configuration received from the base station 120 (see ¶ 0044).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to implement WANG’s teaching in the network taught by HAO because the use of neural networks for encoding and decoding at the user equipment (UE), such as smartphones and IoT devices, optimizes physical layer communications and enables efficient distributed machine learning.
Regarding claim 51, WANG discloses training the neural network pair using the loss function (CRM 262 includes training module 270 and neural network table 272; in implementations, the base station 120 manages and deploys NN formation configurations to the UE 110, see ¶ 0044; the base station 120 communicates multiple neural network formation configurations to the UE 110. For example, the base station transmits a first message that directs the UE to use a first neural network formation configuration for uplink encoding, and a second message that directs the UE to use a second neural network formation configuration for downlink decoding. In some scenarios, the base station 120 communicates multiple neural network formation configurations, and the respective processing assignments, in a single message, see ¶ 0104-0105).
Regarding claim 53, WANG discloses receiving, from the base station, an indication to train a plurality of neural network pairs based at least in part on a plurality of levels of accuracy, the plurality of neural network pairs comprising the neural network pair; and training each of the plurality of neural network pairs based at least in part on a respective level of accuracy of the plurality of levels of accuracy (CRM 262 includes training module 270 and neural network table 272; in implementations, the base station 120 manages and deploys NN formation configurations to the UE 110, see ¶ 0044; the base station neural network manager 268 transmits training data to the UE for the supervised learning, see ¶ 0110).
Regarding claim 54, HAO discloses transmit indications of different levels of accuracy for reporting channel state feedback for different subbands, spatial layers, channel taps, or in response to failing to decode different numbers of downlink transmissions comprising same data (the network device 110 divides the whole operating bandwidth into one or more frequency band ranges. The specific value of the first threshold accuracy may be determined based on the actual requirement of the system performance (such as Signal to Noise Ratio). Any suitable number of frequency band ranges with any bandwidth may be employed, including, for example, the whole frequency band on which the network device 110 operates, a partial frequency band, or a single frequency band (or sub-band), see ¶ 0046).
Regarding claim 56, HAO discloses the apparatus transmit an indication of a number of bits for the UE to use to report the channel state feedback based at least in part on the level of accuracy, wherein the number of bits is directly related to the level of accuracy; and receive the channel state feedback corresponding to the level of accuracy with the number of bits (the network device 110 obtains the CSI from a plurality of terminal devices 120 and an indication of a plurality of CSI accuracies corresponding to the CSI. The CSI accuracy may be indicated by an index of 1-3 bits. In this way, the network device 110 may learn the accuracy level of each received CSI, see ¶ 0044).
Claim(s) 5, 17, 40 and 52 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of HAO-WANG in view of WANG et al. (US 2021/0342687 A1), hereinafter WANG ‘687.
Regarding claims 5, 17, 40 and 52, the combination of HAO-WANG fails to explicitly disclose receiving, from the UE, coefficients for a neural network at a decoder for decoding the channel state feedback from the UE; and decoding the channel state feedback from the UE using the neural network at the decoder.
In the same field of endeavor, WANG ‘687 discloses receiving, from the UE, metric 825 (coefficients) for a neural network at a decoder for decoding the channel state feedback from the UE; and decoding the channel state feedback from the UE using the neural network at the decoder (see figure 8 and ¶ 0115).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to implement WANG ‘687’s teaching in the network taught by the combination of HAO-WANG to adapt to a changing environment through automation, providing the ability to process information rapidly and simultaneously while reducing potential human error.
Claim(s) 11, 46 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of HAO-WANG in view of KIM et al. (US 2022/0158773 A1), hereinafter KIM.
Regarding claims 11 and 46, the combination of HAO-WANG fails to disclose receiving a retransmission of the same data that the UE failed to decode based at least in part on reporting the channel state feedback corresponding to the level of accuracy.
In the same field of endeavor, KIM discloses that when the receiving STA fails to decode the received data, the receiving STA receives the data retransmitted from the transmitting STA within the time limit after transmitting a NACK to the transmitting STA (see ¶ 0288).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to implement KIM’s teaching in the network taught by the combination of HAO-WANG to provide network reliability.
Claim(s) 41-44 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of HAO-WANG in view of GE et al. (US 10,785,681 B1), hereinafter GE.
Regarding claims 41-44, the combination of HAO-WANG fails to disclose receiving, from the base station, an indication to train a plurality of neural network pairs based at least in part on a plurality of levels of accuracy, the plurality of neural network pairs comprising the neural network pair; and training each of the plurality of neural network pairs based at least in part on a respective level of accuracy of the plurality of levels of accuracy.
In the same field of endeavor, GE discloses an apparatus comprising a plurality of DNN pairs, each DNN pair providing a respective different compression ratio, where the encoder DNN and decoder DNN of a pair are selected and trained together (see claims 1-4).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to implement GE’s teaching in the network taught by the combination of HAO-WANG to provide the user with multiple options for the level of data accuracy and to select the DNN pair based on the user’s needs.
Allowable Subject Matter
Claims 3, 38, 55 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-3, 5-6, 10-12, 14-17, 25, 33, 37-56 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any response to this action should be mailed to:
The following address mail to be delivered by the United States Postal Service (USPS) only:
Mail Stop _____________
Commissioner for Patents
P. O. Box 1450
Alexandria, VA 22313-1450
or faxed to:
(571) 273-8300, (for formal communications intended for entry)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bob A. Phunkulh whose telephone number is (571) 272-3083. The examiner can normally be reached on Monday-Thursday from 8:00 A.M. to 5:00 P.M. (first week of the bi-week) and Monday-Friday (for second week of the bi-week).
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, CHARLES C. JIANG, can be reached at (571) 270-7191.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/BOB A PHUNKULH/Primary Examiner, Art Unit 2412