DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12/15/2025 have been fully considered but they are not persuasive. Garcia et al. (US 2021/0218460) teaches the plurality of training data sets as well as the input and output data corresponding to a channel realization and an optimal user selection, respectively.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 19-36, and 42-43 are rejected under 35 U.S.C. 103 as being unpatentable over Garcia et al. (US 2021/0218460) in view of Wang et al. (US 2021/0064996).
Regarding claim 1, Garcia discloses a method of training a neural network to select users for multi user multiple-input multiple-output (MU-MIMO) communication from a set of potential users, the method comprising ([0041], “Each deep neural network (DNN) may accept as an input the quantities for calculating priority metrics (PM) and is trained to maximize the MU-PM or a heuristic of the MU-PM. Because of a DNN's parallel architecture and operational simplicity, an embodiment of the DNN-based scheme can quickly and efficiently calculate a MU-MIMO beam selection and user pairing that can outperform conventional heuristic and combinatorial-search schemes.”): providing, to the neural network ([0088], “An embodiment provides DQN Neural Network Training Sample Generation. It may be important to train. DQN network with the right training samples.”), a plurality of training data sets ([0089], “Several policies may be used to create training samples. For example, the policies may include an exhaustive search policy, multi-user greedy policy, CBI-free greedy policy, or random-greedy hybrid policy.”), each training data set comprising input data corresponding to a channel realization and output data corresponding to an optimal user selection for the channel realization ([0042], “As depicted in FIG. 1, the system may include a user priority metric calculator 110, a beam priority metric calculator 120, DNN beam selector(s) 125, non-DNN beam selector(s) 130, a best beam selection evaluator 140, and/or a user selector 150. In the example of FIG. 1, the input to the system may include per-user quantities for calculating PM, and the output may include a user pairing or remaining candidate users.”). Garcia does not disclose the branch weights of the neural network. 
Wang discloses and controlling the neural network to analyze the plurality of training data sets to determine a branch weight for each association between neurons of neighboring layers of the neural network ([0079], “Node 410 corresponds to one of several nodes included in input layer 404, where the nodes perform independent computations from one another. As further described, a node receives input data, and processes the input data using algorithm(s) to produce output data. At times, the algorithm(s) include weights and/or coefficients that change based on adaptive learning. Thus, the weights and/or coefficients reflect information learned by the neural network.”), wherein the branch weight is for provision of the output data responsive to the input data ([0079], “Each node can, in some cases, determine whether to pass the processed input data to the next node(s). To illustrate, after processing input data, node 410 can determine whether to pass the processed input data to node 412 and/or node 414 of hidden layer(s) 408. Alternately or additionally, node 410 passes the processed input data to nodes based upon a layer connection architecture. This process can repeat throughout multiple layers until the DNN generates an output using the nodes of output layer 406.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Garcia in view of Wang to have the branch weights of the neural network. The motivation would have been to improve accuracy and lower errors (e.g., Wang [0066]).
Regarding claim 2, Garcia discloses a method performed by a neural network, wherein the method is a training method configuring the neural network for selection of users for multi user multiple-input multiple-output (MU-MIMO) communication from a set of potential users, the method comprising ([0041], “Each deep neural network (DNN) may accept as an input the quantities for calculating priority metrics (PM) and is trained to maximize the MU-PM or a heuristic of the MU-PM. Because of a DNN's parallel architecture and operational simplicity, an embodiment of the DNN-based scheme can quickly and efficiently calculate a MU-MIMO beam selection and user pairing that can outperform conventional heuristic and combinatorial-search schemes.”): receiving ([0088], “An embodiment provides DQN Neural Network Training Sample Generation. It may be important to train. DQN network with the right training samples.”) a plurality of training data sets ([0089], “Several policies may be used to create training samples. For example, the policies may include an exhaustive search policy, multi-user greedy policy, CBI-free greedy policy, or random-greedy hybrid policy.”), each training data set comprising input data corresponding to a channel realization and output data corresponding to an optimal user selection for the channel realization ([0042], “As depicted in FIG. 1, the system may include a user priority metric calculator 110, a beam priority metric calculator 120, DNN beam selector(s) 125, non-DNN beam selector(s) 130, a best beam selection evaluator 140, and/or a user selector 150. In the example of FIG. 1, the input to the system may include per-user quantities for calculating PM, and the output may include a user pairing or remaining candidate users.”). Garcia does not disclose the branch weights of the neural network. 
Wang discloses and analyzing the plurality of training data sets to determine a branch weight for each association between neurons of neighboring layers of the neural network ([0079], “Node 410 corresponds to one of several nodes included in input layer 404, where the nodes perform independent computations from one another. As further described, a node receives input data, and processes the input data using algorithm(s) to produce output data. At times, the algorithm(s) include weights and/or coefficients that change based on adaptive learning. Thus, the weights and/or coefficients reflect information learned by the neural network.”), wherein the branch weight is for provision of the output data responsive to the input data ([0079], “Each node can, in some cases, determine whether to pass the processed input data to the next node(s). To illustrate, after processing input data, node 410 can determine whether to pass the processed input data to node 412 and/or node 414 of hidden layer(s) 408. Alternately or additionally, node 410 passes the processed input data to nodes based upon a layer connection architecture. This process can repeat throughout multiple layers until the DNN generates an output using the nodes of output layer 406.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Garcia in view of Wang to have the branch weights of the neural network. The motivation would have been to improve accuracy and lower errors (e.g., Wang [0066]).
Regarding claim 19, Garcia discloses a computer program product comprising a non-transitory computer readable medium ([0109], "For example, memory 14 can be comprised of any combination of random access memory (RAM) 54, read only memory (ROM) 44, non-volatile memory, static storage such as a magnetic or optical disk, hard disk drive (HDD), or any other type of non-transitory machine or computer readable media."), having thereon a computer program comprising program instructions, the computer program being loadable into a data processing unit and configured to cause execution of the method of claim 1 when the computer program is run by the data processing unit ([0109], "Memory 14 and/or media 64 may store software, computer program code or instructions. The instructions stored in memory 14 or media 64 may include program instructions or computer program code that, when executed by processor 12, enable the apparatus 10 to perform tasks as described herein.").
Regarding claim 20, Garcia discloses an apparatus for training of a neural network to select users for multi user multiple-input multiple-output (MU-MIMO) communication from a set of potential users ([0041], "Each deep neural network (DNN) may accept as an input the quantities for calculating priority metrics (PM) and is trained to maximize the MU-PM or a heuristic of the MU-PM. Because of a DNN's parallel architecture and operational simplicity, an embodiment of the DNN-based scheme can quickly and efficiently calculate a MU-MIMO beam selection and user pairing that can outperform conventional heuristic and combinatorial-search schemes."), the apparatus comprising controlling circuitry configured to cause ([0109], "Apparatus 10 may further include or be coupled to at least one memory 14 (internal or external), which may be coupled to processor 12, for storing information and instructions that may be executed by processor 12."): provision, to the neural network ([0088], “An embodiment provides DQN Neural Network Training Sample Generation. It may be important to train. DQN network with the right training samples.”), of a plurality of training data sets ([0089], “Several policies may be used to create training samples. For example, the policies may include an exhaustive search policy, multi-user greedy policy, CBI-free greedy policy, or random-greedy hybrid policy.”), each training data set comprising input data corresponding to a channel realization and output data corresponding to an optimal user selection for the channel realization ([0042], “As depicted in FIG. 1, the system may include a user priority metric calculator 110, a beam priority metric calculator 120, DNN beam selector(s) 125, non-DNN beam selector(s) 130, a best beam selection evaluator 140, and/or a user selector 150. In the example of FIG. 1, the input to the system may include per-user quantities for calculating PM, and the output may include a user pairing or remaining candidate users.”). 
Garcia does not disclose the branch weights of the neural network. Wang discloses and control of the neural network for causing the neural network to analyze the plurality of training data sets to determine a branch weight for each association between neurons of neighboring layers of the neural network ([0079], “Node 410 corresponds to one of several nodes included in input layer 404, where the nodes perform independent computations from one another. As further described, a node receives input data, and processes the input data using algorithm(s) to produce output data. At times, the algorithm(s) include weights and/or coefficients that change based on adaptive learning. Thus, the weights and/or coefficients reflect information learned by the neural network.”), wherein the branch weight is for provision of the output data responsive to the input data ([0079], “Each node can, in some cases, determine whether to pass the processed input data to the next node(s). To illustrate, after processing input data, node 410 can determine whether to pass the processed input data to node 412 and/or node 414 of hidden layer(s) 408. Alternately or additionally, node 410 passes the processed input data to nodes based upon a layer connection architecture. This process can repeat throughout multiple layers until the DNN generates an output using the nodes of output layer 406.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Garcia in view of Wang to have the branch weights of the neural network. The motivation would have been to improve accuracy and lower errors (e.g., Wang [0066]).
Regarding claim 21, Garcia discloses the apparatus of claim 20, wherein the input data comprises a channel correlation metric of the channel realization for each user in the set of potential users ([0045], “FIG. 2 illustrates an example embodiment of a system, which includes a user priority metric calculator 110 as introduced above. A user priority metric (UPM) informs the scheduler of the relative priority of a user in scheduling. As an example, if only one user can be selected for a resource, then user A will be selected instead of user B when user A has a higher UPM value compared to user B.”).
Regarding claim 22, Garcia discloses the apparatus of claim 21, wherein the channel correlation metric for a user comprises: a channel filter norm for the user; a channel norm for the user; a channel gain for the user; pair-wise correlations between the user and one or more other users of the set of potential users; and/or a channel eigenvalue for the user ([0045], “The UPM for the uth user (ρ.sub.u,L.sup.u(α)) is a function of the user channel state information (CSI) (σ), the user Quality-of-Service (QoS) metric κ, the user average throughput (r.sup.ave), the total crossbeam interference ratio (TCBI) (α), the power splitting factor ψ.sub.L based on the number of co-scheduled users, and the number of resource units allocated to the user (w.sub.u), for example”).
Regarding claim 23, Garcia discloses the apparatus of claim 21, wherein an input layer of the neural network comprises one neuron per element of the channel correlation metric ([0060], “Examples of the architecture of the SSDBS DNN 320 may include fully-connected NNs, Convolutional NNs (CNN), Recurrent NNs, etc. FIG. 4 illustrates an example of a fully-connected NN case, where the nodes are neurons and the arrows are connections between neurons. Each neuron performs a linear or differentiable non-linear transformation to its input and each connection linearly scales its input.”).
Regarding claim 24, Garcia discloses the apparatus of claim 20, wherein an output layer of the neural network comprises one neuron per selection alternative ([0059], “The SSDBS DNN 320 may output the beam neural network (NN) metrics (y.sub.1, . . . , y.sub.B), which represent either the normalized selection probabilities or the Bellman Q-values of the beams.”).
Regarding claim 25, Garcia discloses the apparatus of claim 24, wherein a selection alternative refers to whether a particular user is selected, or whether a particular collection of users are selected ([0059], “Optionally, the SSDBS DNN 320 can output the layer rank metric probabilities λ.sub.1, . . . , λ.sub.Λ to indicate the number of user layers that the beam discriminator should select.”).
Regarding claim 26, Garcia does not disclose the output data comprising a vector with one element per neuron. Wang discloses the apparatus of claim 24, wherein the output data comprises a vector with one element per neuron of the output layer ([0083], "A kernel parameter indicates a filter size (e.g., a width and height) to use in processing input data. Alternately or additionally, the kernel parameter specifies a type of kernel method used in filtering and processing the input data. A support vector machine, for instance, corresponds to a kernel method that uses regression analysis to identify and/or classify data. Other types of kernel methods include Gaussian processes, canonical correlation analysis, spectral clustering methods, and so forth."), wherein each element is assigned a binary value defining whether or not the corresponding selection alternative is true for the optimal user selection ([0075], Output data can be a hard selection as stated in applicant specification [0115]. "The nodes between layers are configurable in a variety of ways, such as a partially-connected configuration where a first subset of nodes in a first layer are connected with a second subset of nodes in a second layer, a fully-connected configuration where each node in a first layer are connected to each node in a second layer, etc. A neuron processes input data to produce a continuous output value, such as any real number between 0 and 1."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Garcia in view of Wang to have the output data comprising a vector with one element per neuron. The motivation would have been to improve accuracy and lower errors (e.g., Wang [0066]).
Regarding claim 27, Garcia discloses the apparatus of claim 20, wherein a number of hidden neurons of the neural network, a number of hidden layers of the neural network, and/or a number of neurons per hidden layer of the neural network ([0060], Depicted in FIG. 4 is the hidden layer. “Examples of the architecture of the SSDBS DNN 320 may include fully-connected NNs, Convolutional NNs (CNN), Recurrent NNs, etc. FIG. 4 illustrates an example of a fully-connected NN case, where the nodes are neurons and the arrows are connections between neurons. Each neuron performs a linear or differentiable non-linear transformation to its input and each connection linearly scales its input.”) is based on a number of users in the set of potential users, a maximum number of un-selected users, and/or a number of MU-MIMO transmit antennas ([0061], “The MBD 330 may perform the selection of one or more beams from the output of the DNN 320. The MBD 330 may ensure that the layer rank (number of scheduled layers) are valid: 1≤L≤Λ and that each selected beams has at least one associated user. The discrimination can be threshold based, or based on the Top-N.”).
Regarding claim 28, Garcia discloses the apparatus of claim 20, wherein the optimal user selection is based on a performance metric of the set of potential users for the channel realization ([0041], “Instead of directly performing the MU-PM calculation and search of candidate beam selections through heuristic or combinatorial schemes, one embodiment uses DNN(s) to perform beam selections. The paired users may then be chosen based on the selected beams. Each deep neural network (DNN) may accept as an input the quantities for calculating priority metrics (PM) and is trained to maximize the MU-PM or a heuristic of the MU-PM.”).
Regarding claim 29, Garcia discloses the apparatus of claim 28, wherein the performance metric comprises: a sum-rate, a per-user-rate, an average error rate, a maximum error rate, a per-user error rate, and/or a sum-correlation ([0042], “In the example of FIG. 1, the input to the system may include per-user quantities for calculating PM, and the output may include a user pairing or remaining candidate users.”).
Regarding claim 30, Garcia discloses the apparatus of claim 28, wherein the optimal user selection has: a highest sum-rate, a highest per-user-rate, a lowest average error rate, a lowest maximum error rate, a lowest per-user error rate, and/or a lowest sum-correlation ([0045], “FIG. 2 illustrates an example embodiment of a system, which includes a user priority metric calculator 110 as introduced above. A user priority metric (UPM) informs the scheduler of the relative priority of a user in scheduling. As an example, if only one user can be selected for a resource, then user A will be selected instead of user B when user A has a higher UPM value compared to user B.”).
Regarding claim 31, Garcia discloses an apparatus for selection of users for multi user multiple-input multiple-output (MU-MIMO) communication from a set of potential users ([0041], "Each deep neural network (DNN) may accept as an input the quantities for calculating priority metrics (PM) and is trained to maximize the MU-PM or a heuristic of the MU-PM. Because of a DNN's parallel architecture and operational simplicity, an embodiment of the DNN-based scheme can quickly and efficiently calculate a MU-MIMO beam selection and user pairing that can outperform conventional heuristic and combinatorial-search schemes."), the apparatus comprising controlling circuitry configured to cause ([0109], "Apparatus 10 may further include or be coupled to at least one memory 14 (internal or external), which may be coupled to processor 12, for storing information and instructions that may be executed by processor 12."): provision, to a neural network ([0088], “An embodiment provides DQN Neural Network Training Sample Generation. It may be important to train. DQN network with the right training samples.”) trained according to the method of claim 1, of input data corresponding to an applicable channel ([0042], “As depicted in FIG. 1, the system may include a user priority metric calculator 110, a beam priority metric calculator 120, DNN beam selector(s) 125, non-DNN beam selector(s) 130, a best beam selection evaluator 140, and/or a user selector 150. In the example of FIG. 1, the input to the system may include per-user quantities for calculating PM, and the output may include a user pairing or remaining candidate users.”); reception, from the neural network, of output data comprising a user selection indication ([0059], “The SSDBS DNN 320 may output the beam neural network (NN) metrics (y.sub.1, . . . , y.sub.B), which represent either the normalized selection probabilities or the Bellman Q-values of the beams. Optionally, the SSDBS DNN 320 can output the layer rank metric probabilities λ.sub.1, . . . , λ.sub.Λ to indicate the number of user layers that the beam discriminator should select.”); and selection of users based on the user selection indication ([0076], “The user selector 150 may determine the set of users from the final beam selection by either discarding the un-associated users or selecting the best user of each beam. In some use cases, the scheduler may provide the user pairing from the final beam selection, wherein there is a single user assigned for the beam.”).
Regarding claim 32, Garcia discloses the apparatus of claim 31, wherein the input data comprises a channel correlation metric of the applicable channel for each user in the set of potential users ([0045], “FIG. 2 illustrates an example embodiment of a system, which includes a user priority metric calculator 110 as introduced above. A user priority metric (UPM) informs the scheduler of the relative priority of a user in scheduling. As an example, if only one user can be selected for a resource, then user A will be selected instead of user B when user A has a higher UPM value compared to user B.”).
Regarding claim 33, Garcia discloses the apparatus of claim 32, wherein the channel correlation metric for a user comprises: a channel filter norm for the user; a channel norm for the user; a channel gain for the user; pair-wise correlations between the user and one or more other users of the set of potential users; and/or a channel eigenvalue for the user ([0045], “The UPM for the uth user (ρ.sub.u,L.sup.u(α)) is a function of the user channel state information (CSI) (σ), the user Quality-of-Service (QoS) metric κ, the user average throughput (r.sup.ave), the total crossbeam interference ratio (TCBI) (α), the power splitting factor ψ.sub.L based on the number of co-scheduled users, and the number of resource units allocated to the user (w.sub.u), for example”).
Regarding claim 34, Garcia does not disclose the user device being single or multi-antenna. Wang discloses the apparatus of claim 20, wherein a user corresponds to a single-antenna user device or to an antenna of a multi-antenna user device ([0051], "The antennas 202 of the user equipment 110 may include an array of multiple antennas that are configured similar to or differently from each other."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Garcia in view of Wang to have the user device being single or multi-antenna. The motivation would have been to improve accuracy and lower errors (e.g., Wang [0066]).
Regarding claim 35, Garcia does not disclose the power control. Wang discloses the apparatus of claim 20, wherein the MU-MIMO applies max-min power control ([0069], "the NN formation configurations and/or NN formation configuration elements stored at the neural network table 216 at the UE 110 include more fixed architecture and/or parameter configurations, relative to those stored in the neural network table 316 and/or the neural network table 272, that reduce requirements (e.g., computation speed, less data processing points, less computations, less power consumption, etc.) at the UE 110 relative to the base station 121 and/or the core network server 302."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Garcia in view of Wang to have the power control. The motivation would have been to improve accuracy and lower errors (e.g., Wang [0066]).
Regarding claim 36, Garcia does not disclose the training of the neural network comprising machine learning. Wang discloses the apparatus of claim 20, wherein the training of the neural network to select users for MU-MIMO communication from a set of potential users comprises machine learning ([0166], "Machine-learning module 400 analyzes the training data, and generates an output 1206, represented here as binary data. Some implementations iteratively train the machine-learning module 400 using the same set of training data and/or additional training data that has the same input characteristics to improve the accuracy of the machine-learning module."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Garcia in view of Wang to have the training of the neural network comprise machine learning. The motivation would have been to improve accuracy and lower errors (e.g., Wang [0066]).
Regarding claim 42, Garcia does not disclose the transmit power information in the input data. Wang discloses the method of claim 1, wherein the input data comprises: a pair-wise spatial correlation between users; transmit power; and/or a maximum number of users that can be dropped ([0061], “For instance, the input characteristics includes, by way of example and not of limitation, power information, signal-to-interference-plus-noise ratio (SINR) information, channel quality indicator (CQI) information, channel state information (CSI), Doppler feedback, frequency bands, BLock Error Rate (BLER), Quality of Service (QoS), Hybrid Automatic Repeat reQuest (HARQ) information (e.g., first transmission error rate, second transmission error rate, maximum retransmissions), latency, Radio Link Control (RLC), Automatic Repeat reQuest (ARQ) metrics, received signal strength (RSS), uplink SINR, timing measurements, error metrics, UE capabilities, BS capabilities, power mode, Internet Protocol (IP) layer throughput, end2end latency, end2end packet loss ratio, etc.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Garcia in view of Wang to have the transmit power information in the input data. The motivation would have been to improve accuracy and lower errors (e.g., Wang [0066]).
Regarding claim 43, Garcia discloses the method of claim 1, wherein the method further comprises: for a first channel realization ([0042], “As depicted in FIG. 1, the system may include a user priority metric calculator 110, a beam priority metric calculator 120, DNN beam selector(s) 125, non-DNN beam selector(s) 130, a best beam selection evaluator 140, and/or a user selector 150. In the example of FIG. 1, the input to the system may include per-user quantities for calculating PM, and the output may include a user pairing or remaining candidate users.”), determining a first user selection by performing an exhaustive search among a plurality of candidate user selections ([0089], “Several policies may be used to create training samples. For example, the policies may include an exhaustive search policy, multi-user greedy policy, CBI-free greedy policy, or random-greedy hybrid policy. In an exhaustive search policy, training samples are generated using exhaustive search of all beam combinations that produce maximum sum of BPMs as defined in (18).”); and for a second channel realization, determining a second user selection by performing an exhaustive search among a plurality of candidate user selections ([0089], “In an exhaustive search policy, training samples are generated using exhaustive search of all beam combinations that produce maximum sum of BPMs as defined in (18).”), the plurality of training data sets comprises a first training data set and a second training data set ([0090], “Once training samples are generated, they may be divided into mini-batches, and DQN is incrementally trained with these mini-batches. The size of the mini-batches may be configurable.”), the first training data set comprises first input data corresponding to the first channel realization and first output data corresponding to the determined first user selection, and the second training data set comprises second input data corresponding to the second channel realization and second output data corresponding to the determined second user selection ([0102], “According to an embodiment, the method may include, at 960, selecting the paired users based on the final beam selection. In one embodiment, the selecting 960 may include determining a set of users from the final beam selection by at least one of discarding un-associated users or selecting a best user of each beam. In certain embodiments, when the user pairing from the final beam selection is provided and there is a single user assigned for the selected beam, the lth paired user (u.sub.l*) is the PM-maximizing user of the lth beam:”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nick A Sundara whose telephone number is (571)272-6749. The examiner can normally be reached M-TH 7:30-5:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jae Y. Lee can be reached at (571) 270-3936. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICK ANON SUNDARA/
Examiner, Art Unit 2479

/JAE Y LEE/
Supervisory Patent Examiner, Art Unit 2479