DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 7-9, and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 2021/0273707, “Yoo”) in view of Yoo et al. (US 2021/0264255, “Yoo255”) and Larsson et al. (US 2011/0268208, “Larsson”).
Examiner’s note: in what follows, citations refer to Yoo unless otherwise noted.
Yoo discloses the following features:
With respect to independent claims:
Regarding claim 1, a method for reporting, by a terminal, channel state information (CSI) in a wireless communication system, the method comprising:
receiving, from a base station, a pilot signal related to calculation of a quantization rule ([0077] “a CSI transmitting device (e.g., UE) may perform the training, obtain the encoder and decoder weights, use the encoder weights, and provide the decoder weights to a base station with a decoder. For example, based at least in part on a minimum quantity of DL channel observations (e.g., from CSI-RSs), the UE may determine encoder weights θ and decoder weights ϕ from a trained neural network model.”, and [0059] “CSI is compactly encoded using a precoding codebook and coarse quantization before being transmitted back to the base station.”),
wherein the quantization rule is determined based on an empirical distribution of an encoder neural network output of the terminal (See aforesaid [0077] “weights θ and decoder weights ϕ from a trained neural network model”, and [0068] “a device may train a neural network model to determine the encoder and decoder weights. The device may train the neural network model by encoding a CSI instance into encoded CSI with a CSI encoder, decoding the encoded CSI into decoded CSI with a CSI decoder, and comparing the CSI instance and the decoded CSI.”);
transmitting, to the base station, quantization rule information related to the quantization rule calculated based on the pilot signal ([0067] “The gNB may receive the encoded CSI, and CSI decoder 420 may decode the encoded CSI into decoded CSI using decoder parameters 425. Decoder parameters 425 may include decoder weights obtained from machine learning, such as from the training of the neural network model associated with a CSI encoder and a CSI decoder.”);
receiving, from the base station, information on a gradient calculated based on the quantization rule information (This will be discussed in view of Yoo255.),
wherein the quantization rule information includes information on an empirically calculated variance with respect to the empirical distribution of the encoder neural network output (This will be discussed in view of Larsson.).
It is noted that while disclosing generating CSI using a machine learning model, Yoo does not specifically teach information on a gradient or information on an empirically calculated variance. These features, however, were known in the art before the effective filing date of the claimed invention, as shown by Yoo255 and Larsson as follows:
receiving, from the base station, information on a gradient calculated based on the quantization rule information ([Yoo255, 0135] “the process 1000 may include updating weights of the transmitter neural network based on the payload gradient (block 1040). For example, the base station (e.g., using the controller/processor 240, the SOC 300, memory 242, and/or the like) can update weights of the transmitter neural network.”),
the quantization rule information includes information on an empirically calculated variance with respect to the empirical distribution of the encoder neural network output ([Larsson, claim 1] “receiving from each UE a quantized normalization measure of channel elements”, and [Larsson, claim 3] “the quantized normalization measure is represented as the log of a ratio of complex Gaussian random variables having different variances.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yoo by using the features of Yoo255 and Larsson in order to apply neural network processing and reduce overhead, such that “The method also determines a transmission reference point gradient of a loss based on the transmission reference point value” [Yoo255, 0007], and “uplink overhead is significantly reduced in a MU-COMP wireless communication network by exploiting the dissimilarity of received signal strength in signals” [Larsson, 0008].
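For illustration only, and not as a characterization of any cited reference, the following minimal Python sketch shows one way a terminal could derive a quantization rule from the empirical distribution (mean and empirically calculated variance) of its encoder neural network output; all identifiers and parameter values are hypothetical.

import numpy as np

def build_quantization_rule(encoder_outputs, num_bits=4):
    # Empirical statistics of the encoder neural network output.
    mean = encoder_outputs.mean()
    variance = encoder_outputs.var()
    std = np.sqrt(variance)
    # Uniform levels covering roughly +/- 3 standard deviations around the mean.
    levels = np.linspace(mean - 3 * std, mean + 3 * std, 2 ** num_bits)
    # The terminal would report (mean, variance) as the quantization rule information.
    return {"mean": float(mean), "variance": float(variance), "levels": levels}

def quantize(values, levels):
    # Map each output value to the index of its nearest quantization level.
    return np.abs(values[:, None] - levels[None, :]).argmin(axis=1)

# Example with simulated Gaussian encoder outputs (cf. claims 7-8).
outputs = np.random.default_rng(0).normal(loc=0.2, scale=1.5, size=1024)
rule = build_quantization_rule(outputs)
indices = quantize(outputs, rule["levels"])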
Regarding claim 16, it is a terminal claim corresponding to method claim 1, except for the limitations “a transmitter, a receiver, at least one processor” (See Fig. 2.) and “at least one computer memory operably connectable to the at least one processor, and storing instructions of performing operations when executed by the at least one processor” ([0011 and Fig. 2] “a non-transitory computer-readable medium may store one or more instructions for wireless communication. The one or more instructions, when executed by one or more processors of a device”), and is therefore rejected for reasons similar to those set forth in the rejection of claim 1.
Regarding claim 17, it is a method claim performed by a base station that corresponds reciprocally to method claim 1, and it is therefore rejected for reasons similar to those set forth in the rejection of claim 1.
With respect to dependent claims:
Regarding claim 2, the method of claim 1, further comprising:
receiving, from the base station, information on a maximum information amount used for feedback of the quantization rule information related to the quantization rule ([0069] “encoded CSI may be more accurate, but larger in size. In another scenario, encoded CSI may be smaller, but less accurate. There is a balance between encoded CSI accuracy and encoded CSI size. The device may determine to select more accuracy rather than a smaller size, or select less accuracy to have a smaller size. The device may transmit the encoder weights and the decoder weights from the training to another device, such as a UE (e.g., CSI encoder 410) or a base station (e.g., CSI decoder 420).”).
Regarding claim 7, the method of claim 1, wherein values which the output of the encoder neural network of the terminal may have follow a Gaussian distribution ([Larsson, claim 3] “the quantized normalization measure is represented as the log of a ratio of complex Gaussian random variables having different variances.”).
Regarding claim 8, the method of claim 7, wherein when a mean value of the values which the output of the encoder neural network of the terminal may have is not 0, the quantization rule information further includes information on the mean value of the values which the output of the encoder neural network of the terminal may have (See [Larsson, Fig. 2 and Fig. 5]. Larsson discloses a Gaussian distribution which indicates a non-zero mean and a deviation.).
Regarding claim 9, the method of claim 1, wherein the transmitting of the quantization rule information further includes determining whether the empirical distribution of the output of the encoder neural network of the terminal is changed ([0064] “the UE may encode only a changed part of the CSI (compared to previous CSI), and thus provide a smaller size CSI feedback with the same reconstruction quality.”).
Regarding claim 14, the method of claim 1, further comprising: reporting, to the base station, the CSI calculated based on the pilot signal, wherein the CSI includes a precoding matrix indicator (PMI) ([0056] “CSI feedback may include a precoding matrix indicator (PMI)”).
Regarding claim 15, the method of claim 14, further comprising: receiving downlink data from the base station, wherein the downlink data is transmitted based on precoding by a precoding matrix indicated by the PMI ([0055] “FIG. 3 illustrates an example 300 of precoding vectors for CSI feedback. FIG. 3 shows a base station that may select a beam from among beam candidates or may select a combination of two beams.”).
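For illustration only, and not as a characterization of any cited reference, the following sketch shows a base station applying a precoding matrix indicated by a reported PMI to downlink data symbols; the codebook shown is hypothetical.

import numpy as np

# Hypothetical two-entry precoding codebook for two antenna ports.
PRECODING_CODEBOOK = [
    np.eye(2),                                            # PMI 0: identity precoding
    np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0),   # PMI 1: simple combining matrix
]

def precode(data_symbols, pmi):
    # Apply the precoding matrix selected by the reported PMI to the layered symbols.
    return PRECODING_CODEBOOK[pmi] @ data_symbols

# Example: two layers of four symbols each, precoded with PMI 1.
symbols = np.ones((2, 4))
tx = precode(symbols, pmi=1)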
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 2021/0273707, “Yoo”) in view of Yoo et al. (US 2021/0264255, “Yoo255”) and Larsson et al. (US 2011/0268208, “Larsson”), and further in view of Jin et al. (US 2019/0123796, “Jin”).
Examiner’s note: in what follows, citations refer to Yoo unless otherwise noted.
Regarding claim 3, it is noted that while disclosing generating CSI using a machine learning model, Yoo does not specifically teach information on an order pair. This feature, however, was known in the art before the effective filing date of the claimed invention, as shown by Jin as follows:
the method of claim 2, further comprising: transmitting, to the base station, (i) the number of neurons constituting an output layer of an encoder neural network of the terminal ([0057] “Type-II CSI feedback may include precoding vectors for different ranks. Rank may refer to the number of spatial layers of modulated symbols before precoding is applied,”) and (ii) information on an order pair of the number of bits used for quantization of the output of the encoder neural network of the terminal ([Jin, 0055] “Generating the CSI based on the codebook of the CSI at each transport layer of the UE usually means that indexes corresponding to values of corresponding W.sub.1 and W.sub.2 in the codebook of the CSI at each transport layer of the UE are carried in a corresponding precoding matrix indicator (PMI)”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yoo by using the features of Jin in order to improve CSI feedback accuracy, such that “a discussion of high-precision CSI feedback mechanism design mainly focuses on a method for representing CSI by linearly superposing a plurality of codewords” [Jin, 0004].
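For illustration only, and not drawn from Jin, the following sketch shows a hypothetical feedback structure carrying the number of neurons of the encoder output layer together with ordered pairs giving the number of quantization bits per output; all names are assumptions.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class QuantizationConfigReport:
    num_output_neurons: int                 # neurons constituting the encoder output layer
    bits_per_neuron: List[Tuple[int, int]]  # (neuron index, bits used for quantization)

report = QuantizationConfigReport(
    num_output_neurons=32,
    bits_per_neuron=[(i, 4) for i in range(32)],
)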
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 2021/0273707, “Yoo”) in view of Yoo et al. (US 2021/0264255, “Yoo255”) and Larsson et al. (US 2011/0268208, “Larsson”), and further in view of Guo et al. (US 2021/0110272, “Guo”).
Examiner’s note: in what follows, citations refer to Yoo unless otherwise noted.
Regarding claim 4, it is noted that while disclosing generating CSI using a machine learning model, Yoo does not specifically teach a batch size. This feature, however, was known in the art before the effective filing date of the claimed invention, as shown by Guo as follows:
the method of claim 1, wherein the quantization rule information is calculated according to a period determined based on a batch size for training a neural network ([Guo, 0017] “receive remote statistics or remote intermediate values from other processors executing the cross batch normalization layer; (5) compute a global batch mean and a global batch variance based on local and remote statistics or intermediate values”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yoo by using the features of Guo in order to efficiently train machine learning algorithms, such that “the systems and techniques disclosed herein may provide for synchronization of normalization statistics between local batches of inputs in a cross batch normalization layer for a global batch of inputs including a plurality of local batches.” [Guo, 0009].
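For illustration only, and not as a characterization of Guo, the following sketch shows how per-batch statistics could be combined into a global batch mean and global batch variance, in the manner of a cross batch normalization layer; the function name is hypothetical.

import numpy as np

def global_batch_stats(local_batches):
    # Per-batch (local) statistics.
    counts = np.array([b.size for b in local_batches], dtype=float)
    means = np.array([b.mean() for b in local_batches])
    variances = np.array([b.var() for b in local_batches])
    total = counts.sum()
    # Global mean is the count-weighted average of the local means.
    global_mean = (counts * means).sum() / total
    # Law of total variance: within-batch variance plus between-batch spread.
    global_var = (counts * (variances + (means - global_mean) ** 2)).sum() / total
    return global_mean, global_var

# Example with two local batches.
stats = global_batch_stats([np.random.default_rng(1).normal(size=256),
                            np.random.default_rng(2).normal(size=512)])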
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 2021/0273707, “Yoo”) in view of Yoo et al. (US 2021/0264255, “Yoo255”) and Larsson et al. (US 2011/0268208, “Larsson”), and further in view of Liu et al. (US 2021/0065011, “Liu”).
Examiner’s note: in what follows, citations refer to Yoo unless otherwise noted.
Regarding claim 5, it is noted that while disclosing generating CSI using a machine learning model, Yoo does not specifically teach an STE function. This feature, however, was known in the art before the effective filing date of the claimed invention, as shown by Liu as follows:
the method of claim 1, wherein the encoder neural network of the terminal includes (i) a quantization layer for quantization of an output value of the encoder neural network and (ii) a straight-through estimator (STE) function which is a differentiable function used during back-propagation ([Liu, 0061] “In the back propagation, the layers involved in quantization in the forward propagation are processed using STE (gradient estimation) techniques.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yoo by using the features of Liu in order to provide accelerated neural network model training, such that “a training method of a neural network model, comprising: determining gradients of weights in the neural network model during a back propagation” [Liu, 0008].
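For illustration only, and not as a characterization of Liu, the following PyTorch sketch shows a quantization layer whose backward pass uses a straight-through estimator, i.e., the non-differentiable rounding step is treated as the identity during back-propagation; the 4-bit quantization range is an assumption.

import torch

class STEQuantize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Forward pass: uniform 4-bit quantization of values clamped to [-1, 1].
        levels = 15  # 2**4 - 1 quantization intervals
        return torch.round((x.clamp(-1.0, 1.0) + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

    @staticmethod
    def backward(ctx, grad_output):
        # Backward pass: pass the gradient straight through, as if the quantizer were the identity.
        return grad_output

x = torch.randn(8, requires_grad=True)
y = STEQuantize.apply(x)
y.sum().backward()  # x.grad is all ones: the gradient bypassed the rounding step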
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 2021/0273707, “Yoo”) in view of Yoo et al. (US 2021/0264255, “Yoo255”) and Larsson et al. (US 2011/0268208, “Larsson”), and further in view of Zhao et al. (US 2016/0344458, “Zhao”).
Examiner’s note: in what follows, citations refer to Yoo unless otherwise noted.
Regarding claim 6, it is noted that while disclosing generating CSI using a machine learning model, Yoo does not specifically teach quantizing a range of a value. This feature, however, was known in the art before the effective filing date of the claimed invention, as shown by Zhao as follows:
the method of claim 1, wherein the information on the variance is transmitted based on a codebook configured by quantizing a range of a value of the variance ([Zhao, 0099] “the CSI may be quantized by adopting the determined precoding codebook model, and then codebook index information used for identifying the precoding codebook model and a phase parameter, corresponding to the precoding codebook model, obtained by quantization are sent to the evolved Node B.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yoo by using the features of Zhao in order to improve accuracy in CSI feedback, such that “storing one or more sets of precoding codebook models the same as one or more sets of precoding codebook models stored at an eNodeB; determining a precoding codebook model for feeding back CSI” [Zhao, 0024].
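For illustration only, and not as a characterization of Zhao, the following sketch shows reporting a variance by quantizing its expected range into a codebook and transmitting only the codebook index; the codebook range and size are assumptions.

import numpy as np

# Hypothetical codebook: 16 candidate variance values over an assumed range.
VARIANCE_CODEBOOK = np.linspace(0.1, 4.0, 16)

def encode_variance(variance):
    # Terminal side: report the index of the codebook entry nearest the measured variance.
    return int(np.abs(VARIANCE_CODEBOOK - variance).argmin())

def decode_variance(index):
    # Base-station side: recover the quantized variance from the reported index.
    return float(VARIANCE_CODEBOOK[index])

idx = encode_variance(1.37)
recovered = decode_variance(idx)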
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 2021/0273707, “Yoo”) in view of Yoo et al. (US 2021/0264255, “Yoo255”) and Larsson et al. (US 2011/0268208, “Larsson”), and further in view of Lowell et al. (US 2019/0188557, “Lowell”).
Examiner’s note: in what follows, citations refer to Yoo unless otherwise noted.
Regarding claim 10, it is noted that while disclosing generating CSI using a machine learning model, Yoo does not specifically teach a condition for a transmission. This feature, however, was known in the art before the effective filing date of the claimed invention, as shown by Lowell as follows:
the method of claim 9, wherein when it is determined that the empirical distribution of the output of the encoder neural network of the terminal is changed ([Lowell, 0045 and Fig. 4] “On condition 450 that the training error is acceptable (e.g., the difference is below an acceptable threshold, or a heuristic applied to the output and the known correct output satisfies a desired condition), ANN 300 can be considered to be trained on this training data set.”), the information on the variance is transmitted (See aforesaid [0067].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yoo by using the features of Lowell in order to optimize weights for neural networks, such that “The processor includes circuitry to calculate a distribution of ANN information; circuitry to select a quantization function from a set of quantization functions based on the distribution” [Lowell, 0012].
Regarding claim 11, the method of claim 10, further comprising: receiving, from the base station, the information on the gradient calculated based on the quantization rule information (See aforesaid [Yoo255, 0135].).
Regarding claim 12, the method of claim 11, wherein a pre-trained neural network parameter is updated based on the information on the gradient calculated based on the quantization rule information (See [Lowell, Fig. 4], step 455 “Adjust Link Weights”, which returns to step 410.).
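For illustration only, and not as a characterization of the cited references, the following sketch loosely combines the subject matter of claims 10-12: the variance is transmitted only when the empirical distribution is judged to have changed, and a pre-trained parameter is updated from a gradient received from the base station; the threshold and learning rate are assumptions.

import numpy as np

def maybe_report_variance(prev_variance, new_variance, threshold=0.05):
    # Transmit the new variance only if it differs meaningfully from the previous one.
    if abs(new_variance - prev_variance) / max(prev_variance, 1e-9) > threshold:
        return new_variance   # distribution changed: report updated quantization rule information
    return None               # distribution unchanged: nothing is transmitted

def apply_gradient(weights, gradient, learning_rate=1e-3):
    # Update the pre-trained neural network parameters with the received gradient.
    return weights - learning_rate * gradient

report = maybe_report_variance(prev_variance=1.0, new_variance=1.2)
updated = apply_gradient(np.zeros(4), np.ones(4))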
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (US 2021/0273707, “Yoo”) in view of Yoo et al. (US 2021/0264255, “Yoo255”) and Larsson et al. (US 2011/0268208, “Larsson”), and further in view of Thakker et al. (US 2021/0056422, “Thakker”).
Examiner’s note: in what follows, citations refer to Yoo unless otherwise noted.
Regarding claim 13, it is noted that while disclosing generating CSI using a machine learning model, Yoo does not specifically teach that no change leads to no update. This feature, however, was known in the art before the effective filing date of the claimed invention, as shown by Thakker as follows:
the method of claim 9, wherein when it is determined that the empirical distribution of the output of the encoder neural network of the terminal is not changed, the information on the variance is not transmitted, and the pre-trained neural network parameter is applied ([Thakker, 0028] “The RNN model is configured to for each input data value received from the skip predictor, process the input data value, and update the hidden state vector, and generate output data after a last input data value is processed.”, and [Thakker, 0029] “The skip predictor is trained without retraining the pre-trained RNN model.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yoo by using the features of Thakker in order to reduce overhead in training machine learning models, such that “a binary predictor may be placed in front of an RNN to determine whether a particular time step should be processed or may be skipped.” [Thakker, 0004].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Harry H. Kim whose telephone number and email address are as follows; 571-272-5009, harry.kim2@uspto.gov.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Derrick Ferris, can be reached at 571-272-3123.
Information regarding the status of an application may be obtained from www.uspto.gov. For questions or assistance, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/HARRY H KIM/ Primary Examiner, Art Unit 2411