DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
2. The instant application claims the benefit of PCT/CN2021/134751, filed 12/01/2021.
Information Disclosure Statement
3. The information disclosure statements (IDS) submitted on 04/04/2024, 07/25/2025, and 08/25/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 102
4. In the event the determination of the status of the application as subject to AIA 35
U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any
correction of the statutory basis for the rejection will not be considered a new ground of
rejection if the prior art relied upon, and the rationale supporting the rejection, would be
the same under either status.
5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that
form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or
in public use, on sale, or otherwise available to the public before the effective
filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151,
or in an application for patent published or deemed published under section
122(b), in which the patent or application, as the case may be, names another
inventor and was effectively filed before the effective filing date of the claimed
invention.
6. Claims 1-9, 13-24, and 28-30 are rejected under 35 U.S.C. 102(a)(2) as being
anticipated by SHI-Hongzhe et al. (US-20240137082-A1), hereinafter “SHI-Hongzhe”.
Regarding Claim 1,
SHI-Hongzhe discloses, ‘An apparatus for wireless communication at a user equipment (UE), comprising: a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to:’ (Fig. 13 illustrates a wireless device),
And discloses, ‘receive a control message from a base station indicating a machine learning model for generating or compressing one or more components of a precoding matrix indicator’ (Fig. 3 shows a configuration received from the BS. The UE performs frequency-space compression [0135]. The PMI is generated by the UE and provided to the BS [0115, 0134-0135]. An ML model and DNN are used [0118-0119, 0138]; the AI model includes an auto-encoder with a UE-side encoder and a NW-side decoder [0141]. The terminal is configured to implement AI [0108].);
And discloses, ‘determine or compress the one or more components of the precoding matrix indicator in accordance with the machine learning model and based at least in part on a characteristic of a wireless channel’ (The AI model is implemented on the auto-encoder and performs compression [0148]. The UE/encoder determines the vector quantization as part of the PMI, jointly optimized by the auto-encoder; AI-based CSI enables AI for the UE [0179, 0183-0185, 0187-0188]. The UE compresses the measured channel matrix and quantizes coefficients; the dimensions of the vectors and the compression are determined by joint optimization [0135-0136]. A 2D-DFT convolutional NN operates on the channel matrix with DFT bases in the frequency-space domain [0260-0266]. The UE converts the index matrix into a binary stream used as the PMI [0319].);
And discloses, ‘and transmit a precoding matrix indicator message comprising the one or more components of the precoding matrix indicator that are determined or compressed in accordance with the machine learning model.’ (The UE sends the PMI to the BS [0320].)
Regarding Claim 2,
‘The apparatus of claim 1’ (disclosed above), ‘wherein the machine learning model comprises a neural network based model, and the instructions are further executable by the processor to cause the apparatus to:’,
SHI-Hongzhe discloses, ‘receive a downlink control information from the base station indicating a change in neural network parameters for the machine learning model, wherein determining or compressing the one or more components of the precoding matrix indicator is in accordance with the change in neural network parameters.’ (AI/ML parameters, including those used for the PMI, are updated using convolutional NNs, MLP, and recurrent NNs [0118-0121]. The VQ-AE NN is jointly optimized between compression and quantization [0136]. Training data and the inference model are shown in Fig. 4 and [0140-0141]. High-precision CSI feedback [0005].)
Regarding Claim 3,
‘The apparatus of claim 2’ (disclosed above), ‘wherein the instructions are further executable by the processor to cause the apparatus to:’
SHI-Hongzhe discloses, ‘apply the change in neural network parameters according to a timing defined by one or more of:
a downlink data transmission scheduled by the downlink control information, an acknowledgment of the downlink control information, an uplink control transmission scheduled by the downlink control information, a configuration message from the base station, or any combination thereof.’ (Periodic scheduling of downlink and uplink [0500, 0502, 0508, 0521].)
Regarding Claim 4,
‘The apparatus of claim 1’ (disclosed above),
SHI-Hongzhe discloses, ‘wherein the machine learning model comprises a kernel based model.’ (Fig. 13 illustrates a convolutional neural network that reduces complexity and extracts features using a kernel [0583, 0597].)
Regarding Claim 5,
‘The apparatus of claim 1’ (disclosed above),
SHI-Hongzhe discloses, ‘wherein determining or compressing the one or more components of the precoding matrix indicator is further based at least in part on a transmission rank associated with the UE, a transmission layer associated with the UE, a transmission polarization associated with the UE, or any combination thereof.’ (PMI computation includes rank, layer, dimension and polarization [0134, 0193, 0377-0378, 0512].)
Regarding Claim 6,
‘The apparatus of claim 1’ (disclosed above),
SHI-Hongzhe discloses, ‘wherein an output size of compressing the one or more components of the precoding matrix indicator is based at least in part on a transmission rank associated with the UE, a transmission layer associated with the UE, a transmission polarization associated with the UE, or any combination thereof.’ (Rank and layer [0134]; output size and convolution kernel [0583] and Table 5.)
Regarding Claim 7,
‘The apparatus of claim 1’ (disclosed above),
SHI-Hongzhe discloses, ‘wherein the instructions are further executable by the processor to cause the apparatus to:
receive from the base station an indication of a numerical quantity of spatial domain bases or frequency domain bases associated with the precoding matrix indicator.’ (The configuration/RRC signaling includes the frequency-space domain [0114-0115, 0156] and coefficients [0168-0170, 0172]. Frequency-space-domain compression quantizes the coefficients [0135].)
Regarding Claim 8,
‘The apparatus of claim 7’ (disclosed above),
SHI-Hongzhe discloses, ‘wherein an output size of compressing the one or more components of the precoding matrix indicator is based at least in part on the numerical quantity of spatial domain bases or frequency domain bases.’ (Frequency-space-domain coefficients and quantity [0114-0115] and claim 18 of the disclosure. The UE performs vector quantization and sends the result to the BS [0314-0320].)
Regarding Claim 9,
‘The apparatus of claim 1’ (disclosed above),
SHI-Hongzhe discloses, ‘wherein the instructions are further executable by the processor to cause the apparatus to: receive, from the base station, an output size of compressing the one or more components of the precoding matrix indicator.’ (The configuration is received from the BS; Fig. 7 [0011, 0496-0497].)
Regarding Claim 13,
‘The apparatus of claim 1’ (disclosed above), ‘wherein the instructions are further executable by the processor to cause the apparatus to:’ (disclosed above),
SHI-Hongzhe discloses, ‘receive from the base station an indication of decoder information associated with the base station, wherein determining or compressing the one or more components of the precoding matrix indicator is further based at least in part on the decoder information associated with the base station.’ (The terminal and BS are configured with the auto-encoder input/output [0141]. Fig. 4 illustrates AI-model inference including the auto-encoder.)
Regarding Claim 14,
‘The apparatus of claim 1’ (disclosed above),
SHI-Hongzhe discloses, ‘wherein an input to the machine learning model comprises a channel state information reference signal, an indication of an estimated channel, an indication of interference on the estimated channel, one or more previously-determined precoding matrix indicator components, one or more previously-compressed precoding matrix indicator components, or any combination thereof.’ (A recurrent NN is part of the DNN [0120]. The PMI is fed back by the UE; compression and quantization are performed to improve CSI efficiency [0134-0136]. The reconstructed channel is obtained from the PMI via end-to-end reconstruction [0360-0361] and Fig. 4.)
Regarding Claim 15,
‘The apparatus of claim 1’ (disclosed above),
And discloses, ‘wherein: an output of the machine learning model comprises an indication of a quantity of spatial domain bases’ (Figs. 10B and 11 include a normalization layer using batch normalization [0591], and represent convolution kernel sizes of different convolution layers.),
And discloses,
‘an indication of a selection of spatial domain bases,
an indication of one or more frequency domain base types,
a frequency domain base oversampling rate,
a number of transfer domain bases,
an indication of a selection of frequency domain bases,
an indication of a quantity of one or more frequency domain base coefficients,
an indication of one or more locations of a quantity of frequency domain base coefficients,
one or more indications associated with a channel state information report, or any combination thereof.’ (Frequency domain bases are addressed in the rejections of Claims 7-9 above. Space-frequency bases, oversampling, and coefficients; determination of space-frequency bases and oversampled DFT [0241-0247]; locations of the frequency domain bases [0382, 0385] and oversampling weighted coefficients [0445-0446]; quantity of frequency domain bases [0015].)
Regarding Claim 16,
Identical to Claim 1 disclosed above and rejected, ‘A method for wireless communication at a user equipment (UE), comprising: receiving a control message from a base station indicating a machine learning model for generating or compressing one or more components of a precoding matrix indicator; determining or compressing the one or more components of the precoding matrix indicator in accordance with the machine learning model and based at least in part on a characteristic of a wireless channel; and transmitting a precoding matrix indicator message comprising the one or more components of the precoding matrix indicator that are determined or compressed in accordance with the machine learning model.’
Regarding Claim 17,
‘The method of claim 16’ (disclosed above),
Identical to Claim 2 disclosed above and rejected, ‘wherein the machine learning model comprises a neural network based model, the method further comprising: receiving a downlink control information from the base station indicating a change in neural network parameters for the machine learning model, wherein determining or compressing the one or more components of the precoding matrix indicator is in accordance with the change in neural network parameters.’
Regarding Claim 18,
‘The method of claim 17’ (disclosed above),
Identical to Claim 3 disclosed above and rejected, ‘further comprising: applying the change in neural network parameters according to a timing defined by one or more of: a downlink data transmission scheduled by the downlink control information, an acknowledgment of the downlink control information, an uplink control transmission scheduled by the downlink control information, a configuration message from the base station, or any combination thereof.’
Regarding Claim 19,
‘The method of claim 16’ (disclosed above),
Identical to Claim 4 disclosed above and rejected, ‘wherein the machine learning model comprises a kernel based model.’
Regarding Claim 20,
‘The method of claim 16’ (disclosed above),
Identical to Claim 5 disclosed above and rejected, ‘wherein determining or compressing the one or more components of the precoding matrix indicator is further based at least in part on a transmission rank associated with the UE, a transmission layer associated with the UE, a transmission polarization associated with the UE, or any combination thereof.’
Regarding Claim 21,
‘The method of claim 16’ (disclosed above),
Identical to Claim 6 disclosed above and rejected, ‘wherein an output size of compressing the one or more components of the precoding matrix indicator is based at least in part on a transmission rank associated with the UE, a transmission layer associated with the UE, a transmission polarization associated with the UE, or any combination thereof.’
Regarding Claim 22,
‘The method of claim 16’ (disclosed above),
Identical to Claim 7 disclosed above and rejected, ‘further comprising: receiving from the base station an indication of a numerical quantity of spatial domain bases or frequency domain bases associated with the precoding matrix indicator.’
Regarding Claim 23,
‘The method of claim 22’ (disclosed above),
Identical to Claim 8 disclosed above and rejected, ‘wherein an output size of compressing the one or more components of the precoding matrix indicator is based at least in part on the numerical quantity of spatial domain bases or frequency domain bases.’
Regarding Claim 24,
‘The method of claim 16’ (disclosed above),
Identical to Claim 9 disclosed above and rejected, ‘further comprising: receiving, from the base station, an output size of compressing the one or more components of the precoding matrix indicator.’
Regarding Claim 28,
‘The method of claim 16’ (disclosed above),
Identical to Claim 15 disclosed above and rejected, ‘further comprising: an output of the machine learning model comprises an indication of a quantity of spatial domain bases, an indication of a selection of spatial domain bases, an indication of one or more frequency domain base types, a frequency domain base oversampling rate, a number of transfer domain bases, an indication of a selection of frequency domain bases, an indication of a quantity of one or more frequency domain base coefficients, an indication of one or more locations of a quantity of frequency domain base coefficients, one or more indications associated with a channel state information report, or any combination thereof.’
Regarding Claim 29,
Identical to Claims 1 and 16 disclosed above and rejected, ‘An apparatus for wireless communication at a user equipment (UE), comprising: means for receiving a control message from a base station indicating a machine learning model for generating or compressing one or more components of a precoding matrix indicator; means for determining or compressing the one or more components of the precoding matrix indicator in accordance with the machine learning model and based at least in part on a characteristic of a wireless channel; and means for transmitting a precoding matrix indicator message comprising the one or more components of the precoding matrix indicator that are determined or compressed in accordance with the machine learning model.’
Regarding Claim 30,
Identical to Claims 1 and 16 disclosed above and rejected, ‘A non-transitory computer-readable medium storing code for wireless communication at a user equipment (UE), the code comprising instructions executable by a processor to: receive a control message from a base station indicating a machine learning model for generating or compressing one or more components of a precoding matrix indicator; determine or compress the one or more components of the precoding matrix indicator in accordance with the machine learning model and based at least in part on a characteristic of a wireless channel; and transmit a precoding matrix indicator message comprising the one or more components of the precoding matrix indicator that are determined or compressed in accordance with the machine learning model.’
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35
U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any
correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will
not be considered a new ground of rejection if the prior art relied upon, and the rationale
supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all
obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the
claimed invention is not identically disclosed as set forth in section 102, if the
differences between the claimed invention and the prior art are such that the
claimed invention as a whole would have been obvious before the effective filing
date of the claimed invention to a person having ordinary skill in the art to which
the claimed invention pertains. Patentability shall not be negated by the manner
in which the invention was made.
The factual inquiries for establishing a background for determining obviousness
under 35 U.S.C. 103 are summarized as follows:
• Determining the scope and contents of the prior art.
• Ascertaining the differences between the prior art and the claims at issue.
• Resolving the level of ordinary skill in the pertinent art.
• Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the
claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any
evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to
point out the inventor and effective filing dates of each claim that was not commonly
owned as of the effective filing date of the later invention in order for the examiner to
consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2)
prior art against the later invention.
7. Claims 10-12 and 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over SHI-Hongzhe et al. in view of Victor et al. (US-11515917-B2), hereinafter “Victor”.
Regarding Claim 10,
‘The apparatus of claim 9’ (disclosed above),
And SHI-Hongzhe discloses, ‘wherein: transmitting the precoding matrix indicator message comprises packing the compressed one or more components of the precoding matrix indicator into’ the CSI (disclosed above and in Fig. 3 [0117]),
but does not disclose,
‘a first portion of a channel state information, wherein the output size is independent of a rank indicator or a numerical quantity of the compressed one or more components of the precoding matrix indicator.’
Victor, in the relevant art, discloses a first and a second part of the CSI report, the rank indicator, and the compressed PMI including a payload size (Victor, claims 1 and 6-8).
Therefore, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the disclosure of SHI-Hongzhe and would have found it obvious to combine it with that of Victor to arrive at the claimed invention. The motivation would have been to compress and apply CSI per layer/rank to improve CSI feedback efficiency [0494] and optimization; this would optimize the CSI feedback (Victor, Col. 4 [0031]).
Regarding Claim 11,
‘The apparatus of claim 9’ (disclosed above), ‘wherein: transmitting the precoding matrix indicator message comprises packing the compressed one or more components of the precoding matrix indicator’ (SHI-Hongzhe discloses, joint optimization to improve accuracy CSI-feedback uses auto-encoder implemented by AI model [0007-0008, 0014, 0119]. PMI represents by index of one or more vectors of the vector quantization [0494].)
But SHI-Hongzhe does not disclose, ‘into a second portion of a channel state information, wherein the output size is based at least in part on a rank indicator reported in a first portion of the channel state information.’
Victor, in the relevant art, discloses a first and a second part of the CSI report, the rank indicator, and the compressed PMI including a payload size (Victor, claims 1 and 6-8). Precoding matrix operations are described in Col. 5 [0006-0010], and Table 1 shows S configured for different codebook rank values and layers (Col. 8 [0027-0030]).
The motivation would be identical to that disclosed above. Further, frequency-space compression uses a DFT with 2D convolutional NNs [0225, 0260] to improve the efficiency of the CSI.
Regarding Claim 12,
‘The apparatus of claim 1’ (disclosed above),
SHI-Hongzhe discloses, ‘wherein transmitting the precoding matrix indicator message comprises transmitting’ the CSI,
but does not disclose, ‘a first portion of a channel state information comprising an output size of compressing the one or more components of the precoding matrix indicator and transmitting a second portion of the channel state information comprising the compressed one or more components of the precoding matrix indicator.’
Victor, in the relevant art, discloses these features in claims 1 and 6-8, as discussed above for Claims 10-11. The motivation would be identical to that disclosed above.
Regarding Claim 25,
‘The method of claim 24’ (disclosed above),
Identical to Claim 10 disclosed above and rejected, ‘wherein transmitting the precoding matrix indicator message comprises packing the compressed one or more components of the precoding matrix indicator into a first portion of a channel state information, wherein the output size is independent of a rank indicator or a numerical quantity of the compressed one or more components of the precoding matrix indicator.’
Regarding Claim 26,
‘The method of claim 24’ (disclosed above),
Identical to Claim 11 disclosed above and rejected, ‘wherein transmitting the precoding matrix indicator message comprises packing the compressed one or more components of the precoding matrix indicator into a second portion of a channel state information, wherein the output size is based at least in part on a rank indicator reported in a first portion of the channel state information.’
Regarding Claim 27,
‘The method of claim 16’ (disclosed above),
Identical to Claim 12 disclosed above and rejected, ‘wherein transmitting the precoding matrix indicator message comprises transmitting a first portion of a channel state information comprising an output size of compressing the one or more components of the precoding matrix indicator and transmitting a second portion of the channel state information comprising the compressed one or more components of the precoding matrix indicator.’
Conclusion
The prior art made of record and not relied upon is considered pertinent to
applicant's disclosure:
Faxer et al. (US20220239360A1), “CSI omission rules for enhanced Type II CSI reporting”; precoding-codebook compression in both the frequency and spatial domains; reduced CSI payload; PMI payload size and rank indicator [0039-0040, 0161].
Minseok-Jo et al. (US-20250007582-A1): Figs. 17 and 18 include a training model between the UE and BS, and the UE is configured with a NN [0170] and Fig. 11. PMI components use a DFT [0087]. The UE calculates compression including the PMI, a plurality of codebook parameters, and an RS; in addition, coefficient matrices are identified [0409, 0414, 0418, 0736]. The user-side encoder NN parameters are optimized for varied channel conditions and characteristics [0179-0180]. Fig. 11 shows an end-to-end precoding system [0164] transmitted to the BS [0028-0029, 0175].
Chen, Muhan, et al. "Deep learning-based implicit CSI feedback in massive MIMO." IEEE Transactions on Communications 70.2 (2021): 935-950; NNs are used to replace the PMI encoding module at the UE and the PMI decoding module at the BS. The input of the encoder is the eigenvector v extracted from the full channel matrix. The deep-learning feedback structure includes auto-encoder compression and oversampled 2D-DFT beams.
Liu, Zhenyu, Mason del Rosario, and Zhi Ding. "A Markovian model-driven deep learning framework for massive MIMO CSI feedback." IEEE Transactions on Wireless Communications 21.2 (2021); a convolutional NN-based dimension compression and decompression module is shown in Fig. 7.
Examiner interviews are available via telephone, in-person, and video
conferencing using a USPTO supplied web-based collaboration tool. To schedule an
interview, applicant is encouraged to use the USPTO Automated Interview Request
(AIR) at http://www.uspto.gov/interviewpractice.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Syed Ahmed, whose telephone number is (703) 756-5308. The examiner can normally be reached Monday-Friday, 9am-6pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faruk Hamza, can be reached at (571) 272-7969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.A./Examiner, Art Unit 2466
/CHRISTOPHER M CRUTCHFIELD/Primary Examiner, Art Unit 2466