Prosecution Insights
Last updated: April 19, 2026
Application No. 18/551,945

METHOD FOR TRANSMITTING COMPRESSED CODEBOOK, AND METHOD FOR OBTAINING CHANNEL STATE INFORMATION MATRIX

Final Rejection §103
Filed: Sep 22, 2023
Examiner: BOKHARI, SYED M
Art Unit: 2473
Tech Center: 2400 — Computer Networks
Assignee: ZTE CORPORATION
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82%, above average (694 granted / 841 resolved; +24.5% vs TC avg)
Interview Lift: +18.3% (strong) among resolved cases with an interview
Typical Timeline: 3y 2m average prosecution; 31 currently pending
Career History: 872 total applications across all art units

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 72.8% (+32.8% vs TC avg)
§102: 6.6% (-33.4% vs TC avg)
§112: 4.8% (-35.2% vs TC avg)
Comparisons are against the Tech Center average estimate • Based on career data from 841 resolved cases

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Amendment

The proposed reply filed on December 12, 2025 has been entered. Claims 1, 7 and 14 have been amended. Claims 3 and 10 have been canceled. Claims 1-2, 4-9, 11-14 and 16-21 are pending in the application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

This application currently names joint inventors.
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1, 7 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US 2021/0195462 A1) in view of Park et al. (US 2021/0409991 A1).

Regarding claim 1, Pezeshki et al. teach a method for transmitting a compressed codebook, comprising: acquiring a channel state information (CSI) for subsequent feedback (Fig. 5, [0046], an autoencoder may be used for transmission of feedback, for example, channel state information (CSI) feedback using machine learning also referred to as artificial intelligence (AI). The UE may use AI to compress feedback to a BS in accordance with a configuration for the compression to be performed as indicated by the BS). Pezeshki et al. teach inputting the CSI into an encoder network structure to acquire a first compressed codebook corresponding to the CSI (Fig. 5, [0057, 0087], a communication system 500 for feedback signaling using AI compression, in accordance with certain aspects of the present disclosure. For example, the communication system 500 may include a UE 502 that may receive, from a BS 504, the reference signal 506. The UE 502 may perform one or more measurements and compress the one or more measurements using an AI encoder 508 (e.g., via one of AI module(s) 512). Wherein the at least one reference signal comprises at least one channel state information (CSI)-reference signal (RS)). Pezeshki et al.
teach wherein a number of elements of the first compressed codebook is less than a number of elements of the CSI (Fig. 5, [0030, 0046], an AI module (e.g., autoencoder) may be used to compress, at a user-equipment (UE), a received measurement based on a reference signal, where the compressed measurement is to be fed back to a base station (BS). In other words, the measurement may serve as the input to an autoencoder used to generate a codeword by compressing the measurement. The codeword may be fed back to the BS. The CSI feedback in massive multiple-input multiple-output (MIMO) (e.g., frequency division duplexing (FDD)) systems has overhead for CSI feedback. In some aspects, an autoencoder may be used to reduce overhead associated with CSI feedback. (Note: reducing the overhead of the CSI measurements via the compression in the encoder is equivalent to saying that the number of elements of the compressed codebook is less than the number of CSI elements before compression)). Pezeshki et al. teach and transmitting the first compressed codebook to a base station (Fig. 5, [0057], the AI encoder 508 may compress one or more measurements corresponding to the reference signal 506 and generate a codeword 514, in accordance with the configuration 510. The codeword 514 may be transmitted to the BS 504 via a transmitter 516). Pezeshki et al. teach wherein the encoder network structure comprises at least one network parameter each comprising a compression ratio that is indicative of a ratio of the number of elements of the first compressed codebook to the number of elements of the CSI, and at least one of: a network layer type, a network layer number, a network layer mapping, a network layer weight, a network layer bias, a network layer weight normalization coefficient, and a network layer activation function; and the encoder network structure comprises at least one network layer each having a network weight (Fig.
5, [0028-0029, 0051, 0060], the configuration to be used for the compression may include an indication of a compression ratio associated with the compression. A compression ratio generally refers to a ratio between a size of a compressed output of an encoder and a size of the input to be compressed by the encoder. As an example, when compression is performed using a neural network, different compression ratios may correspond to different neural network architectures used for the compression. The BS may determine the compression ratio based on the one or more parameters to be calculated. For instance, the compression ratio may be determined based on a type of the one or more parameters to be calculated, a quantity of data associated with the one or more parameters to be calculated, or any combination thereof. The configuration 510 may be an indication of a compression ratio to be used for the compression of the one or more measurements corresponding to the reference signal 506, an indication of at least one AI module (one of AI module(s) 512) to be used for the compression of the one or more measurements corresponding to the reference signal 506, or both. In layered neural network architectures, the output of a first layer of artificial neurons becomes an input to a second layer of artificial neurons, the output of a second layer of artificial neurons becomes an input to a third layer of artificial neurons, and so on. For instance, the encoder and decoder may each have multiple layers having neurons. Each of the neurons may be associated with a weight. During training, an error between the input and the output may be determined, and each weight's contribution to the error may be determined. The weights may be adjusted accordingly using gradient descent to facilitate training of the autoencoder, allowing the compressed version of the input to more closely represent the input. Pezeshki et al. 
thus teach acquiring, by the UE, the CSI matrix and, after compressing it with the encoder, transmitting the compressed codebook to the base station via the feedback link. Pezeshki et al., however, fail to expressly disclose that the number of elements of the compressed codebook is less than the number of elements of the CSI matrix. (Emphasis added.)

Regarding claim 1, Park et al. teach wherein a number of elements of the first compressed codebook is less than a number of elements of the CSI (Figs. 10 and 29, [0707, 0718-0719], a conventional operation associated with Proposal 1 is a method of calculating specific reported CSI and omitting the report of a specific CSI if the capability of a resource configured for CSI reporting is not satisfied. In contrast, Proposal 1-3 may be applied to solve a problem in that the conventional operation is not applied without any change. In Proposal 1-3, the K value may be substituted with a p value. In Proposal 1-3, an omission (or compression) operation that gradually reduces a parameter (p, beta and/or R) for controlling the payload of the time domain compression-based codebook or frequency domain-based compression codebook to be smaller than a configured value to fit the resource allocated for PUSCH transmission may be performed. That is, when the size of a payload of calculated CSI is greater than an allocated resource, a UE can adjust the size of a payload of the CSI to be equal to or smaller than the capacity of the allocated resource by omitting (or compressing or reducing) a payload of part 2 CSI according to a specific rule based on a value of a parameter K.sub.0 determined based on a specific value (beta)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pezeshki et al. by incorporating the features as taught by Park et al.
in order to provide a more effective and efficient system that is capable of having a number of elements of the first compressed codebook less than a number of elements of the CSI matrix. The motivation is to support an improved method for transmitting and receiving channel state information in a wireless communication system (see [0002]).

Regarding claim 7, Pezeshki et al. teach a method for acquiring a channel state information (CSI) matrix, comprising (Fig. 5, [0046], an autoencoder may be used for transmission of feedback, for example, channel state information (CSI) feedback using machine learning also referred to as artificial intelligence (AI). The UE may use AI to compress feedback to a BS in accordance with a configuration for the compression to be performed as indicated by the BS). Pezeshki et al. teach receiving a first compressed codebook or a second compressed codebook transmitted by a terminal device, wherein the first compressed codebook is generated by an encoder network structure into which the CSI is input by the terminal device (Fig. 5, [0056-0058, 0087], the UE may transmit a codeword to the base station, the codeword being associated with a compression of the one or more measurements in accordance with the configuration. The BS 504 may receive the codeword 514 via the receiver 518. A communication system 500 for feedback signaling using AI compression, in accordance with certain aspects of the present disclosure. For example, the communication system 500 may include a UE 502 that may receive, from a BS 504, the reference signal 506. The UE 502 may perform one or more measurements and compress the one or more measurements using an AI encoder 508 (e.g., via one of AI module(s) 512). Wherein the at least one reference signal comprises at least one channel state information (CSI)-reference signal (RS)). Pezeshki et al.
teach the second compressed codebook is generated by successively quantizing, encoding, and modulating the first compressed codebook (Fig. 5, [0057, 0087], a communication system 500 for feedback signaling using AI compression, in accordance with certain aspects of the present disclosure. For example, the communication system 500 may include a UE 502 that may receive, from a BS 504, the reference signal 506. The UE 502 may perform one or more measurements and compress the one or more measurements using an AI encoder 508 (e.g., via one of AI module(s) 512). In certain aspects, the UE 502 may also receive, from the BS 504, a configuration 510 to be used for the compression. As illustrated, the AI encoder 508 may compress one or more measurements corresponding to the reference signal 506 and generate a codeword 514, in accordance with the configuration 510. The codeword 514 may be used by the BS to calculate one or more parameters (e.g., CQI, PMI, RI, RSRP, or any combination thereof) for communication with the UE. Wherein the at least one reference signal comprises at least one channel state information (CSI)-reference signal (RS)). Pezeshki et al. teach and a number of elements of the first compressed codebook is less than a number of elements of the CSI (Fig. 5, [0030, 0046], an AI module (e.g., autoencoder) may be used to compress, at a user-equipment (UE), a received measurement based on a reference signal, where the compressed measurement is to be fed back to a base station (BS). In other words, the measurement may serve as the input to an autoencoder used to generate a codeword by compressing the measurement. The codeword may be fed back to the BS. The CSI feedback in massive multiple-input multiple-output (MIMO) (e.g., frequency division duplexing (FDD)) systems has overhead for CSI feedback. In some aspects, an autoencoder may be used to reduce overhead associated with CSI feedback.
(Note: reducing the overhead of the CSI measurements via the compression in the encoder is equivalent to saying that the number of elements of the compressed codebook is less than the number of CSI elements before compression)). Pezeshki et al. teach and inputting the first compressed codebook or the second compressed codebook into a decoder network structure corresponding to the encoder network structure to generate the CSI (Fig. 5, [0030, 0058-0059], the BS 504 may receive the codeword 514 via the receiver 518. The BS 504 may include an AI decoder 520 having one or more AI modules 522 for decompressing the codeword 514 and generating the decompressed codeword 524. The decompressed codeword 524 may be used to calculate the one or more parameters for communication with the UE, as described herein. The one or more AI modules 522 at the decoder 520 decompress the codeword 514 using a corresponding neural network algorithm. The BS may decompress the feedback from the UE using an AI module, and calculate one or more parameters (e.g., channel quality parameters, such as channel quality indicator (CQI)) to facilitate communication with the UE). Pezeshki et al. teach wherein the encoder network structure comprises at least one network parameter each comprising a compression ratio that is indicative of a ratio of the number of elements of the first compressed codebook to the number of elements of the CSI, and at least one of: a network layer type, a network layer number, a network layer mapping, a network layer weight, a network layer bias, a network layer weight normalization coefficient, and a network layer activation function; and the encoder network structure comprises at least one network layer each having a network weight (Fig. 5, [0028-0029, 0051, 0060], the configuration to be used for the compression may include an indication of a compression ratio associated with the compression.
A compression ratio generally refers to a ratio between a size of a compressed output of an encoder and a size of the input to be compressed by the encoder. As an example, when compression is performed using a neural network, different compression ratios may correspond to different neural network architectures used for the compression. The BS may determine the compression ratio based on the one or more parameters to be calculated. For instance, the compression ratio may be determined based on a type of the one or more parameters to be calculated, a quantity of data associated with the one or more parameters to be calculated, or any combination thereof. The configuration 510 may be an indication of a compression ratio to be used for the compression of the one or more measurements corresponding to the reference signal 506, an indication of at least one AI module (one of AI module(s) 512) to be used for the compression of the one or more measurements corresponding to the reference signal 506, or both. In layered neural network architectures, the output of a first layer of artificial neurons becomes an input to a second layer of artificial neurons, the output of a second layer of artificial neurons becomes an input to a third layer of artificial neurons, and so on. For instance, the encoder and decoder may each have multiple layers having neurons. Each of the neurons may be associated with a weight. During training, an error between the input and the output may be determined, and each weight's contribution to the error may be determined. The weights may be adjusted accordingly using gradient descent to facilitate training of the autoencoder, allowing the compressed version of the input to more closely represent the input. Pezeshki et al. thus teach acquiring, by the UE, the CSI matrix and, after compressing it with the encoder, transmitting the compressed codebook to the base station via the feedback link.
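The encoder/decoder relationship mapped above (CSI in, a smaller codebook out, and a matched decoder regenerating the CSI) can be illustrated with a minimal sketch. This is a toy illustration, not the reference's actual networks: truncation and zero-padding stand in for the trained AI encoder 508 and decoder 520 of Pezeshki, and all sizes and values are invented for the example.

```python
# Toy sketch of the mapped encoder/decoder relationship. Truncation and
# zero-padding stand in for the trained AI encoder/decoder of the cited
# reference; all sizes here are illustrative assumptions.

def encode(csi, ratio):
    # "First compressed codebook": keep ratio * n elements of the CSI.
    k = max(1, int(len(csi) * ratio))
    return csi[:k]

def decode(codebook, n):
    # Matched decoder: regenerate an n-element CSI estimate.
    return codebook + [0.0] * (n - len(codebook))

csi = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]   # 8-element CSI (illustrative)
codebook = encode(csi, ratio=0.25)                # compression ratio of 1/4

# The claim limitation at issue: the codebook has fewer elements than the CSI,
# and the compression ratio is the ratio of the two element counts.
assert len(codebook) < len(csi)
assert len(codebook) / len(csi) == 0.25
assert len(decode(codebook, len(csi))) == len(csi)
```

The assertions spell out why the examiner treats "reduced feedback overhead" as equivalent to the claimed element-count limitation: any encoder with a compression ratio below 1 necessarily emits fewer elements than it consumes.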
Pezeshki et al., however, fail to expressly disclose that the number of elements of the compressed codebook is less than the number of elements of the CSI matrix. (Emphasis added.)

Regarding claim 7, Park et al. teach a number of elements of the first compressed codebook is less than a number of elements of the CSI (Figs. 10 and 29, [0707, 0718-0719], a conventional operation associated with Proposal 1 is a method of calculating specific reported CSI and omitting the report of a specific CSI if the capability of a resource configured for CSI reporting is not satisfied. In contrast, Proposal 1-3 may be applied to solve a problem in that the conventional operation is not applied without any change. In Proposal 1-3, the K value may be substituted with a p value. In Proposal 1-3, an omission (or compression) operation that gradually reduces a parameter (p, beta and/or R) for controlling the payload of the time domain compression-based codebook or frequency domain-based compression codebook to be smaller than a configured value to fit the resource allocated for PUSCH transmission may be performed. That is, when the size of a payload of calculated CSI is greater than an allocated resource, a UE can adjust the size of a payload of the CSI to be equal to or smaller than the capacity of the allocated resource by omitting (or compressing or reducing) a payload of part 2 CSI according to a specific rule based on a value of a parameter K.sub.0 determined based on a specific value (beta)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pezeshki et al. by incorporating the features as taught by Park et al. in order to provide a more effective and efficient system that is capable of having a number of elements of the first compressed codebook less than a number of elements of the CSI matrix.
The motivation is to support an improved method for transmitting and receiving channel state information in a wireless communication system (see [0002]).

Regarding claim 14, Pezeshki et al. teach a device for transmitting a compressed codebook, comprising: an acquisition module, which is configured to acquire a channel state information (CSI) matrix for subsequent feedback (Fig. 5, [0046], an autoencoder may be used for transmission of feedback, for example, channel state information (CSI) feedback using machine learning also referred to as artificial intelligence (AI). The UE may use AI to compress feedback to a BS in accordance with a configuration for the compression to be performed as indicated by the BS). Pezeshki et al. teach a first processing module, which is configured to input the CSI into an encoder network structure to generate a first compressed codebook corresponding to the CSI (Fig. 5, [0057, 0087], a communication system 500 for feedback signaling using AI compression, in accordance with certain aspects of the present disclosure. For example, the communication system 500 may include a UE 502 that may receive, from a BS 504, the reference signal 506. The UE 502 may perform one or more measurements and compress the one or more measurements using an AI encoder 508 (e.g., via one of AI module(s) 512). Wherein the at least one reference signal comprises at least one channel state information (CSI)-reference signal (RS)). Pezeshki et al. teach wherein a number of elements of the first compressed codebook is less than a number of elements of the CSI (Fig. 5, [0030, 0046], an AI module (e.g., autoencoder) may be used to compress, at a user-equipment (UE), a received measurement based on a reference signal, where the compressed measurement is to be fed back to a base station (BS). In other words, the measurement may serve as the input to an autoencoder used to generate a codeword by compressing the measurement. The codeword may be fed back to the BS.
The CSI feedback in massive multiple-input multiple-output (MIMO) (e.g., frequency division duplexing (FDD)) systems has overhead for CSI feedback. In some aspects, an autoencoder may be used to reduce overhead associated with CSI feedback. (Note: reducing the overhead of the CSI measurements via the compression in the encoder is equivalent to saying that the number of elements of the compressed codebook is less than the number of CSI elements before compression)). Pezeshki et al. teach and a transmission module, which is configured to transmit the first compressed codebook to a base station (Fig. 5, [0057], the AI encoder 508 may compress one or more measurements corresponding to the reference signal 506 and generate a codeword 514, in accordance with the configuration 510. The codeword 514 may be transmitted to the BS 504 via a transmitter 516). Pezeshki et al. teach wherein the encoder network structure comprises at least one network parameter each comprising a compression ratio that is indicative of a ratio of the number of elements of the first compressed codebook to the number of elements of the CSI, and at least one of: a network layer type, a network layer number, a network layer mapping, a network layer weight, a network layer bias, a network layer weight normalization coefficient, and a network layer activation function; and the encoder network structure comprises at least one network layer each having a network weight (Fig. 5, [0028-0029, 0051, 0060], the configuration to be used for the compression may include an indication of a compression ratio associated with the compression. A compression ratio generally refers to a ratio between a size of a compressed output of an encoder and a size of the input to be compressed by the encoder. As an example, when compression is performed using a neural network, different compression ratios may correspond to different neural network architectures used for the compression.
The BS may determine the compression ratio based on the one or more parameters to be calculated. For instance, the compression ratio may be determined based on a type of the one or more parameters to be calculated, a quantity of data associated with the one or more parameters to be calculated, or any combination thereof. The configuration 510 may be an indication of a compression ratio to be used for the compression of the one or more measurements corresponding to the reference signal 506, an indication of at least one AI module (one of AI module(s) 512) to be used for the compression of the one or more measurements corresponding to the reference signal 506, or both. In layered neural network architectures, the output of a first layer of artificial neurons becomes an input to a second layer of artificial neurons, the output of a second layer of artificial neurons becomes an input to a third layer of artificial neurons, and so on. For instance, the encoder and decoder may each have multiple layers having neurons. Each of the neurons may be associated with a weight. During training, an error between the input and the output may be determined, and each weight's contribution to the error may be determined. The weights may be adjusted accordingly using gradient descent to facilitate training of the autoencoder, allowing the compressed version of the input to more closely represent the input. Pezeshki et al. thus teach acquiring, by the UE, the CSI matrix and, after compressing it with the encoder, transmitting the compressed codebook to the base station via the feedback link. Pezeshki et al., however, fail to expressly disclose that the number of elements of the compressed codebook is less than the number of elements of the CSI matrix. (Emphasis added.)

Regarding claim 14, Park et al. teach wherein a number of elements of the first compressed codebook is less than a number of elements of the CSI (Figs.
10 and 29, [0707, 0718-0719], a conventional operation associated with Proposal 1 is a method of calculating specific reported CSI and omitting the report of a specific CSI if the capability of a resource configured for CSI reporting is not satisfied. In contrast, Proposal 1-3 may be applied to solve a problem in that the conventional operation is not applied without any change. In Proposal 1-3, the K value may be substituted with a p value. In Proposal 1-3, an omission (or compression) operation that gradually reduces a parameter (p, beta and/or R) for controlling the payload of the time domain compression-based codebook or frequency domain-based compression codebook to be smaller than a configured value to fit the resource allocated for PUSCH transmission may be performed. That is, when the size of a payload of calculated CSI is greater than an allocated resource, a UE can adjust the size of a payload of the CSI to be equal to or smaller than the capacity of the allocated resource by omitting (or compressing or reducing) a payload of part 2 CSI according to a specific rule based on a value of a parameter K.sub.0 determined based on a specific value (beta)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pezeshki et al. by incorporating the features as taught by Park et al. in order to provide a more effective and efficient system that is capable of having a number of elements of the first compressed codebook less than a number of elements of the CSI matrix. The motivation is to support an improved method for transmitting and receiving channel state information in a wireless communication system (see [0002]).

Claim(s) 2, 8 and 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US 2021/0195462 A1) in view of Park et al. (US 2021/0409991 A1) as applied to claims 1 and 7 above, and further in view of Chen et al. (US 2024/0137093 A1).
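The training behavior quoted from Pezeshki above (an error measured between the autoencoder's input and output, each weight's contribution to that error determined, and the weights adjusted by gradient descent) can be illustrated with a minimal sketch. A single scalar weight stands in for a real multi-layer encoder/decoder; the model, learning rate, and data are illustrative assumptions, not the reference's actual network.

```python
# Hedged sketch of the quoted training idea: compare output to input,
# attribute the error to the weight, and adjust by gradient descent.
# A one-weight "autoencoder" y = w * x stands in for the real model;
# perfect reconstruction corresponds to w = 1.

def train_identity(w, xs, lr=0.1, steps=200):
    for _ in range(steps):
        # d/dw of the mean squared reconstruction error sum((w*x - x)^2)/n.
        grad = sum(2 * (w * x - x) * x for x in xs) / len(xs)
        w -= lr * grad            # gradient-descent weight update
    return w

w = train_identity(0.0, xs=[0.5, 1.0, 1.5])
# After training, the "compressed" representation closely reproduces the input.
assert abs(w - 1.0) < 1e-3
```

The same loop, scaled up to matrices of weights and run layer by layer via backpropagation, is what the quoted passage describes for training the CSI autoencoder.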
Pezeshki et al. and Park et al. disclose the claimed limitations as described in paragraph 6 above. Pezeshki et al. and Park et al. do not expressly disclose the following features:
- Regarding claim 2, wherein transmitting the first compressed codebook to the base station comprises: successively quantizing, encoding, and modulating the first compressed codebook to generate a second compressed codebook; and transmitting the second compressed codebook to the base station.
- Regarding claim 8, wherein the second compressed codebook is generated by sequentially quantizing, encoding and modulating the first compressed codebook.
- Regarding claim 16, a non-transitory computer-readable medium storing thereon a computer program which, when executed by a processor, causes the processor to carry out the method as claimed in claim 1.
- Regarding claim 17, an electronic apparatus, comprising a processor and a memory storing a computer program, which when executed by the processor, causes the processor to carry out the method as claimed in claim 1.
- Regarding claim 18, a non-transitory computer-readable medium storing thereon a computer program which, when executed by a processor, causes the processor to carry out the method as claimed in claim 7.
- Regarding claim 19, an electronic apparatus, comprising a processor and a memory storing a computer program, which when executed by the processor, causes the processor to carry out the method as claimed in claim 7.

Regarding claim 2, Chen et al. teach wherein transmitting the first compressed codebook to the base station comprises: successively quantizing, encoding, and modulating the first compressed codebook to generate a second compressed codebook; and transmitting the second compressed codebook to the base station (Fig.
11, [0138, 0147], since the correlation of the real part of the CSI matrix and the correlation of the imaginary part of the CSI matrix are similar, parameters of the compression encoders used to encode the real part and the imaginary part are often similar or the same. Therefore, optionally, based on model parameters of the first target CSI compression encoder, a second target CSI compression encoder is constructed, the real part and the imaginary part of the target CSI matrix are extracted, and the real part and the imaginary part of the target CSI matrix are respectively input into corresponding target CSI compression encoders for encoding. That is, the UE deploys two target CSI compression encoders at the same time, and may input the real part of the target CSI matrix into one of the target CSI compression encoders and input the imaginary part of the target CSI matrix into the other of the target CSI compression encoders in parallel. In the embodiments of the disclosure, since two target CSI compression encoders are deployed simultaneously in the parallel encoding mode, the real part and the imaginary part. Encoding module 11 is configured to encode, based on a first target CSI compression encoder, a target CSI matrix in a delay domain and an angle domain, to generate a compressed encoded value, in which the first target CSI compression encoder includes N composite convolution layers and one fully-connected layer, each composite convolution layer includes a delay-domain convolution step and an angle-domain convolution step, and the delay-domain convolution step of the first composite convolution layer in the N composite convolution layers is smaller than the angle-domain convolution step of the first composite convolution layer in the N composite convolution layers, where N is a positive integer). Regarding claim 8, Chen et al. 
teach wherein the second compressed codebook is generated by sequentially quantizing, encoding and modulating the first compressed codebook (Fig. 11, [0138, 0147], since the correlation of the real part of the CSI matrix and the correlation of the imaginary part of the CSI matrix are similar, parameters of the compression encoders used to encode the real part and the imaginary part are often similar or the same. Therefore, optionally, based on model parameters of the first target CSI compression encoder, a second target CSI compression encoder is constructed, the real part and the imaginary part of the target CSI matrix are extracted, and the real part and the imaginary part of the target CSI matrix are respectively input into corresponding target CSI compression encoders for encoding. That is, the UE deploys two target CSI compression encoders at the same time, and may input the real part of the target CSI matrix into one of the target CSI compression encoders and input the imaginary part of the target CSI matrix into the other of the target CSI compression encoders in parallel. In the embodiments of the disclosure, since two target CSI compression encoders are deployed simultaneously in the parallel encoding mode, the real part and the imaginary part. Encoding module 11 is configured to encode, based on a first target CSI compression encoder, a target CSI matrix in a delay domain and an angle domain, to generate a compressed encoded value, in which the first target CSI compression encoder includes N composite convolution layers and one fully-connected layer, each composite convolution layer includes a delay-domain convolution step and an angle-domain convolution step, and the delay-domain convolution step of the first composite convolution layer in the N composite convolution layers is smaller than the angle-domain convolution step of the first composite convolution layer in the N composite convolution layers, where N is a positive integer).
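The "sequentially quantizing, encoding and modulating" operation recited in claims 2 and 8 can be sketched as a three-stage pipeline. The concrete schemes below (a 4-level uniform quantizer, fixed-length 2-bit encoding, and a BPSK-style symbol mapping) are illustrative assumptions for the sketch, not anything disclosed by Chen or recited in the claims.

```python
# Hedged sketch of the claimed claim-2/claim-8 pipeline: the second
# compressed codebook is derived from the first by quantizing, then
# encoding, then modulating. All concrete schemes are assumptions.

def quantize(codebook, levels=4):
    # Map each element in [0, 1) to one of `levels` integer bins.
    return [min(levels - 1, int(x * levels)) for x in codebook]

def encode(symbols, levels=4):
    # Fixed-length binary encoding: 2 bits per 4-level symbol.
    bits_per = (levels - 1).bit_length()
    return [b for s in symbols for b in format(s, f"0{bits_per}b")]

def modulate(bits):
    # Toy BPSK: bit '0' -> +1.0, bit '1' -> -1.0.
    return [1.0 if b == "0" else -1.0 for b in bits]

first_codebook = [0.1, 0.4, 0.9]
second_codebook = modulate(encode(quantize(first_codebook)))
assert len(second_codebook) == 6   # 2 bits, hence 2 symbols, per element
```

The point of the sketch is the ordering the claims recite: each stage consumes the previous stage's output, so the "second compressed codebook" is fully determined by the first.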
Regarding claim 16, Chen et al. teach a non-transitory computer-readable medium storing thereon a computer program which, when executed by a processor, causes the processor to carry out the method as claimed in claim 1 (Fig. 15, [0217], a non-transitory computer readable storage medium, the memory 1620 may be configured to store non-transitory software programs, non-transitory computer executable programs and modules, such as program instructions/modules corresponding to the method in the embodiments of the disclosure. The processor 1610 is configured to execute various functional applications and data processing of the server by operating non-transitory software programs, instructions and modules stored in the memory 1620, that is, implements the encoding method for CSI or the decoding method for CSI according to the disclosure). Regarding claim 17, Chen et al. teach an electronic apparatus, comprising a processor and a memory storing a computer program, which when executed by the processor, causes the processor to carry out the method as claimed in claim 1 (Fig. 15, [0215-0216], the communication device includes: one or more processors 1610, a memory 1620, and interfaces for connecting various components, including a high-speed interface and a low-speed interface. Various components are connected to each other by different buses and may be installed on a common main board or in other ways as required. The processor may process instructions executed within the communication device, including instructions stored in. The memory 1620 is a non-transitory computer readable storage medium provided by the disclosure. The memory is configured to store instructions executed by at least one processor, to enable the at least one processor to execute the encoding method for CSI or the decoding method for CSI according to the disclosure. The non-transitory computer readable storage medium according to the disclosure is configured to store computer instructions. 
The computer instructions are configured to enable a computer to execute the encoding method for CSI or the decoding method for CSI according to the disclosure). Regarding claim 18, Chen et al. teach a non-transitory computer-readable medium storing thereon a computer program which, when executed by a processor, causes the processor to carry out the method as claimed in claim 7 (Fig. 15, [0217], a non-transitory computer readable storage medium, the memory 1620 may be configured to store non-transitory software programs, non-transitory computer executable programs and modules, such as program instructions/modules corresponding to the method in the embodiments of the disclosure. The processor 1610 is configured to execute various functional applications and data processing of the server by operating non-transitory software programs, instructions and modules stored in the memory 1620, that is, implements the encoding method for CSI or the decoding method for CSI according to the disclosure). Regarding claim 19, Chen et al. teach an electronic apparatus, comprising a processor and a memory storing a computer program, which when executed by the processor, causes the processor to carry out the method as claimed in claim 7 (Fig. 15, [0215-0216], the communication device includes: one or more processors 1610, a memory 1620, and interfaces for connecting various components, including a high-speed interface and a low-speed interface. Various components are connected to each other by different buses and may be installed on a common main board or in other ways as required. The processor may process instructions executed within the communication device, including instructions stored in. The memory 1620 is a non-transitory computer readable storage medium provided by the disclosure. 
The memory is configured to store instructions executed by at least one processor, to enable the at least one processor to execute the encoding method for CSI or the decoding method for CSI according to the disclosure. The non-transitory computer readable storage medium according to the disclosure is configured to store computer instructions. The computer instructions are configured to enable a computer to execute the encoding method for CSI or the decoding method for CSI according to the disclosure). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pezeshki et al. with Park et al. by incorporating the features as taught by Chen et al. in order to provide a more effective and efficient system that is capable of generating the second compressed codebook by sequentially quantizing, encoding and modulating the first compressed codebook. The motivation is to support an improved method for transmitting and receiving channel state information in a wireless communication system (see [0002]). Claim(s) 6 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US 2021/0195462 A1) in view of Park et al. (US 2021/0409991 A1) as applied to claims 1 and 7 above, and further in view of Chen et al. (US 2023/0084164 A1) (hereinafter Chen’164). Pezeshki et al. and Park et al. disclose the claimed limitations as described in paragraph 6 above. Pezeshki et al. and Park et al. 
do not expressly disclose the following features: regarding claim 6, wherein the encoder network structure comprises at least one of, a fully connected network structure, a convolution neural network structure, a recurrent neural network structure, or a residual network structure; regarding claim 13, wherein the decoder network structure comprises at least one of, a fully connected network structure, a convolutional neural network structure, a recurrent neural network structure, or a residual network structure. Regarding claim 6, Chen’164 teaches wherein the encoder network structure comprises at least one of, a fully connected network structure, a convolution neural network structure, a recurrent neural network structure, or a residual network structure (Fig. 6, is a block diagram illustrating an exemplary auto-encoder 600, in accordance with aspects of the present disclosure, [0074-0075] in machine learning (ML) based channel state information (CSI) compression and feedback, a user equipment (UE) trains an encoder/decoder neural network (NN) pair and sends the trained decoder model to a base station (e.g., gNB). The UE uses the encoder NN to create and to feed back the channel state feedback (CSF). The encoder NN may take the raw channel as input. A base station uses the decoder NN to recover the raw channel from the CSF. The UE uses a certain loss metric to train the encoder/decoder NN. The auto-encoder 600 includes an encoder 610 having a neural network (NN). The encoder 610 receives the channel realization and/or interference realization as an input and compresses the channel/interference realization). Regarding claim 13, Chen’164 teaches wherein the decoder network structure comprises at least one of, a fully connected network structure, a convolutional neural network structure, a recurrent neural network structure, or a residual network structure (Fig. 
6, [0074-0075] in machine learning (ML)-based channel state information (CSI) compression and feedback, a user equipment (UE) trains an encoder/decoder neural network (NN) pair and sends the trained decoder model to a base station (e.g., gNB). The UE uses the encoder NN to create and to feed back the channel state feedback (CSF). The encoder NN may take the raw channel as input. A base station uses the decoder NN to recover the raw channel from the CSF. The UE uses a certain loss metric to train the encoder/decoder NN. The auto-encoder 600 includes an encoder 610 having a neural network (NN). The encoder 610 receives the channel realization and/or interference realization as an input and compresses the channel/interference realization). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pezeshki et al. with Park et al. by incorporating the features as taught by Chen’164 in order to provide a more effective and efficient system that is capable of having an encoder and a decoder network structure that comprise at least one of, a fully connected network structure, a convolution neural network structure, a recurrent neural network structure, or a residual network structure. The motivation is to support an improved method for configurable 5G new radio (NR) channel state feedback (CSF) learning (see [0001]). Claim(s) 4 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US 2021/0195462 A1) in view of Park et al. (US 2021/0409991 A1) as applied to claims 1 and 7 above, and further in view of Jassal et al. (US 2020/0366326 A1). Pezeshki et al. and Park et al. disclose the claimed limitations as described in paragraph 6 above. Pezeshki et al. and Park et al. 
do not expressly disclose the following features: regarding claim 4, wherein before inputting the channel state information matrix into the encoder network structure to acquire the first compressed codebook corresponding to the CSI matrix, the method further comprises: receiving a plurality of sets of network parameters of encoder network structures sent by a base station through high-layer signaling or physical layer signaling; and acquiring a set of network parameters from the plurality of sets of network parameters of encoder network structures according to a channel condition, wherein the channel condition comprises at least one of a channel scenario, or a channel feature; regarding claim 11, further comprising, transmitting a plurality of sets of network parameters of encoder network structures to the terminal device through high-layer signaling or physical layer signaling, to instruct the terminal device to acquire a set of network parameters from the plurality of sets of network parameters of encoder network structures according to a channel condition, wherein the channel condition comprises at least one of, a channel scenario, or a channel feature. Regarding claim 4, Jassal et al. teach wherein before inputting the channel state information matrix into the encoder network structure to acquire the first compressed codebook corresponding to the CSI matrix, the method further comprises: receiving a plurality of sets of network parameters of encoder network structures sent by a base station through high-layer signaling or physical layer signaling; and acquiring a set of network parameters from the plurality of sets of network parameters of encoder network structures according to a channel condition, wherein the channel condition comprises at least one of a channel scenario, or a channel feature (Fig. 8, [0103-0104, 0110], the UE's NN can be configured using one or more objects defined by higher-layer signaling. 
The one or more objects configure the behavior the UE should adopt while performing channel compression. The higher-layer signaling object will carry parameters relevant for the configuration of a NN, e.g. the number of layers, the type of each layer (convolutional, fully connected), the number of neurons in each layer, the coefficients of the link between neurons of neighboring layers. The network trains its NNs such that it matches the input and the output as closely as possible by learning salient properties of the downlink channel (e.g. angles of departure and arrival, spatial correlations between antenna ports, temporal correlations). Some of the technical advantages of this embodiment are that: it can directly work with received pilot signals to derive a codeword; the UE is directly configured with encoding functions trained offline at the network side; this helps reduce uplink feedback overhead in terms of bits transmitted by having the UE use more compact channel representations and reduce the frequency at which pilot signals are transmitted); regarding claim 11, Jassal et al. teach further comprising, transmitting a plurality of sets of network parameters of encoder network structures to the terminal device through high-layer signaling or physical layer signaling, to instruct the terminal device to acquire a set of network parameters from the plurality of sets of network parameters of encoder network structures according to a channel condition, wherein the channel condition comprises at least one of, a channel scenario, or a channel feature (Fig. 8, [0103-0104, 0110], the UE's NN can be configured using one or more objects defined by higher-layer signaling. The one or more objects configure the behavior the UE should adopt while performing channel compression. The higher-layer signaling object will carry parameters relevant for the configuration of a NN, e.g. 
the number of layers, the type of each layer (convolutional, fully connected), the number of neurons in each layer, the coefficients of the link between neurons of neighboring layers. The network trains its NNs such that it matches the input and the output as closely as possible by learning salient properties of the downlink channel (e.g. angles of departure and arrival, spatial correlations between antenna ports, temporal correlations). Some of the technical advantages of this embodiment are that: it can directly work with received pilot signals to derive a codeword; the UE is directly configured with encoding functions trained offline at the network side; this helps reduce uplink feedback overhead in terms of bits transmitted by having the UE use more compact channel representations and reduce the frequency at which pilot signals are transmitted). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pezeshki et al. with Park et al. by incorporating the features as taught by Jassal et al. in order to provide a more effective and efficient system that is capable of, before inputting the channel state information matrix into the encoder network structure to acquire the first compressed codebook corresponding to the CSI matrix, receiving a plurality of sets of network parameters of encoder network structures sent by a base station through high-layer signaling or physical layer signaling, and acquiring a set of network parameters from the plurality of sets of network parameters of encoder network structures according to a channel condition, wherein the channel condition comprises at least one of a channel scenario or a channel feature, and having a number of elements of the first compressed codebook less than a number of elements of the CSI matrix. 
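The claim 4/11 mechanism that the rejection maps to Jassal can be sketched as a UE-side lookup over signaled parameter sets. The dictionary keys, field names, and values below are hypothetical placeholders for illustration, not parameters taken from Jassal or from the claims:

```python
# Hypothetical parameter sets a base station might signal via high-layer
# signaling; every field name and value here is an illustrative assumption.
SIGNALED_PARAM_SETS = {
    "indoor":  {"structure": "fully_connected", "num_layers": 2,
                "compression_ratio": 1 / 16},
    "outdoor": {"structure": "convolutional", "num_layers": 4,
                "compression_ratio": 1 / 32},
}

def select_encoder_params(channel_condition):
    """UE-side selection of one signaled parameter set according to the
    channel condition (here reduced to the channel scenario); the keying
    rule is an assumption, not recited in the claims."""
    return SIGNALED_PARAM_SETS[channel_condition["scenario"]]

print(select_encoder_params({"scenario": "outdoor"})["structure"])  # convolutional
```

The point of the sketch is only that the UE picks one of several pre-signaled encoder configurations based on the observed channel, rather than negotiating parameters per report.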
That is, receiving a plurality of sets of network parameters sent by a base station through high-layer signaling, and acquiring a set of network parameters according to a channel condition. The motivation is to support an improved method for reporting CSI feedback to the base station (see [0011]). Claim(s) 5 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US 2021/0195462 A1) in view of Park et al. (US 2021/0409991 A1) and Jassal et al. (US 2020/0366326 A1) as applied to claims 1 and 7 above, and further in view of Chen et al. (US 2024/0137093 A1). Pezeshki et al., Park et al. and Jassal et al. disclose the claimed limitations as described in paragraph 6 above. Pezeshki et al., Park et al. and Jassal et al. do not expressly disclose the following features: regarding claim 5, wherein after acquiring the set of network parameters from the plurality of network parameters of encoder network structures according to the channel condition, the method further comprises, transmitting the acquired set of network parameters to the base station, to instruct the base station to process the first compressed codebook according to a decoder network structure corresponding to the set of network parameters to acquire the CSI matrix; regarding claim 12, wherein, the decoder network structure comprises at least one network parameter each comprising at least one of, a network layer type, a network layer number, a network layer mapping, a network layer weight, a network layer bias, a network layer weight normalization coefficient, a network layer activation function, or a compression ratio, wherein the compression ratio is indicative of a ratio of the number of elements of the first compressed codebook to the number of elements of the CSI matrix. Regarding claim 5, Chen et al. 
teach wherein after acquiring the set of network parameters from the plurality of network parameters of encoder network structures according to the channel condition, the method further comprises, transmitting the acquired set of network parameters to the base station, to instruct the base station to process the first compressed codebook according to a decoder network structure corresponding to the set of network parameters to acquire the CSI matrix (Fig. 10, [0133-0134, 0147], since the correlation of the real part of the CSI matrix and the correlation of the imaginary part of the CSI matrix are similar, parameters of the compression encoders used to encode the real part and the imaginary part are often similar or the same. Therefore, optionally, based on model parameters of the first target CSI compression encoder, a second target CSI compression encoder is constructed, the real part and the imaginary part of the target CSI matrix are extracted, and the real part and the imaginary part of the target CSI matrix are respectively input into corresponding target CSI compression encoders for encoding. The UE sends s_re or s_im to the network device through the feedback link. The network device inputs s_re or s_im into the CSI decoder. The deconvolution layer of the first composite deconvolution layer in the CSI decoder adopts a deconvolution kernel with a size of f×1×3×3. The convolution layer of the second composite deconvolution layer and the convolution layer of the third deconvolution layer both adopt a deconvolution kernel with a size of f×f×3×3 and a deconvolution step of (1,1). The deconvolution layer of the fourth deconvolution layer adopts a deconvolution kernel with a size of f×1×3×5 and a deconvolution step of (1,2). 
The network device inputs s_re or s_im into the fully-connected layer for the fully-connected processing to output a vector of 1×(N_cc×(N_t/2)) for reconstructing to output a tensor with a size of 1×N_cc×(N_t/2) as the input of the first composite deconvolution layer. After the deconvolution layer of the first composite deconvolution layer performs the deconvolution processing on the tensor with the size of 1×N_cc×(N_t/2) to output a tensor with a size of f×N_cc×(N_t/2)). Regarding claim 12, Chen et al. teach wherein, the decoder network structure comprises at least one network parameter each comprising at least one of, a network layer type, a network layer number, a network layer mapping, a network layer weight, a network layer bias, a network layer weight normalization coefficient, a network layer activation function, or a compression ratio, wherein the compression ratio is indicative of a ratio of the number of elements of the first compressed codebook to the number of elements of the CSI matrix (Fig. 10, [0133-0134, 0147], since the correlation of the real part of the CSI matrix and the correlation of the imaginary part of the CSI matrix are similar, parameters of the compression encoders used to encode the real part and the imaginary part are often similar or the same. Therefore, optionally, based on model parameters of the first target CSI compression encoder, a second target CSI compression encoder is constructed, the real part and the imaginary part of the target CSI matrix are extracted, and the real part and the imaginary part of the target CSI matrix are respectively input into corresponding target CSI compression encoders for encoding. The UE sends s_re or s_im to the network device through the feedback link. The network device inputs s_re or s_im into the CSI decoder. 
The deconvolution layer of the first composite deconvolution layer in the CSI decoder adopts a deconvolution kernel with a size of f×1×3×3. The convolution layer of the second composite deconvolution layer and the convolution layer of the third deconvolution layer both adopt a deconvolution kernel with a size of f×f×3×3 and a deconvolution step of (1,1). The deconvolution layer of the fourth deconvolution layer adopts a deconvolution kernel with a size of f×1×3×5 and a deconvolution step of (1,2). The network device inputs s_re or s_im into the fully-connected layer for the fully-connected processing to output a vector of 1×(N_cc×(N_t/2)) for reconstructing to output a tensor with a size of 1×N_cc×(N_t/2) as the input of the first composite deconvolution layer. After the deconvolution layer of the first composite deconvolution layer performs the deconvolution processing on the tensor with the size of 1×N_cc×(N_t/2) to output a tensor with a size of f×N_cc×(N_t/2)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pezeshki et al. with Park et al. and Jassal et al. by incorporating the features as taught by Chen et al. in order to provide a more effective and efficient system that is capable of acquiring the set of network parameters from the plurality of sets of network parameters of encoder network structures according to the channel condition, and then transmitting the acquired set of network parameters to the base station, to instruct the base station to process the first compressed codebook according to a decoder network structure corresponding to the set of network parameters to acquire the CSI matrix. 
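The tensor sizes quoted from Chen's decoder front end can be checked with a short shape trace: the fully-connected layer emits a 1×(N_cc×(N_t/2)) vector, which is reshaped to 1×N_cc×(N_t/2) and expanded to f×N_cc×(N_t/2) by the first composite deconvolution layer. The concrete values of N_cc, N_t, and f below are illustrative assumptions, not values from the reference:

```python
def decoder_shape_trace(n_cc, n_t, f):
    """Trace tensor shapes through the decoder front end described in
    the quoted passage; only the shape arithmetic is modeled, not the
    deconvolution weights themselves."""
    fc_out = (1, n_cc * (n_t // 2))   # fully-connected output vector
    reshaped = (1, n_cc, n_t // 2)    # reshaped input to first deconv layer
    first_deconv = (f, n_cc, n_t // 2)  # f output channels, stride (1,1)
    return [fc_out, reshaped, first_deconv]

print(decoder_shape_trace(n_cc=32, n_t=64, f=16))
# [(1, 1024), (1, 32, 32), (16, 32, 32)]
```

The trace makes the quoted bookkeeping concrete: the element count is preserved through the reshape, and only the channel dimension grows at the first deconvolution.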
It would also be capable of utilizing a decoder network structure that comprises at least one network parameter each comprising at least one of, a network layer type, a network layer number, a network layer mapping, a network layer weight, a network layer bias, a network layer weight normalization coefficient, a network layer activation function, or a compression ratio, wherein the compression ratio is indicative of a ratio of the number of elements of the first compressed codebook to the number of elements of the CSI matrix. The motivation is to support an improved encoding method for channel state information (CSI), a decoding method for CSI, and a communication device (see [0002]). Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US 2021/0195462 A1) in view of Park et al. (US 2021/0409991 A1) as applied to claim 7 above, and further in view of Wang et al. (IEEE, Compressive Sampled CSI Feedback Method Based on Deep Learning for FDD Massive MIMO Systems). Pezeshki et al. and Park et al. disclose the claimed limitations as described in paragraph 6 above. Pezeshki et al. and Park et al. do not expressly disclose the following features: regarding claim 9, further comprising, successively demodulating, decoding, and dequantizing the second compressed codebook, in response to a reception of the second compressed codebook transmitted by the terminal device, to obtain the first compressed codebook. Regarding claim 9, Wang et al. teach further comprising, successively demodulating, decoding, and dequantizing the second compressed codebook, in response to a reception of the second compressed codebook transmitted by the terminal device, to obtain the first compressed codebook (Fig. 
1, [page 2, 2nd col, para 1st, and 2nd, page 3, col 1st, chap A, col 2nd, chap B, para 1st] inspired by the prior knowledge of channel correlations in time and frequency dimensions, the first step of SampleDL method is to compress the CSI matrix from time/frequency dimension by sampling, see fig. 1. The UE feeds the compressed CSI back to the BS through the error free feedback link. The assumption is justifiable because the feedback link is usually protected using error correction coding and hence has a very low error probability [37]. When the BS receives the compressed CSI (Hec), the first step is to recover Hes by a decompression NN (neural network). In detail, the UE quantizes the compressed downlink CSI with limited bits for further reducing the feedback overhead and practical transmitting. The final values of downlink CSI that the BS receives can be given as Hqc = fQ(fC(H, Φ1)) (2), where Φ1 denotes the parameters of the compression function. When the BS receives Hqc, the dequantization and decompression (decoding) will be used for recovering the original downlink CSI). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pezeshki et al. with Park et al. by incorporating the features as taught by Wang et al. in order to provide a more effective and efficient system that is capable of demodulating, decoding, and dequantizing the second compressed codebook, in response to a reception of the second compressed codebook transmitted by the terminal device, to obtain the first compressed codebook. The motivation is to support an improved method for the CSI feedback overhead restriction due to the limited uplink resources assigned to CSI feedback (see [page 1, 2nd, col 2 li 3-5]). 
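Wang's feedback chain Hqc = fQ(fC(H, Φ1)), followed by dequantization and decompression at the BS, can be illustrated with a minimal uniform scalar quantizer. The bit width, value range, and sample values are assumptions for illustration, and a real CSI feedback chain would also include the encoding and modulation steps recited in claim 9:

```python
def quantize(values, bits=4, lo=-1.0, hi=1.0):
    """Uniform scalar quantizer fQ: clamp each value to [lo, hi] and
    map it to one of 2**bits integer levels."""
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    return [round((min(max(v, lo), hi) - lo) / step) for v in values]

def dequantize(indices, bits=4, lo=-1.0, hi=1.0):
    """Inverse mapping applied at the base station before the
    decompression (decoding) network recovers the CSI."""
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    return [lo + i * step for i in indices]

codebook = [0.10, -0.52, 0.87]            # illustrative compressed CSI values
recovered = dequantize(quantize(codebook))
# The round-trip error is bounded by half a quantization step.
print(max(abs(a - b) for a, b in zip(codebook, recovered)))
```

Fewer bits shrink the uplink feedback overhead at the cost of a larger half-step reconstruction error, which is the trade-off the reference exploits.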
Allowable Subject Matter

Claims 20-21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant’s arguments with respect to claim(s) 1-2, 4-9, 11-14 and 16-19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED M BOKHARI whose telephone number is (571) 270-3115. The examiner can normally be reached Monday through Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kwang B Yao, can be reached at 571-272-3182. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SYED M BOKHARI/
Examiner, Art Unit 2473
1/27/2026

/KWANG B YAO/
Supervisory Patent Examiner, Art Unit 2473

Prosecution Timeline

Sep 22, 2023: Application Filed
Sep 11, 2025: Non-Final Rejection (§103)
Dec 17, 2025: Response Filed
Jan 27, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604260: SYSTEMS AND METHODS FOR DYNAMIC SLICE SELECTION IN A WIRELESS NETWORK (granted Apr 14, 2026; 2y 5m to grant)
Patent 12574823: RADIO LINK CONTROL (RLC) RECONFIGURATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12557002: Assigning User Plane Functions (UPFs) within a 5G core network (granted Feb 17, 2026; 2y 5m to grant)
Patent 12557074: WIRELESS LINK CONFIGURATION (granted Feb 17, 2026; 2y 5m to grant)
Patent 12549648: WIRELESS COMMUNICATION METHOD USING MULTIPLE LINKS, AND WIRELESS COMMUNICATION TERMINAL USING SAME (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 99% (+18.3%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 841 resolved cases by this examiner. Grant probability derived from career allow rate.
