Prosecution Insights
Last updated: April 19, 2026
Application No. 18/684,803

METHOD AND APPARATUS FOR EFFICIENT CHANNEL STATE INFORMATION REPRESENTING

Non-Final OA: §102, §103
Filed: Feb 19, 2024
Examiner: VU, QUOC THAI NGOC
Art Unit: 2642
Tech Center: 2600 — Communications
Assignee: MediaTek Inc.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (above average; 411 granted / 591 resolved; +7.5% vs TC avg)
Interview Lift: +30.3% (strong), measured over resolved cases with an interview
Typical Timeline: 2y 10m average prosecution; 38 applications currently pending
Career History: 629 total applications across all art units
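As a sanity check on the headline figures, the sketch below (an editorial Python snippet, not output of the tool) recomputes the career allow rate from the raw counts and shows one plausible reading of the 99% with-interview figure, assuming the tool simply adds the +30.3-point lift to the base rate and caps the result at 99%; that cap is an assumption, not something the page states.

```python
# Recompute the examiner's headline stats from the raw counts shown above.
# Assumption (not stated by the tool): "with interview" = base allow rate
# plus the reported lift, capped at 99%.

granted, resolved = 411, 591

career_allow_rate = granted / resolved          # 411/591 = 0.695 -> shown as 70%
interview_lift = 0.303                          # reported lift, in probability points

with_interview = min(career_allow_rate + interview_lift, 0.99)

print(f"Career allow rate: {career_allow_rate:.1%}")  # 69.5%
print(f"With interview:    {with_interview:.1%}")     # 99.0% (capped)
```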

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§102: 23.3% (-16.7% vs TC avg)
§112: 6.9% (-33.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 591 resolved cases.
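One detail worth noting (an editorial check, not tool output): subtracting each reported delta from the examiner's per-statute rate backs out the Tech Center baseline being compared against, and every statute yields the same 40.0%, which suggests a single flat baseline estimate rather than per-statute figures.

```python
# Back out the implied Tech Center baseline: baseline = rate - delta.
rates  = {"§101": 4.5,   "§103": 61.1, "§102": 23.3, "§112": 6.9}     # examiner, %
deltas = {"§101": -35.5, "§103": 21.1, "§102": -16.7, "§112": -33.1}  # vs TC avg

for statute, rate in rates.items():
    print(f"{statute}: implied TC average = {rate - deltas[statute]:.1f}%")
# All four print 40.0%, so the "TC average estimate" appears to be one flat
# baseline applied across statutes.
```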

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on February 19, 2024 has been considered by the Examiner and made of record in the application file.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-8, 10-13, 15-18 and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Li et al. (US 2025/0141514, “Li”).

Regarding claim 1, Li teaches a method of compressing channel state information (CSI) ([0005] “processing channel state information, including… performing compression coding”), the method comprising:

classifying, at a first device, a CSI element into one of multiple classes of CSI elements, each class of CSI elements being associated with a different one of multiple encoders ([0005] “processing channel state information, including: pre-processing original channel information to generate first channel information including a plurality of first channel information components; performing compression coding according to at least one first channel information component of the first channel information to generate second channel information.” [0208] “Through offline training or a process combining offline training and online training, neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder. Further, the terminal and the base station respectively store neural network parameters of K0 sets of encoders and decoders… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel.” Note: the K0 sets of auto-encoders corresponding to scene of channel, angle spread, delay spread, or Doppler spread of channel teach the claimed “classes of CSI elements”);

compressing, at the first device, the CSI element based on one of the multiple encoders that is associated with the one of the multiple classes of CSI elements ([0208] “Further, the terminal and the base station respectively store neural network parameters of K0 sets of encoders and decoders… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel.” [0129] “the terminal may perform compression coding on all first channel information components of the first channel information to obtain second channel information, and the second channel information is fed back to the base station”); and

sending, to a second device, the compressed CSI element and a class index of the one of the multiple classes of CSI elements ([0129] “the terminal may perform compression coding on all first channel information components of the first channel information to obtain second channel information, and the second channel information is fed back to the base station” [0208] “The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel, or the like, and transmits an index of the selected encoder to the base station through physical layer signaling and/or higher-layer signaling. Based on the index of the encoder fed back by the terminal, the base station selects the corresponding decoder, and processes the received second channel information to obtain third channel information.” [0113] “When the terminal determines to perform compression coding on part of first channel information components of the first channel information, it is further determined which part of the first channel information components is to be used as the component to be compressed. The information of the feedback mode fed back to the base station includes information indicating performing compression coding on part of first channel information components of the first channel information, and information of the component to be compressed, so that the base station can use uplink channel information in the same form as the second channel information to assist the downlink channel information.”).

Regarding claim 2, Li teaches claim 1 and further teaches further comprising: clustering, at the first device, a plurality of CSI elements into the multiple classes of CSI elements; and training, at the first device, a pair of encoder-decoder algorithm for each class of CSI elements ([0208] “Through offline training or a process combining offline training and online training, neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder. Further, the terminal and the base station respectively store neural network parameters of K0 sets of encoders and decoders. In use, the base station configures indexes of the K0 sets of encoders and decoders through higher-layer signaling according to the channel condition, so that the terminal knows from which the parameter set of the encoder is used upon receiving the index. In other words, the base station configures indexes of the pairs of encoders and decoders, while the terminal receives an index of one of the pairs of encoders and decoders and determines parameter corresponding to the encoder. The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel”).
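To make the mapped scheme concrete, here is a minimal editorial sketch of the pattern the examiner reads onto claims 1-2: classify a CSI element into one of K classes, compress it with the encoder associated with that class, and feed back the code together with the class index. The linear "encoders," nearest-centroid classifier, and all dimensions below are stand-ins; nothing in this sketch is disclosed by Li or the application.

```python
# Editorial sketch of per-class CSI compression with class-index feedback.
# Stand-in "encoders" are random linear projections; Li uses trained
# neural-network auto-encoders (K0 sets), which this does not reproduce.
import numpy as np

K = 4                                       # number of CSI classes ("K0 sets" in Li)
rng = np.random.default_rng(0)

encoders = [rng.standard_normal((8, 64)) for _ in range(K)]  # 64-dim CSI -> 8-dim code
centroids = rng.standard_normal((K, 64))                     # per-class representatives

def classify(csi: np.ndarray) -> int:
    """Assign the CSI element to the class with the nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - csi, axis=1)))

def compress_and_send(csi: np.ndarray) -> tuple[np.ndarray, int]:
    """Return (compressed CSI, class index): the pair fed back to the receiver."""
    k = classify(csi)
    return encoders[k] @ csi, k

code, class_index = compress_and_send(rng.standard_normal(64))
print(code.shape, class_index)              # (8,) and an index in [0, K)
```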
Regarding claim 3, Li teaches claim 1 and further teaches wherein the compressing includes: compressing, at the first device, the CSI element based on one of the multiple pairs of encoder-decoder algorithms that is associated with the one of the multiple classes of CSI elements ([0005] “In a first aspect, an embodiment of the present disclosure provides a method for processing channel state information, including: pre-processing original channel information to generate first channel information including a plurality of first channel information components; performing compression coding according to at least one first channel information component of the first channel information to generate second channel information.” [0208] “Through offline training or a process combining offline training and online training, neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder”).

Regarding claim 5, Li teaches claim 1 and further teaches wherein a number of the multiple classes of CSI elements is predetermined ([0208] “neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel.” It is understood that K0 is a known number).

Regarding claim 6, Li teaches a method of decompressing channel state information (CSI) ([0006] “processing channel state information, including:… performing decompression decoding”), the method comprising:

receiving, at an apparatus, a compressed CSI element and a class index of one of multiple classes of CSI elements, each class of CSI elements being associated with a different one of multiple decoders ([0006] “receiving channel state information including at least second channel information obtained by performing compression coding according to at least one first channel information component of first channel information” [0208] “Through offline training or a process combining offline training and online training, neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel, or the like, and transmits an index of the selected encoder to the base station. Based on the index of the encoder fed back by the terminal, the base station selects the corresponding decoder, and processes the received second channel information to obtain third channel information”);

determining, at the apparatus, one of the multiple decoders based on the class index ([0208] “Based on the index of the encoder fed back by the terminal, the base station selects the corresponding decoder, and processes the received second channel information to obtain third channel information”); and

decompressing, at the apparatus, the CSI element based on the one of the multiple decoders to obtain a decompressed CSI element ([0208] “Based on the index of the encoder fed back by the terminal, the base station selects the corresponding decoder, and processes the received second channel information to obtain third channel information.” [0206] “At the base station, there is a decoder corresponding to the encoder, the decoder includes a decompression layer and a second processing layer”).

Regarding claim 7, Li teaches claim 6 and further teaches wherein each class of CSI elements is associated with a pair of encoder-decoder algorithm ([0208] “Through offline training or a process combining offline training and online training, neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel, or the like”).

Regarding claim 8, Li teaches claim 7 and further teaches wherein the decompressing includes: decompressing, at the apparatus, the CSI element based on one of the multiple pairs of encoder-decoder algorithms that is associated with the one of the multiple classes of CSI elements ([0208] “Based on the index of the encoder fed back by the terminal, the base station selects the corresponding decoder, and processes the received second channel information to obtain third channel information.” [0206] “At the base station, there is a decoder corresponding to the encoder, the decoder includes a decompression layer and a second processing layer”).

Regarding claim 10, Li teaches claim 6 and further teaches wherein a number of the multiple classes of CSI elements is predetermined ([0208] “neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel.” It is understood that K0 is a known number).

Regarding claim 11, Li teaches an apparatus, comprising: processing circuitry (terminal of FIG. 23, [0183]) configured to:

classify a channel state information (CSI) element into one of multiple classes of CSI elements, each class of CSI elements being associated with a different one of multiple encoders ([0005] “processing channel state information, including: pre-processing original channel information to generate first channel information including a plurality of first channel information components; performing compression coding according to at least one first channel information component of the first channel information to generate second channel information.” [0208] “Through offline training or a process combining offline training and online training, neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder. Further, the terminal and the base station respectively store neural network parameters of K0 sets of encoders and decoders… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel.” Note: the K0 sets of auto-encoders corresponding to scene of channel, angle spread, delay spread, or Doppler spread of channel teach the claimed “classes of CSI elements”);

compress the CSI element based on one of the multiple encoders that is associated with the one of the multiple classes of CSI elements ([0208] “Further, the terminal and the base station respectively store neural network parameters of K0 sets of encoders and decoders… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel.” [0129] “the terminal may perform compression coding on all first channel information components of the first channel information to obtain second channel information, and the second channel information is fed back to the base station”); and

send, to a second apparatus, the compressed CSI element and a class index of the one of the multiple classes of CSI elements ([0129] “the terminal may perform compression coding on all first channel information components of the first channel information to obtain second channel information, and the second channel information is fed back to the base station” [0208] “The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel, or the like, and transmits an index of the selected encoder to the base station through physical layer signaling and/or higher-layer signaling. Based on the index of the encoder fed back by the terminal, the base station selects the corresponding decoder, and processes the received second channel information to obtain third channel information.” [0113] “When the terminal determines to perform compression coding on part of first channel information components of the first channel information, it is further determined which part of the first channel information components is to be used as the component to be compressed. The information of the feedback mode fed back to the base station includes information indicating performing compression coding on part of first channel information components of the first channel information, and information of the component to be compressed, so that the base station can use uplink channel information in the same form as the second channel information to assist the downlink channel information”).
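The companion receiver side, as mapped against claims 6-11: the base station uses the signalled class index to select the paired decoder and reconstruct the CSI element. Again an editorial sketch under stated assumptions; pseudo-inverses of illustrative linear encoders stand in for Li's trained decoder networks.

```python
# Editorial sketch of index-driven decoder selection and CSI reconstruction.
import numpy as np

rng = np.random.default_rng(1)
K = 4
encoders = [rng.standard_normal((8, 64)) for _ in range(K)]   # stand-in encoders
decoders = [np.linalg.pinv(E) for E in encoders]              # one decoder per class

def decompress(code: np.ndarray, class_index: int) -> np.ndarray:
    """Select the decoder paired with the received class index and decode."""
    return decoders[class_index] @ code

csi = rng.standard_normal(64)
k = 2                                    # class index received alongside the code
recovered = decompress(encoders[k] @ csi, k)
print(recovered.shape)                   # (64,): a lossy reconstruction of the CSI
```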
Regarding claim 12, Li teaches claim 1 and further teaches to: cluster a plurality of CSI elements into the multiple classes of CSI elements; and train a pair of encoder-decoder algorithm for each class of CSI elements ([0208] “Through offline training or a process combining offline training and online training, neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder. Further, the terminal and the base station respectively store neural network parameters of K0 sets of encoders and decoders. In use, the base station configures indexes of the K0 sets of encoders and decoders through higher-layer signaling according to the channel condition, so that the terminal knows from which the parameter set of the encoder is used upon receiving the index. In other words, the base station configures indexes of the pairs of encoders and decoders, while the terminal receives an index of one of the pairs of encoders and decoders and determines parameter corresponding to the encoder. The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel”).

Regarding claim 13, Li teaches claim 12 and further teaches to: compress the CSI element based on one of the multiple pairs of encoder-decoder algorithms that is associated with the one of the multiple classes of CSI elements ([0005] “In a first aspect, an embodiment of the present disclosure provides a method for processing channel state information, including: pre-processing original channel information to generate first channel information including a plurality of first channel information components; performing compression coding according to at least one first channel information component of the first channel information to generate second channel information.” [0208] “Through offline training or a process combining offline training and online training, neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder”).

Regarding claim 15, Li teaches claim 11 and further teaches wherein a number of the multiple classes of CSI elements is predetermined ([0208] “neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel.” It is understood that K0 is a known number).

Regarding claim 16, Li teaches an apparatus, comprising: processing circuitry (base station of FIG. 24, [0186]) configured to:

receive a compressed channel state information (CSI) element and a class index of one of multiple classes of CSI elements, each class of CSI elements being associated with a different one of multiple decoders ([0006] “receiving channel state information including at least second channel information obtained by performing compression coding according to at least one first channel information component of first channel information” [0208] “Through offline training or a process combining offline training and online training, neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel, or the like, and transmits an index of the selected encoder to the base station. Based on the index of the encoder fed back by the terminal, the base station selects the corresponding decoder, and processes the received second channel information to obtain third channel information”);

determine one of the multiple decoders based on the class index ([0208] “Based on the index of the encoder fed back by the terminal, the base station selects the corresponding decoder, and processes the received second channel information to obtain third channel information”); and

decompress the CSI element based on the one of the multiple decoders to obtain a decompressed CSI element ([0208] “Based on the index of the encoder fed back by the terminal, the base station selects the corresponding decoder, and processes the received second channel information to obtain third channel information.” [0206] “At the base station, there is a decoder corresponding to the encoder, the decoder includes a decompression layer and a second processing layer”).

Regarding claim 17, Li teaches claim 16 and further teaches wherein each class of CSI elements is associated with a pair of encoder-decoder algorithm ([0208] “Through offline training or a process combining offline training and online training, neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel, or the like”).

Regarding claim 18, Li teaches claim 17 and further teaches to: decompress the CSI element based on one of the multiple pairs of encoder-decoder algorithms that is associated with the one of the multiple classes of CSI elements ([0208] “Based on the index of the encoder fed back by the terminal, the base station selects the corresponding decoder, and processes the received second channel information to obtain third channel information.” [0206] “At the base station, there is a decoder corresponding to the encoder, the decoder includes a decompression layer and a second processing layer”).
Regarding claim 20, Li teaches claim 6 and further teaches wherein a number of the multiple classes of CSI elements is predetermined ([0208] “neural network parameters of K0 sets of auto-encoders are obtained, where each auto-encoder includes a pair of encoder and decoder… The terminal selects one encoder according to at least one factor, such as a scene of channel, angle spread, delay spread, or Doppler spread of channel.” It is understood that K0 is a known number).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 4, 9, 14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Wang et al. (CN 110337066 A, “Wang”).

Regarding claim 4, Li teaches claim 1 above but fails to teach wherein the clustering includes: clustering, at the first device, the plurality of CSI elements into the multiple classes of CSI elements based on a K-mean clustering algorithm. Wang teaches wherein the clustering includes: clustering, at the first device, the plurality of CSI elements into the multiple classes of CSI elements based on a K-mean clustering algorithm (“the original CSI data collected in the step four for further processing to obtain the activity feature information, using an extremum removing in the CSI data extremum detection algorithm based on K-means clustering” – see English translation section, page 6). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include the feature wherein the clustering includes: clustering, at the first device, the plurality of CSI elements into the multiple classes of CSI elements based on a K-mean clustering algorithm, as taught by Wang in Li, to effectively remove data anomalies.
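For the §103 combination, the imported feature is K-means clustering of CSI elements into the classes that each receive an encoder-decoder pair. Below is a compact editorial sketch of plain Lloyd's-iteration K-means over synthetic data; the dimensions, K, and iteration count are arbitrary illustrative choices, not parameters from Li or Wang.

```python
# Editorial sketch: cluster a batch of CSI elements into K classes (K-means).
import numpy as np

def kmeans(X: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """Return (labels, centroids) from Lloyd's iterations over the rows of X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(iters):
        # Assign each CSI element to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old centroid if a cluster goes empty.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

X = np.random.default_rng(0).standard_normal((200, 64))  # synthetic CSI elements
labels, centroids = kmeans(X, k=4)
print(np.bincount(labels, minlength=4))                  # class sizes
```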
Regarding claim 9, Li teaches claim 6 above but fails to teach wherein the multiple classes of CSI elements are clustered from a plurality of CSI elements based on a K-mean clustering algorithm. Wang teaches wherein the multiple classes of CSI elements are clustered from a plurality of CSI elements based on a K-mean clustering algorithm (“the original CSI data collected in the step four for further processing to obtain the activity feature information, using an extremum removing in the CSI data extremum detection algorithm based on K-means clustering” – see English translation section, page 6). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include the feature wherein the multiple classes of CSI elements are clustered from a plurality of CSI elements based on a K-mean clustering algorithm, as taught by Wang in Li, to effectively remove data anomalies.

Regarding claim 14, Li teaches claim 12 above but fails to teach to cluster the plurality of CSI elements into the multiple classes of CSI elements based on a K-mean clustering algorithm. Wang teaches to cluster the plurality of CSI elements into the multiple classes of CSI elements based on a K-mean clustering algorithm (“the original CSI data collected in the step four for further processing to obtain the activity feature information, using an extremum removing in the CSI data extremum detection algorithm based on K-means clustering” – see English translation section, page 6). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include the feature to cluster the plurality of CSI elements into the multiple classes of CSI elements based on a K-mean clustering algorithm, as taught by Wang in Li, to effectively remove data anomalies.

Regarding claim 19, Li teaches claim 16 above but fails to teach wherein the multiple classes of CSI elements are clustered from a plurality of CSI elements based on a K-mean clustering algorithm. Wang teaches wherein the multiple classes of CSI elements are clustered from a plurality of CSI elements based on a K-mean clustering algorithm (“the original CSI data collected in the step four for further processing to obtain the activity feature information, using an extremum removing in the CSI data extremum detection algorithm based on K-means clustering” – see English translation section, page 6). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include the feature wherein the multiple classes of CSI elements are clustered from a plurality of CSI elements based on a K-mean clustering algorithm, as taught by Wang in Li, to effectively remove data anomalies.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC THAI NGOC VU whose telephone number is (571)270-5901. The examiner can normally be reached M-F, 9:30AM-6:00PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rafael Perez-Gutierrez, can be reached at 571-272-7915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QUOC THAI N VU/
Primary Examiner, Art Unit 2642

Prosecution Timeline

Feb 19, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598370, ELECTRONIC DEVICE: granted Apr 07, 2026 (2y 5m to grant)
Patent 12597955, HIGH FREQUENCY MODULE AND COMMUNICATION APPARATUS: granted Apr 07, 2026 (2y 5m to grant)
Patent 12593275, Power Saving Method for Monitoring Data Channel: granted Mar 31, 2026 (2y 5m to grant)
Patent 12592727, OVERSAMPLED MULTIPLE-CORRELATOR SYMBOL SYNCHRONIZATION: granted Mar 31, 2026 (2y 5m to grant)
Patent 12587921, STORING BEAM RELATED INFORMATION OF USED BEAM AT THE TIME OF INITIATING THE TIME-TO-TRIGGER PROCEDURE: granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 99% (+30.3%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 591 resolved cases by this examiner. Grant probability derived from career allow rate.
