Prosecution Insights
Last updated: April 18, 2026
Application No. 18/240,350

SYSTEMS, METHODS, AND APPARATUS FOR ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING BASED REPORTING OF COMMUNICATION CHANNEL INFORMATION

Status: Final Rejection §103
Filed: Aug 30, 2023
Examiner: CHANG, JUNGWON
Art Unit: 2454
Tech Center: 2400 — Computer Networks
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% — above average (702 granted / 815 resolved; +28.1% vs TC avg)
Interview Lift: +14.9% (moderate lift, measured across resolved cases with interview)
Typical Timeline: 3y 1m average prosecution; 31 applications currently pending
Career History: 846 total applications across all art units
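
As a quick sanity check, the headline figures above are mutually consistent if the interview lift is applied multiplicatively to the baseline allow rate. A minimal Python sketch; all inputs come from this page, and the multiplicative treatment of the lift is our assumption:

```python
# Sanity-check the examiner stats shown above; all inputs come from this
# page, and treating the interview lift as multiplicative is an assumption.
granted, resolved = 702, 815

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")      # ~86.1%, shown as 86%

interview_lift = 0.149                             # the +14.9% interview lift
with_interview = allow_rate * (1 + interview_lift)
print(f"With interview:    {with_interview:.1%}")  # ~99.0%, matching the 99% figure
```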

Statute-Specific Performance

Statute    Rate     vs TC avg
§101       10.0%    -30.0%
§103       53.3%    +13.3%
§102       11.3%    -28.7%
§112        8.8%    -31.2%

Tech Center averages are estimates, based on career data from 815 resolved cases.
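
The deltas in this table all imply the same Tech Center baseline, which suggests the dashboard compares every statute against a single 40% TC-average estimate. A short check, with figures copied from the table above:

```python
# Back out the Tech Center average implied by each row of the table above.
examiner_rate = {"§101": 10.0, "§103": 53.3, "§102": 11.3, "§112": 8.8}
delta_vs_tc   = {"§101": -30.0, "§103": 13.3, "§102": -28.7, "§112": -31.2}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:4.1f}%  implied TC avg {tc_avg:.1f}%")
# Every statute implies the same 40.0% Tech Center baseline.
```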

Office Action

§103 (Non-Final, mailed Sep 06, 2025)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the application filed on 08/30/2023. Claims 1-20 are presented for examination.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US 2021/0184744), in view of Hao et al. (US 2020/0145075).

As to claims 1, 17 and 20, Pezeshki discloses the invention as claimed, including an apparatus comprising: a receiver configured to receive a reference signal using a channel (Fig. 2, 215; Fig. 5, 510; ¶0004, “a UE may receive the reference signals”; ¶0019, “receiving the control signaling may include operations, features, means, or instructions for receiving the control signaling that may be radio resource control (RRC) signaling, a medium access control (MAC) control element (MAC-CE), a downlink control channel transmission, or any combination thereof”; ¶0099, “UE 115-a may receive the reference signals via receiver 205-a at 215”; ¶0183, “receive the set of reference signal”); at least one processor configured to (¶0134; ¶0137; ¶0142; ¶0155): determine channel information based on the reference signal (Fig. 2, 220; ¶0099, “UE 115-a may perform channel measurements 220 based on the reference signals”; ¶0107, “a UE may receive one or more reference signals over one or more paths and measure each reference signal”); generate a representation based on the channel information using a first machine learning model (i.e., autoencoder, encoder) (¶0022, “machine learning processing on channel information measurements of the set of reference signal transmissions to identify the set of per-path AoAs”; ¶0099, “UE 115-a may perform channel measurements 220 based on the reference signals to produce a channel information matrix at 225”; ¶0100, “UE 115-a may input the channel information matrix into an encoder 230 to compress the channel information matrix. Encoder 230 may be a component of UE 115-a. Encoder 230 may compress the channel information matrix into a smaller form (e.g., codeword 235) in one or more encoding operations”); generate, based on the representation, precoding information using a second machine learning model (i.e., autoencoder, decoder) (¶0101, “base station 105-a may receive a signal that includes the codeword with receiver 205-b and may input the signal into a decoder attempting to recover the codeword. A decoder may be a component of base station 105-a. The decoder may decompress the codeword to retrieve a version of the original channel information matrix. In some cases, the decoder performs one or more mathematical techniques (e.g., batch normalization) to decompress the codeword”; ¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”); and a transmitter configured to transmit the representation and the channel quality information (Fig. 2, 205-b, 245; ¶0005, “The UE may transmit to the base station a feedback report that includes the multi-path channel cluster information for a defined number of paths for the reference signal”; ¶0094, “The UE 115 may provide feedback for beam selection, which may be a precoding matrix indicator (PMI) or codebook-based feedback (e.g., a multi-panel type codebook, a linear combination type codebook, a port selection type codebook)”; ¶0098, “CSI compression may compress a channel information matrix into a matrix of a smaller size, such as a codeword that may be more easily transmitted over the air”; ¶0100, “UE 115-a may transmit the codeword to base station 105-a using transmitter 210”; ¶0102; ¶0119, “a base station 105 may indicate to a UE 115 that the UE 115 is to feedback the AoAs of the top N most dominant paths 415 (e.g., the top 3 most dominant paths)”).

Although Pezeshki discloses channel quality information and precoding information (¶0094, “The UE 115 may provide feedback for beam selection, which may be a precoding matrix indicator (PMI) or codebook-based feedback (e.g., a multi-panel type codebook, a linear combination type codebook, a port selection type codebook)”; ¶0099, “channel quality indicator (CQI), etc. The channel information matrix may be a large matrix including one or more channel measurements 220 (e.g., RSRP, RSRQ, SINR, CQI, RSSI) for one or more received reference signals over one or more paths”; ¶0115; ¶0119), Pezeshki does not specifically disclose generating channel quality information based on the precoding information.

However, Hao discloses generating channel quality information based on the precoding information (Abstract, “a precoding matrix indicator (PMI) for a channel state information (CSI) report, and generate CQI accordingly”; ¶0005, “a precoder cycling granularity value, and a preceding matrix indicator (PMI) matrix for channel state information (CSI), and generate CQI accordingly”; ¶0006, “a precoder cycling granularity value associated with a transmission scheme for CQI, generating the CQI based on at least one of the determined time offset value or the determined precoder cycling granularity value, and transmitting, in a CSI report, the generated CQI”; ¶0018, “where the PMI reporting scheme includes full PMI reporting, partial PMI reporting, or no PMI reporting and deriving the CQI based on the determined PMI reporting scheme, a rank indicator (RI), or combination thereof”; ¶0028; ¶0037, “the CQI may be based on a PMI reporting scheme”; ¶0050, “deriving the CQI based at least in part on a RI, a first PMI matrix, a second PMI matrix”; ¶0111). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pezeshki to include generating channel quality information based on the precoding information, as taught by Hao, because it would provide a more accurate and practical assessment of the channel, thereby maximizing data rates and system efficiency (Hao; ¶0018; ¶0050).

As to claim 2, Pezeshki discloses the apparatus of claim 1, wherein the at least one processor is configured to receive the second machine learning model (¶0101, “base station 105-a may receive a signal that includes the codeword with receiver 205-b and may input the signal into a decoder attempting to recover the codeword. A decoder may be a component of base station 105-a. The decoder may decompress the codeword to retrieve a version of the original channel information matrix”; ¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”).

As to claim 3, Pezeshki discloses the apparatus of claim 1, wherein the at least one processor is configured to train the second machine learning model based on a reference model (¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”; ¶0111, “The neural network may be trained to learn a mapping of input values to output values (e.g., supervised learning). In the example of per-path AoA, reference channel measurements may be input to the neural network, and the neural network may output estimated AoA, ToA, positioning information, or the like. The UE (or some other device) may train the neural network to learn the mapping from reference signal measurements to per-path AoAs, or the like”).

As to claim 4, Pezeshki discloses the apparatus of claim 1, wherein the channel quality information comprises a channel quality indicator (CQI) (¶0099, “channel quality indicator (CQI), etc. The channel information matrix may be a large matrix including one or more channel measurements 220 (e.g., RSRP, RSRQ, SINR, CQI, RSSI) for one or more received reference signals over one or more paths”; ¶0115; ¶0119).
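
Stripped of the citations, the combination mapped above describes a concrete feedback loop: a UE-side model compresses a measured channel matrix into a codeword (the claimed representation), a network-side model reconstructs the channel and derives precoding information, and Hao is relied on for generating CQI from that precoding information. Below is a minimal numpy sketch of that loop; the random linear maps standing in for the trained first and second machine learning models, the SVD-based precoder, and the 4-bit CQI mapping are illustrative assumptions, not anything the references disclose.

```python
import numpy as np

rng = np.random.default_rng(0)
N_RX, N_TX, CODE_DIM = 4, 8, 16          # illustrative antenna/codeword sizes

# Random linear maps stand in for the trained "first" and "second" machine
# learning models (the autoencoder); a real system would learn these jointly.
enc_W = rng.standard_normal((CODE_DIM, N_RX * N_TX)) / np.sqrt(N_RX * N_TX)
dec_W = np.linalg.pinv(enc_W)

def ue_encode(H):
    """First model: compress the channel matrix H into a codeword."""
    return enc_W @ H.reshape(-1)

def nw_decode(codeword):
    """Second model: reconstruct channel info and derive precoding info."""
    H_hat = (dec_W @ codeword).reshape(N_RX, N_TX)
    _, _, vh = np.linalg.svd(H_hat)      # dominant direction as a rank-1 precoder
    return vh[0]

def cqi_from_precoder(H, w, noise=1e-2):
    """Hao's limitation: generate CQI from the precoding information."""
    sinr = np.linalg.norm(H @ w) ** 2 / noise
    return int(np.clip(np.log2(1 + sinr), 0, 15))   # coarse 4-bit CQI index

H = rng.standard_normal((N_RX, N_TX))    # channel measured from a reference signal
codeword = ue_encode(H)                  # the "representation" that is fed back
w = nw_decode(codeword)                  # precoding information at the network
print("codeword size:", codeword.size, "CQI:", cqi_from_precoder(H, w))
```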
As to claim 5, Pezeshki discloses the apparatus of claim 1, wherein the channel information comprises a channel matrix (¶0098, “CSI compression may compress a channel information matrix into a matrix of a smaller size”; ¶0099; ¶0100, “input the channel information matrix into an encoder 230 to compress the channel information matrix”).

As to claim 6, Pezeshki discloses the apparatus of claim 1, wherein the at least one processor is configured to combine the representation and the channel quality information (Fig. 2, 205-b, 245; ¶0005, “The UE may transmit to the base station a feedback report that includes the multi-path channel cluster information for a defined number of paths for the reference signal”; ¶0094, “The UE 115 may provide feedback for beam selection, which may be a precoding matrix indicator (PMI) or codebook-based feedback (e.g., a multi-panel type codebook, a linear combination type codebook, a port selection type codebook)”; ¶0098, “CSI compression may compress a channel information matrix into a matrix of a smaller size, such as a codeword that may be more easily transmitted over the air”; ¶0100, “UE 115-a may transmit the codeword to base station 105-a using transmitter 210”; ¶0102; ¶0119, “a base station 105 may indicate to a UE 115 that the UE 115 is to feedback the AoAs of the top N most dominant paths 415 (e.g., the top 3 most dominant paths)”).

As to claim 7, it is rejected for the same reasons set forth in claim 1 above. In addition, Pezeshki discloses an apparatus comprising: generate, using a compression scheme, the representation of the channel information based on the channel information using at least one machine learning model (¶0100, “UE 115-a may input the channel information matrix into an encoder 230 to compress the channel information matrix. Encoder 230 may be a component of UE 115-a. Encoder 230 may compress the channel information matrix into a smaller form (e.g., codeword 235) in one or more encoding operations”; ¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”; ¶0107, “The UE may input the channel information matrix to one or more of the encoder components of a neural network to compress the channel information matrix to a codeword”; ¶0109).

As to claim 8, Pezeshki discloses the apparatus of claim 7, wherein the at least one machine learning model comprises an encoder configured to perform spatial compression (¶0100, “UE 115-a may input the channel information matrix into an encoder 230 to compress the channel information matrix. Encoder 230 may be a component of UE 115-a. Encoder 230 may compress the channel information matrix into a smaller form (e.g., codeword 235) in one or more encoding operations”; ¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”; ¶0107, “The UE may input the channel information matrix to one or more of the encoder components of a neural network to compress the channel information matrix to a codeword”; ¶0109).

As to claim 9, Pezeshki discloses the apparatus of claim 8, wherein the encoder is configured to perform spatial compression for a subband (¶0094, “The UE 115 may report feedback that indicates precoding weights for one or more beam directions, and the feedback may correspond to a configured number of beams across a system bandwidth or one or more sub-bands”; ¶0100; ¶0107, “The UE may input the channel information matrix to one or more of the encoder components of a neural network to compress the channel information matrix to a codeword”; ¶0109).

As to claim 10, Pezeshki discloses the apparatus of claim 9, wherein the encoder is a first encoder, the subband is a first subband, and the at least one machine learning model comprises a second encoder configured to perform spatial compression for a second subband (¶0094, “The UE 115 may report feedback that indicates precoding weights for one or more beam directions, and the feedback may correspond to a configured number of beams across a system bandwidth or one or more sub-bands”; ¶0100; ¶0107, “The UE may input the channel information matrix to one or more of the encoder components of a neural network to compress the channel information matrix to a codeword”; ¶0109).

As to claim 11, Pezeshki discloses the apparatus of claim 10, wherein the at least one machine learning model comprises a third encoder configured to perform frequency compression for the first subband and the second subband (¶0094, “The UE 115 may report feedback that indicates precoding weights for one or more beam directions, and the feedback may correspond to a configured number of beams across a system bandwidth or one or more sub-bands”; ¶0100, “UE 115-a may input the channel information matrix into an encoder 230 to compress the channel information matrix. Encoder 230 may be a component of UE 115-a. Encoder 230 may compress the channel information matrix into a smaller form (e.g., codeword 235) in one or more encoding operations”; ¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”; ¶0107, “The UE may input the channel information matrix to one or more of the encoder components of a neural network to compress the channel information matrix to a codeword”; ¶0109).

As to claim 12, Pezeshki discloses the apparatus of claim 7, wherein the at least one machine learning model comprises an encoder configured to perform spatial compression and frequency compression (¶0100, “UE 115-a may input the channel information matrix into an encoder 230 to compress the channel information matrix. Encoder 230 may be a component of UE 115-a. Encoder 230 may compress the channel information matrix into a smaller form (e.g., codeword 235) in one or more encoding operations”; ¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”; ¶0107, “The UE may input the channel information matrix to one or more of the encoder components of a neural network to compress the channel information matrix to a codeword”; ¶0109).

As to claim 13, Pezeshki discloses the apparatus of claim 12, wherein the encoder is configured to perform spatial compression and frequency compression for a first subband and spatial compression and frequency compression for a second subband (¶0094, “The UE 115 may report feedback that indicates precoding weights for one or more beam directions, and the feedback may correspond to a configured number of beams across a system bandwidth or one or more sub-bands”; ¶0100, “UE 115-a may input the channel information matrix into an encoder 230 to compress the channel information matrix. Encoder 230 may be a component of UE 115-a. Encoder 230 may compress the channel information matrix into a smaller form (e.g., codeword 235) in one or more encoding operations”; ¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”; ¶0107, “The UE may input the channel information matrix to one or more of the encoder components of a neural network to compress the channel information matrix to a codeword”; ¶0109).

As to claim 14, Pezeshki discloses the apparatus of claim 7, wherein the at least one machine learning model is configured to generate the representation of the channel information using spatial compression (¶0100, “UE 115-a may input the channel information matrix into an encoder 230 to compress the channel information matrix. Encoder 230 may be a component of UE 115-a. Encoder 230 may compress the channel information matrix into a smaller form (e.g., codeword 235) in one or more encoding operations”; ¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”; ¶0109).

As to claim 15, Pezeshki discloses the apparatus of claim 7, wherein the at least one machine learning model is configured to generate the representation of the channel information using frequency compression (¶0100, “UE 115-a may input the channel information matrix into an encoder 230 to compress the channel information matrix. Encoder 230 may be a component of UE 115-a. Encoder 230 may compress the channel information matrix into a smaller form (e.g., codeword 235) in one or more encoding operations”; ¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”; ¶0109).

As to claim 16, Pezeshki discloses the apparatus of claim 7, wherein the at least one machine learning model is configured to generate the representation of the channel information using spatial compression and frequency compression (¶0100, “UE 115-a may input the channel information matrix into an encoder 230 to compress the channel information matrix. Encoder 230 may be a component of UE 115-a. Encoder 230 may compress the channel information matrix into a smaller form (e.g., codeword 235) in one or more encoding operations”; ¶0105, “a machine learning autoencoder may use training data and trial and error with machine learning algorithms to develop efficient techniques for compressing information”; ¶0109).

As to claim 18, it is rejected for the same reasons set forth in claim 5 above.
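
Claims 8-16 layer the claimed compression: one spatial encoder per subband (claims 8-10), a further encoder compressing across subbands in frequency (claim 11), or a single encoder doing both (claims 12-16). A minimal sketch of that structure under the same caveats as above, with random linear maps as stand-ins for trained models and all dimensions chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N_SB, N_RX, N_TX = 2, 4, 8               # subbands and antenna dimensions
SPATIAL_DIM, FREQ_DIM = 12, 16           # per-subband and final code sizes

# One spatial encoder per subband (the first and second encoders, claims 8-10).
spatial_W = [rng.standard_normal((SPATIAL_DIM, N_RX * N_TX)) for _ in range(N_SB)]
# A third encoder compressing across the frequency dimension (claim 11).
freq_W = rng.standard_normal((FREQ_DIM, N_SB * SPATIAL_DIM))

def compress(H_per_subband):
    """Spatial compression per subband, then frequency compression across them."""
    spatial_codes = [W @ H.reshape(-1) for W, H in zip(spatial_W, H_per_subband)]
    return freq_W @ np.concatenate(spatial_codes)

H_sb = [rng.standard_normal((N_RX, N_TX)) for _ in range(N_SB)]
representation = compress(H_sb)
print(representation.shape)              # (16,): one code covering both subbands
```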
As to claim 19, Pezeshki discloses the apparatus of claim 17, wherein the channel information comprises a precoding matrix (¶0094, “The UE 115 may provide feedback for beam selection, which may be a precoding matrix indicator (PMI) or codebook-based feedback (e.g., a multi-panel type codebook, a linear combination type codebook, a port selection type codebook)”).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Qin et al. (US 2024/0275461), Kwon et al. (US 2023/0145844), YUM et al. (US 2021/0409174), and XI et al. (US 2024/0275440) disclose methods and apparatus for providing channel state information (CSI) feedback.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUNGWON CHANG whose telephone number is (571)272-3960. The examiner can normally be reached 9AM-5:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, GLENTON BURGESS, can be reached at (571)272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JUNGWON CHANG/
Primary Examiner, Art Unit 2454
September 6, 2025

Prosecution Timeline

Aug 30, 2023: Application Filed
Sep 06, 2025: Non-Final Rejection — §103
Mar 04, 2026: Applicant Interview (Telephonic)
Mar 04, 2026: Examiner Interview Summary
Mar 09, 2026: Response Filed
Apr 12, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592853: AUTOMATED DETERMINATION OF ERROR-CAUSING NETWORK PACKETS UTILIZING NETWORK PACKET REPLAY
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12587459: METHOD FOR DYNAMIC MULTIHOMING FOR RELIABLE DATA TRANSMISSION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12587498: METHOD AND COMMUNICATION DEVICE FOR PROCESSING DATA FOR TRANSMISSION FROM THE COMMUNICATION DEVICE TO A SECOND COMMUNICATION DEVICE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12581559: REPEATER ASSOCIATION FOR SIDELINK
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12561179: SYSTEMS AND METHODS CONFIGURED TO ENABLE AN OPERATING SYSTEM FOR CONNECTED COMPUTING THAT SUPPORTS USER USE OF SUITABLE TO USER PURPOSE RESOURCES SOURCED FROM ONE OR MORE RESOURCE ECOSPHERES
Granted Feb 24, 2026 (2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86% (99% with interview, +14.9% lift)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 815 resolved cases by this examiner. Grant probability derived from career allow rate.
