Prosecution Insights
Last updated: April 19, 2026
Application No. 17/959,291

SYSTEMS, METHODS, AND APPARATUS FOR ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR A PHYSICAL LAYER OF COMMUNICATION SYSTEM

Status: Non-Final OA (§103)
Filed: Oct 03, 2022
Examiner: CHANG, JUNGWON
Art Unit: 2454
Tech Center: 2400 (Computer Networks)
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86%, above average (702 granted / 815 resolved; +28.1% vs TC avg)
Interview Lift: +14.9%, moderate (across resolved cases with interview)
Avg Prosecution: 3y 1m typical timeline (31 applications currently pending)
Total Applications: 846 career history (across all art units)
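The headline figures above are simple ratios over the examiner's resolved docket. As a hedged sketch (the function and variable names are illustrative, not part of any real analytics API), the allow rate and the implied Tech Center baseline can be reproduced from the raw counts:

```python
# Illustrative sketch only: reproduces the panel's headline figures from
# the raw counts shown above. Names here are assumptions, not a real API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction of resolved applications."""
    if resolved <= 0:
        raise ValueError("need at least one resolved case")
    return granted / resolved

rate = allow_rate(702, 815)                # counts from the panel above
print(f"Career allow rate: {rate:.1%}")    # ~86.1%, displayed as 86%

# "+28.1% vs TC avg" reads as a percentage-point difference, which
# implies a Tech Center average of roughly 58%:
tc_avg = rate - 0.281
print(f"Implied TC average: {tc_avg:.1%}")
```
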

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 11.3% (-28.7% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)

Tech Center averages are estimates; based on career data from 815 resolved cases.
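The panel does not define the exact metric behind each statute-specific percentage, but the displayed deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline for every statute, suggesting a single TC-wide estimate. A hedged consistency check (the dict layout is an assumption, not a real data format):

```python
# Hedged check on the statute-specific figures above.
# (examiner_rate_pct, delta_vs_tc_pct) per statute, as displayed:
stats = {
    "101": (10.0, -30.0),
    "103": (53.3, +13.3),
    "102": (11.3, -28.7),
    "112": (8.8, -31.2),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta
    print(f"§{statute}: implied TC average = {implied_tc_avg:.1f}%")
# Every implied average works out to 40.0%, i.e. one TC-wide baseline.
```
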

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/30/2026 has been entered. This Office action is in response to the RCE filed on 01/30/2026. Claims 2-20 have been canceled, and new claims 21-39 have been added. Claims 1 and 21-39 are presented for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/10/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 21-25 and 31-39 are rejected under 35 U.S.C. 103 as being unpatentable over Kwon et al. (US 2023/0145844 A1), in view of Echigo et al. (US 2024/0187186 A1).

As to claims 1 and 31, Kwon discloses the invention as claimed, including an apparatus (Figs. 2-3A) comprising: a receiver configured to receive a signal using a channel (Fig. 10, S1000; Fig. 11, S1100; ¶0016, “a base station may comprise: a transceiver configured to transmit and receive signals”; ¶0169, “receiving the reference signal in step S1000 may measure a channel using the received reference signal to obtain channel information (S1002)”); a transmitter configured to transmit a compressed representation of channel information relating to the channel (Fig. 10, S1004-S1006; Fig. 11, S1104; ¶0025, “information by using the updated ML model; and transmitting the generated compressed bits to the base station”; ¶0188, “generate compressed bits by encoding the measured channel information. Then, the terminal 1202 may transmit the compressed bits to the base station 1201 (S1204)”); and at least one processor (Fig. 2, 210) configured to: determine channel information indicating a condition of the channel based on the signal (¶0013, “channel information measured based on the reference signal”; ¶0077, “receives a specific reference signal transmitted from a transmitting end, measures channel information”; ¶0082, “The channel measurement unit 311 of the transmitter 310 may estimate channel information by measuring the received reference signal”); and generate the compressed representation of the channel information based on the condition of the channel using a machine learning model (¶0014, “generating compressed bits based on the channel information through an encoder of the determined ML model; decoding the generated compressed bits by using a decoder of the determined ML model”; ¶0023; ¶0025, “generating compressed bits corresponding to the measured channel information by using the updated ML model”; ¶0114, “The AI-based encoder 611 included in the terminal 610 described above may compress and transfer measured channel information using a specific ML mode”; ¶0171, “When the ML model uses a supervised learning scheme, the base station 1001 may receive the measured channel information and the compressed bits through steps S1004 and S1006”; ¶0201, “The measured channel information may be compressed through an encoder trained online using one of the online learning methods of the ML model described above”).
Although Kwon discloses training the machine learning model (¶0111, “using the AI-based encoder 611 trained in advance through machine learning”; ¶0112, “the AI-based decoder 621 trained through machine learning”; ¶0113, “the AI-based encoder 611 and the AI-based decoder 621 have an autoencoder structure in the process of training the machine learning model”; ¶0199, “a training process for determining parameters of the ML model to be applied to the encoder and the decoder used in the terminal and the base station”), and it is noted that an AI-based encoder, decoder, or autoencoder can be interpreted as a reference model, Kwon does not specifically disclose train the machine learning model using a reference model. However, Echigo discloses train the machine learning model using a reference model (Fig. 3B; ¶0070, “the training is implemented so that, when measurement results of the CSI-RS sampled from the training CSI-RS of FIG. 3A are input to a pre-trained ML model”; ¶0148, “perform training/adjustment/update (for example, fine-tuning)/transfer learning of the ML model, based on information of the trained or pre-trained ML model being received”; ¶0149; ¶0152; ¶0233, “a channel state information reference signal (CSI-RS) based on first configuration for training of a machine learning model”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kwon to include train the machine learning model using a reference model as taught by Echigo because it would enhance the accuracy of the ML model and significantly reduce training time (Echigo; ¶0168). 
As to claim 21, Kwon discloses the apparatus of claim 1, wherein the reference model is determined by a specification (It is noted that an AI-based encoder, decoder, or autoencoder can be interpreted as a reference model; ¶0111, “obtain compressed bits using the AI-based encoder 611 trained in advance through machine learning in order to transmit channel information obtained by measuring the reference signal… the compressed bits may be information for the Type II codebook scheme or may be a compressed form of the channel matrix itself”; ¶0112, “the information for the Type II codebook scheme or the compressed form of the channel matrix itself. In addition, the base station 620 may include the AI-based decoder 621 trained through machine learning. The AI-based decoder 611 may obtain the channel information by decompressing the compressed information from the received compressed bits”; ¶0113, “the AI-based encoder 611 and the AI-based decoder 621 have an autoencoder structure in the process of training the machine learning model”).

As to claim 22, Kwon discloses the apparatus of claim 1, wherein the at least one processor is configured to receive the reference model using signaling (It is noted that an AI-based encoder, decoder, or autoencoder can be interpreted as a reference model; ¶0111, “obtain compressed bits using the AI-based encoder 611 trained in advance through machine learning in order to transmit channel information obtained by measuring the reference signal”).

As to claim 23, Kwon discloses the apparatus of claim 1, wherein the signaling comprises radio resource (RRC) control signaling (¶0101, “a radio resource control (RRC)”).

As to claim 24, Kwon discloses the apparatus of claim 1, wherein: the machine learning model comprises a generation model (Fig. 6, 611); and the reference model comprises a reconstruction model (Fig. 6, 621) (¶0010, “The ML model may be a model in which a decoder of the base station and an encoder of the terminal are connected”; ¶0023, “an encoder of the determined ML model; decode the generated compressed bits by using a decoder of the determined ML model”).

As to claim 25, Kwon discloses the apparatus of claim 24, wherein: the generation model comprises an encoder (Fig. 6, 611); and the reconstruction model comprises a decoder (Fig. 6, 621) (¶0010, “The ML model may be a model in which a decoder of the base station and an encoder of the terminal are connected”; ¶0023, “an encoder of the determined ML model; decode the generated compressed bits by using a decoder of the determined ML model”).

As to claim 32, Kwon discloses the apparatus of claim 31, wherein the at least one processor is configured to select the machine learning model from a first one of the at least one reference model and a second one of the at least one reference model (Figs. 2-3A; ¶0136, “an encoder of a first terminal uses a first ML model and an encoder of a second terminal uses a second ML model…a first decoder corresponding to the first ML model used by the encoder of the first terminal…a second decoder corresponding to the second ML model used by the encoder of the second terminal”).

As to claim 33, Kwon discloses the apparatus of claim 32, wherein the machine learning model is a first machine learning model, and the at least one processor is configured to receive, using the receiver, information about a second machine learning model corresponding to the first one of the at least one reference model (Figs. 2-3A; ¶0136, “an encoder of a first terminal uses a first ML model and an encoder of a second terminal uses a second ML model…a first decoder corresponding to the first ML model used by the encoder of the first terminal…a second decoder corresponding to the second ML model used by the encoder of the second terminal”).
As to claim 34, Kwon discloses the apparatus of claim 33, wherein the at least one processor is configured to select the first one of the at least one reference model based on the information about the second machine learning model (Figs. 2-3A; ¶0136, “an encoder of a first terminal uses a first ML model and an encoder of a second terminal uses a second ML model…a first decoder corresponding to the first ML model used by the encoder of the first terminal…a second decoder corresponding to the second ML model used by the encoder of the second terminal”).

As to claim 35, Kwon discloses the apparatus of claim 33, wherein the at least one processor is configured to select the second one of the at least one reference model based on the information about the second machine learning model (Figs. 2-3A; ¶0136, “an encoder of a first terminal uses a first ML model and an encoder of a second terminal uses a second ML model…a first decoder corresponding to the first ML model used by the encoder of the first terminal…a second decoder corresponding to the second ML model used by the encoder of the second terminal”).

As to claim 36, Kwon discloses the apparatus of claim 33, wherein: the first machine learning model comprises a reconstruction model; and the second machine learning model comprises a generation model (¶0010, “The ML model may be a model in which a decoder of the base station and an encoder of the terminal are connected”; ¶0023, “an encoder of the determined ML model; decode the generated compressed bits by using a decoder of the determined ML model”).

As to claim 37, Kwon discloses the apparatus of claim 36, wherein: the reconstruction model comprises a decoder; and the generation model comprises an encoder (¶0010, “The ML model may be a model in which a decoder of the base station and an encoder of the terminal are connected”; ¶0023, “an encoder of the determined ML model; decode the generated compressed bits by using a decoder of the determined ML model”).
As to claim 38, Kwon discloses the apparatus of claim 31, wherein the one of at least one reference model is determined by a specification (It is noted that an AI-based encoder, decoder, or autoencoder can be interpreted as a reference model; ¶0111, “obtain compressed bits using the AI-based encoder 611 trained in advance through machine learning in order to transmit channel information obtained by measuring the reference signal… the compressed bits may be information for the Type II codebook scheme or may be a compressed form of the channel matrix itself”; ¶0112, “the information for the Type II codebook scheme or the compressed form of the channel matrix itself. In addition, the base station 620 may include the AI-based decoder 621 trained through machine learning. The AI-based decoder 611 may obtain the channel information by decompressing the compressed information from the received compressed bits”; ¶0113, “the AI-based encoder 611 and the AI-based decoder 621 have an autoencoder structure in the process of training the machine learning model”).

As to claim 39, Kwon discloses the apparatus of claim 31, wherein the at least one processor is configured to receive the one of at least one reference model using signaling (¶0013, “transmitting a reference signal to the terminal; receiving, from the terminal, channel information measured based on the reference signal; receiving compressed bits for the channel information; decoding the compressed bits to obtain channel information”; ¶0168, “transmit a reference signal to the terminal 1002 (S1000). In this case, the reference signal used may be a dedicated reference signal for parameter update of the ML model, or various types of reference signals used in the mobile communication system”).

Claims 26-30 are rejected under 35 U.S.C. 103 as being unpatentable over Kwon et al. (US 2023/0145844 A1), in view of YOO et al. (US 2021/0273707 A1).

As to claim 26, Kwon discloses an apparatus (Figs. 2-3A) comprising: a receiver configured to receive a signal using a channel (Fig. 10, S1000; Fig. 11, S1100; ¶0016, “a base station may comprise: a transceiver configured to transmit and receive signals”; ¶0169, “receiving the reference signal in step S1000 may measure a channel using the received reference signal to obtain channel information (S1002)”); a transmitter configured to transmit a compressed representation of channel information relating to the channel (Fig. 10, S1004-S1006; Fig. 11, S1104; ¶0025, “information by using the updated ML model; and transmitting the generated compressed bits to the base station”; ¶0188, “generate compressed bits by encoding the measured channel information. Then, the terminal 1202 may transmit the compressed bits to the base station 1201 (S1204)”); and at least one processor (Fig. 2, 210) configured to: determine channel information indicating a condition of the channel based on the signal (¶0013, “channel information measured based on the reference signal”; ¶0077, “receives a specific reference signal transmitted from a transmitting end, measures channel information”; ¶0082, “The channel measurement unit 311 of the transmitter 310 may estimate channel information by measuring the received reference signal”); and generate the compressed representation of the channel information based on the condition of the channel using the selected (i.e., determined, specific) machine learning model (¶0014, “generating compressed bits based on the channel information through an encoder of the determined ML model; decoding the generated compressed bits by using a decoder of the determined ML model”; ¶0023; ¶0025, “generating compressed bits corresponding to the measured channel information by using the updated ML model”; ¶0114, “The AI-based encoder 611 included in the terminal 610 described above may compress and transfer measured channel information using a specific ML mode”; ¶0171, “When the ML model uses a supervised learning scheme, the base station 1001 may receive the measured channel information and the compressed bits through steps S1004 and S1006”; ¶0201, “The measured channel information may be compressed through an encoder trained online using one of the online learning methods of the ML model described above”); select the first machine learning model as a selected machine learning model (¶0007, “determining one of machine learning (ML) models for receiving channel information for a channel to communicate with a terminal based on capability information of the terminal; providing configuration information of the determined ML model to the terminal”; ¶0016, “determine one of ML models for receiving channel information for a channel to communicate with a terminal based on capability information of the terminal; provide configuration information of the determined ML model to the terminal through the transceiver”; ¶0136, “an encoder of a first terminal uses a first ML model and an encoder of a second terminal uses a second ML model”; ¶0148, “when the first encoder 811 of the first terminal 810 uses a first ML model, the second encoder 821 of the second terminal 820 may use a second ML model different from the first ML model”).
Although Kwon discloses training the machine learning model (¶0111, “using the AI-based encoder 611 trained in advance through machine learning”; ¶0112, “the AI-based decoder 621 trained through machine learning”; ¶0113, “the AI-based encoder 611 and the AI-based decoder 621 have an autoencoder structure in the process of training the machine learning model”; ¶0199, “a training process for determining parameters of the ML model to be applied to the encoder and the decoder used in the terminal and the base station”), and it is noted that an AI-based encoder, decoder, or autoencoder can be interpreted as a reference model, Kwon does not specifically disclose train a first machine learning model using a first reference model; train a second machine learning model using a second reference model. However, YOO discloses train a first machine learning model using a first reference model (i.e., encoder); train a second machine learning model using a second reference model (i.e., decoder) (¶0005, “training the neural network model based at least in part on encoding the CSI instance into encoded CSI, decoding the encoded CSI into decoded CSI”; ¶0063, “use encoder weights from a trained neural network model, to encode CSI into a more compact representation of CSI that is accurate. As a result of using encoder weights of a trained neural network model for a CSI encoder and using decoder weights of the trained neural network model for a CSI decoder”; ¶0066, “a training of a neural network model associated with a CSI encoder and a CSI decoder”; ¶0068, “train a neural network model to determine the encoder and decoder weights. The device may train the neural network model by encoding a CSI instance into encoded CSI with a CSI encoder, decoding the encoded CSI into decoded CSI with a CSI decoder”; ¶0087, “train a neural network model using separately encoded H and N”; ¶0092, “one or more trained neural network models for CSI, including for CSI encoders or CSI decoders. There may be different BS configurations, different neural network structures”; ¶0112, “training the neural network model includes training the neural network model based at least in part on a target size of the encoded CSI”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kwon to include train a first machine learning model using a first reference model; train a second machine learning model using a second reference model as taught by YOO because it would provide more accurate Channel State Information (CSI) estimation at the base station, thereby improving link and system level performance (YOO; ¶0059; ¶0063-0064; ¶0083).

As to claim 27, Kwon discloses the apparatus of claim 26, wherein the at least one processor is configured to indicate the selected machine learning model using the transmitter (Fig. 3A; ¶0016, “at least one processor, wherein the at least one processor may be executed to: determine one of ML models for receiving channel information for a channel to communicate with a terminal based on capability information of the terminal; provide configuration information of the determined ML model to the terminal through the transceiver”; ¶0021, “the at least one processor may be further executed to: transmit an online training request for the determined ML model to the terminal through the transceiver”).

As to claim 28, Kwon discloses the apparatus of claim 26, wherein the at least one processor is configured to indicate a performance of the selected machine learning model using the transmitter (Fig. 3A; ¶0016, “at least one processor, wherein the at least one processor may be executed to: determine one of ML models for receiving channel information for a channel to communicate with a terminal based on capability information of the terminal; provide configuration information of the determined ML model to the terminal through the transceiver”; ¶0021, “the at least one processor may be further executed to: transmit an online training request for the determined ML model to the terminal through the transceiver”).

As to claim 29, Kwon discloses the apparatus of claim 26, wherein at least one of the first reference model and the second reference model is determined by a specification (It is noted that an AI-based encoder, decoder, or autoencoder can be interpreted as a reference model; ¶0111, “obtain compressed bits using the AI-based encoder 611 trained in advance through machine learning in order to transmit channel information obtained by measuring the reference signal… the compressed bits may be information for the Type II codebook scheme or may be a compressed form of the channel matrix itself”; ¶0112, “the information for the Type II codebook scheme or the compressed form of the channel matrix itself. In addition, the base station 620 may include the AI-based decoder 621 trained through machine learning. The AI-based decoder 611 may obtain the channel information by decompressing the compressed information from the received compressed bits”; ¶0113, “the AI-based encoder 611 and the AI-based decoder 621 have an autoencoder structure in the process of training the machine learning model”).
As to claim 30, Kwon discloses the apparatus of claim 26, wherein the at least one processor is configured to receive at least one of the first reference model and the second reference model using signaling (¶0013, “transmitting a reference signal to the terminal; receiving, from the terminal, channel information measured based on the reference signal; receiving compressed bits for the channel information; decoding the compressed bits to obtain channel information”; ¶0168, “transmit a reference signal to the terminal 1002 (S1000). In this case, the reference signal used may be a dedicated reference signal for parameter update of the ML model, or various types of reference signals used in the mobile communication system”).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. CHI et al. (US 2024/0356591 A1), CHEN et al. (US 2024/0275456 A1), Mermoud et al. (US 2021/0344745 A1), O’Shea (US 2018/0314985 A1), and XUE et al. (US 2021/0376895 A1) disclose methods and apparatus for qualifying machine learning-based CSI prediction.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUNGWON CHANG whose telephone number is (571) 272-3960. The examiner can normally be reached 9AM-5:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, GLENTON BURGESS, can be reached at (571) 272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JUNGWON CHANG/
Primary Examiner, Art Unit 2454
February 28, 2026

Prosecution Timeline

Oct 03, 2022 - Application Filed
Apr 05, 2025 - Non-Final Rejection (§103)
Jul 07, 2025 - Examiner Interview Summary
Jul 07, 2025 - Applicant Interview (Telephonic)
Jul 09, 2025 - Response Filed
Jul 30, 2025 - Final Rejection (§103)
Jan 30, 2026 - Request for Continued Examination
Feb 08, 2026 - Response after Non-Final Action
Feb 28, 2026 - Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592853: AUTOMATED DETERMINATION OF ERROR-CAUSING NETWORK PACKETS UTILIZING NETWORK PACKET REPLAY
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12587459: METHOD FOR DYNAMIC MULTIHOMING FOR RELIABLE DATA TRANSMISSION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12587498: METHOD AND COMMUNICATION DEVICE FOR PROCESSING DATA FOR TRANSMISSION FROM THE COMMUNICATION DEVICE TO A SECOND COMMUNICATION DEVICE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12581559: REPEATER ASSOCIATION FOR SIDELINK
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12561179: SYSTEMS AND METHODS CONFIGURED TO ENABLE AN OPERATING SYSTEM FOR CONNECTED COMPUTING THAT SUPPORTS USER USE OF SUITABLE TO USER PURPOSE RESOURCES SOURCED FROM ONE OR MORE RESOURCE ECOSPHERES
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86%
With Interview: 99% (+14.9% lift)
Median Time to Grant: 3y 1m
PTA Risk: High

Based on 815 resolved cases by this examiner. Grant probability derived from career allow rate.
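The panel does not state how the +14.9% interview lift combines with the 86% base probability. Treating it as additive percentage points would exceed 100% (86 + 14.9 = 100.9), so a multiplicative lift is the more plausible reading, and that interpretation reproduces the displayed 99%. A sketch under that assumption only, not a documented formula:

```python
# Assumption: interview lift is multiplicative, not additive.
base = 0.86    # career-derived grant probability
lift = 0.149   # interview lift from the examiner panel

with_interview = min(base * (1 + lift), 1.0)  # cap at 100%
print(f"With interview: {with_interview:.0%}")
```
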
