Prosecution Insights
Last updated: April 19, 2026
Application No. 18/534,434

MODEL UPDATES WITH USER EQUIPMENT LATENT QUERY

Non-Final OA: §102, §103
Filed: Dec 08, 2023
Examiner: OHRI, ROMANI
Art Unit: 2413
Tech Center: 2400 — Computer Networks
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85%, above average (378 granted / 445 resolved; +26.9% vs Tech Center average)
Interview Lift: +17.0% (resolved cases with vs. without an interview)
Typical Timeline: 2y 11m average prosecution; 32 applications currently pending
Career History: 477 total applications across all art units
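The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how such numbers are derived, noting that the page does not disclose the underlying with-interview and without-interview allowance counts, so the two rates used for the lift are hypothetical values chosen only to reproduce the displayed +17.0%:

```python
# Career allow rate: grants as a share of resolved cases (counts from this page).
granted, resolved = 378, 445
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 84.9%, displayed as 85%

# Interview lift: allowance rate among resolved cases with an interview minus
# the rate without one. These two rates are assumed for illustration only.
rate_with_interview, rate_without_interview = 0.99, 0.82
lift = rate_with_interview - rate_without_interview
print(f"Interview lift: +{lift:.1%}")  # +17.0%
```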

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§103: 55.9% (+15.9% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 445 resolved cases.

Office Action

Rejections: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-30 are currently pending.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 4-7, 11-13, 15-18, 23-24, 26 and 30 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Timo et al. (US 2025/0047346 A1).

Regarding claims 1 and 23, Timo discloses a user equipment (UE), comprising: one or more memories storing processor-executable code; and one or more processors coupled with the one or more memories and individually or collectively operable to execute the code to cause the UE to (Fig. 8 discloses processor 804, memory 802, receiving unit 810 and transmitting unit 830, paragraph 0246): transmit an indication of a first training dataset associated with a network-based auto-encoder (Fig. 5b, paragraph 0172-0175 disclose UE estimates the downlink channel using configured downlink reference signal(s), e.g., CSI-RS. For example, the UE estimates the downlink channel. The UE uses a trained AE (auto encoder) encoder to compress the estimated channel and the first communications node 521 has access to one or more trained NN-based AE-encoder models 521-A, 521-B for encoding the CSI.
The binary codework is reported/transmitted to the network over an uplink control channel and/or data channel), wherein the first training dataset comprises one or more channel metrics obtained by the UE, a precoding metric, or both (Fig. 5b, paragraph 0172-0175, Fig. 4b, paragraphs 0048-0054 disclose the UE reports CSI (e.g., channel quality index (CQI), precoding matrix indicator (PMI), rank indicator (RI)) to the wireless communication network over an uplink control channel and/or over a data channel); and receive, in response to transmitting the indication of the first training dataset, a second training dataset associated with the network-based auto-encoder (Fig. 5b, paragraph 0172-0175 discloses the second communications node 511 may comprise an AE-decoder 511-1. The second communications node 511 has access to one or more trained NN-based AE-decoder models 511-A, 511-B for decoding the CSI provided by the first communications node 521. Fig. 4b, paragraphs 0048-0054 disclose the network uses a trained AE decoder to reconstruct the estimated channel. The decompressed output of the AE decoder is used by the network in, for example, MIMO precoding, scheduling, and link adaption. The architecture of an AE, e.g., structure, number of layers, nodes per layer, activation function etc., may need to be tailored for each particular use case, e.g., for CSI reporting), wherein the second training dataset is for training a UE-based encoder that corresponds to the network-based auto-encoder (paragraphs 0174-0175 also 0177-0178, FIG. 6, the first communications node 521 may receive, from the second communications node 511, a decoder configuration comprising an indication to use an AE CSI reporting mode for which the first communications node 521 is configured to use the trained NN-based AE-encoder model 521-A compatible with the indicated NN-based AE-decoder model 511-A. In action 602 of FIG. 
6, the first communications node 521 receives, from the second communications node 511, an indication of an NN-based AE-decoder model 511-A out of the one or more trained NN-based AE-decoder models 511-A, 511-B).

Regarding claims 12 and 30, Timo discloses a network entity, comprising: one or more memories storing processor-executable code; and one or more processors coupled with the one or more memories and individually or collectively operable to execute the code to cause the network entity to (Fig. 9 discloses processor 904, memory 902, receiving unit 920 and transmitting unit 910, paragraph 0247): obtain, from a user equipment (UE), an indication of a first training dataset associated with a network-based auto-encoder (Fig. 5b, paragraph 0172-0175 disclose UE estimates the downlink channel (or important features thereof) using configured downlink reference signal(s), e.g., CSI-RS. For example, the UE estimates the downlink channel. The UE uses a trained AE (auto encoder) encoder to compress the estimated channel and the first communications node 521 has access to one or more trained NN-based AE-encoder models 521-A, 521-B for encoding the CSI. The binary codeword is reported/transmitted to the network over an uplink control channel and/or data channel), wherein the first training dataset comprises one or more channel metrics obtained by the UE, a precoding metric, or both (Fig. 5b, paragraph 0172-0175, Fig. 4b, paragraphs 0048-0054 disclose the UE reports CSI (e.g., channel quality index (CQI), precoding matrix indicator (PMI), rank indicator (RI)) to the wireless communication network over an uplink control channel and/or over a data channel); and provide for output, in response to obtaining the indication of the first training dataset, a second training dataset associated with the network-based auto-encoder (Fig. 5b, paragraph 0172-0175 discloses the second communications node 511 may comprise an AE-decoder 511-1.
The second communications node 511 has access to one or more trained NN-based AE-decoder models 511-A, 511-B for decoding the CSI provided by the first communications node 521. Fig. 4b, paragraphs 0048-0054 disclose the network uses a trained AE decoder to reconstruct the estimated channel. The decompressed output of the AE decoder is used by the network in, for example, MIMO precoding, scheduling, and link adaption. The architecture of an AE, e.g., structure, number of layers, nodes per layer, activation function etc., may need to be tailored for each particular use case, e.g., for CSI reporting), wherein the second training dataset is based at least in part on training the network-based auto-encoder using first training inputs obtained from the first training dataset (Paragraph 0113 discloses the network constructs a training dataset for each UE AE encoder by logging the UE's CSI report received over the air interface, e.g., the AE encoder output, together with the network's SRS-based estimate of the UL channel. The resulting dataset may then be used to train the network's AE decoder without having to know the UE's AE encoder since the network knows, from the dataset, both the input and the output of the encoder) and wherein the second training dataset is for training a UE-based encoder that corresponds to the network-based auto-encoder (paragraphs 0174-0175 also 0177-0178, FIG. 6, the first communications node 521 may receive, from the second communications node 511, a decoder configuration comprising an indication to use an AE CSI reporting mode for which the first communications node 521 is configured to use the trained NN-based AE-encoder model 521-A compatible with the indicated NN-based AE-decoder model 511-A. In action 602 of FIG. 6, the first communications node 521 receives, from the second communications node 511, an indication of an NN-based AE-decoder model 511-A out of the one or more trained NN-based AE-decoder models 511-A, 511-B). 
Regarding claims 2 and 24, Timo discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the UE to: train the UE-based encoder using the second training dataset (Paragraph 0187, in action 603 of FIG. 6, the first communications node 521 selects a trained NN-based AE-encoder model 521-A out of the one or more trained NN-based AE-encoder models 511-A, 521-B to use for the NN-based AE-encoder based on the received indication of the NN-based AE-decoder model 511-A such that the selected trained NN-based AE-encoder model 521-A is compatible with the indicated NN-based AE-decoder model 511-A).

Regarding claims 4 and 26, Timo discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the UE to: receive information associated with the network-based auto-encoder, wherein transmitting the indication of the first training dataset is based at least in part on the information (Paragraphs 0178-0179 disclose the first communications node 521 may receive, from the second communications node 511, a decoder configuration comprising an indication to use an AE CSI reporting mode for which the first communications node 521 is configured to use the trained NN-based AE-encoder model 521-A compatible with the indicated NN-based AE-decoder model 511-A. In action 602 of FIG. 6, the first communications node 521 receives, from the second communications node 511, an indication of an NN-based AE-decoder model 511-A out of the one or more trained NN-based AE-decoder models 511-A, 511-B).
Regarding claims 5 and 16, Timo discloses wherein the information associated with the network-based auto-encoder comprises an identifier associated with the network-based auto-encoder (Paragraphs 0200-0201 disclose the first communications node 521 may determine the encoder model to be used based on the NN decoder model identifier by selecting an encoder model which is compatible with the identified decoder model. In some embodiments the first communications node 521 may select a most optimal compatible encoder model out of a set of compatible encoder models. Further, step 0602, paragraph 0179 discloses receiving an indication of an NN-based AE-decoder model identifier from the second communications node).

Regarding claims 6 and 17, Timo discloses wherein the information associated with the network-based auto-encoder indicates a set of communication parameters associated with the one or more channel metrics obtained by the UE (Paragraphs 0181-0183 disclose the indication of the NN-based AE-decoder model 511-A may be received from the second communications node 511 with a CSI reporting configuration. The CSI reporting configuration may indicate one or more CSI feedback parameters associated with NN-based AE. The CSI configuration may indicate whether the CSI report shall use periodic PUCCH or aperiodic PUSCH to convey the CSI report to the second communications node 511).

Regarding claims 7 and 18, Timo discloses wherein the information associated with the network-based auto-encoder indicates at least one of a time period or a location in which the network-based auto-encoder is active (Paragraphs 0181-0183, 0274-0275 disclose the communication system 3300 further includes the UE 3330 already referred to. Its hardware 3335 may include a radio interface 3337 configured to set up and maintain a wireless connection 3370 with a base station serving a coverage area in which the UE 3330 is currently located).
Regarding claim 11, Timo discloses receiving an indication that the second training dataset is for training the UE-based encoder of the UE (Paragraph 0113 discloses the network constructs a training dataset for each UE AE encoder by logging the UE's CSI report received over the air interface, e.g., the AE encoder output, together with the network's SRS-based estimate of the UL channel. The resulting dataset may then be used to train the network's AE decoder without having to know the UE's AE encoder since the network knows, from the dataset, both the input and the output of the encoder).

Regarding claim 13, Timo discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: input the first training dataset into a network-based encoder of the network-based auto-encoder, wherein an output of the network-based encoder provides the second training dataset (Fig. 5b, paragraph 0172-0175 discloses the second communications node 511 may comprise an AE-decoder 511-1. The second communications node 511 has access to one or more trained NN-based AE-decoder models 511-A, 511-B for decoding the CSI provided by the first communications node 521. Fig. 4b, paragraphs 0048-0054 disclose the network uses a trained AE decoder to reconstruct the estimated channel. The decompressed output of the AE decoder is used by the network in, for example, MIMO precoding, scheduling, and link adaption. The architecture of an AE, e.g., structure, number of layers, nodes per layer, activation function etc., may need to be tailored for each particular use case, e.g., for CSI reporting).
Regarding claim 15, Timo discloses wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: provide for output information associated with the network-based auto-encoder, wherein obtaining the indication of the first training dataset is based at least in part on the information (Paragraphs 0178-0179 disclose the first communications node 521 may receive, from the second communications node 511, a decoder configuration comprising an indication to use an AE CSI reporting mode for which the first communications node 521 is configured to use the trained NN-based AE-encoder model 521-A compatible with the indicated NN-based AE-decoder model 511-A. In action 602 of FIG. 6, the first communications node 521 receives, from the second communications node 511, an indication of an NN-based AE-decoder model 511-A out of the one or more trained NN-based AE-decoder models 511-A, 511-B).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 3, 8-10, 14, 19-22, 25 and 27-29 are rejected under 35 U.S.C. 103 as being unpatentable over Timo et al. (US 2025/0047346 A1) in view of Kyung (US 2024/0380443 A1).

Regarding claims 3 and 25, Timo does not explicitly disclose transmit the second training dataset to a UE server associated with the UE; and receive an updated encoder model for the UE-based encoder from the UE server, wherein the updated encoder model is based at least in part on the second training dataset. In an analogous art, Kyung discloses transmit the second training dataset to a UE server associated with the UE; and receive an updated encoder model for the UE-based encoder from the UE server, wherein the updated encoder model is based at least in part on the second training dataset (Figs. 3C-3E paragraphs 0081-0104 disclose the cloud server 305 can perform the online training by training the entire encoder-decoder model pair based on the collected channel data to obtain an updated model for the encoder model 303 and an updated model for the decoder model 304. For example, through the online training, the cloud server 305 can obtain updated values for at least partial weights of the encoder-decoder model pair. [0085] At step S334, after the entire encoder-decoder model pair has been trained and updated, the cloud server 305 can send updated models for the encoder 303 and the decoder 304 to the BS 302).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Kyung to the system of Timo so that the updated model for the decoder model is sent from the UE to the BS and the encoder model is updated at the UE based on the updated model for the encoder model (Paragraph 0005).

Regarding claims 8 and 27, Timo does not explicitly disclose cause the UE to transmit the first training dataset as a non-compressed dataset. In an analogous art, Kyung discloses cause the UE to transmit the first training dataset as a non-compressed dataset (Paragraphs 0056-0061 disclose methods and embodiments to feedback a compressed version of raw CSI to a transmitter. Based on the compressed CSI, the transmitter is able to optimally compute a precoder for precoding a transmitting signal, and also optimally decide on other transmission parameters such as RI, MCS, and the like. Further, a compression ratio used in compressing the raw CSI can be decided dynamically after the raw CSI has been estimated). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Kyung to the system of Timo so that the updated model for the decoder model is sent from the UE to the BS and the encoder model is updated at the UE based on the updated model for the encoder model (Paragraph 0005).

Regarding claim 19, Timo does not explicitly disclose the first training dataset as a non-compressed dataset. In an analogous art, Kyung discloses the first training dataset as a non-compressed dataset (Paragraphs 0056-0061, 0062-0066 disclose methods and embodiments to feedback a compressed version of raw CSI to a transmitter.
Based on the compressed CSI, the transmitter is able to optimally compute a precoder for precoding a transmitting signal, and also optimally decide on other transmission parameters such as RI, MCS, and the like. Further, a compression ratio used in compressing the raw CSI can be decided dynamically after the raw CSI has been estimated). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Kyung to the system of Timo so that the updated model for the decoder model is sent from the UE to the BS and the encoder model is updated at the UE based on the updated model for the encoder model (Paragraph 0005).

Regarding claims 9 and 28, Timo does not explicitly disclose apply a compression algorithm to the first training dataset prior to transmission. In an analogous art, Kyung discloses apply a compression algorithm to the first training dataset prior to transmission (Paragraphs 0056-0061, 0062-0066 disclose methods and embodiments to feedback a compressed version of raw CSI to a transmitter. Based on the compressed CSI, the transmitter is able to optimally compute a precoder for precoding a transmitting signal, and also optimally decide on other transmission parameters such as RI, MCS, and the like. Further, a compression ratio used in compressing the raw CSI can be decided dynamically after the raw CSI has been estimated). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Kyung to the system of Timo so that the updated model for the decoder model is sent from the UE to the BS and the encoder model is updated at the UE based on the updated model for the encoder model (Paragraph 0005).
Regarding claim 20, Timo does not explicitly disclose wherein the first training dataset comprises a compressed dataset based at least in part on a compression algorithm. In an analogous art, Kyung discloses wherein the first training dataset comprises a compressed dataset based at least in part on a compression algorithm (Paragraphs 0056-0061, 0062-0066 disclose methods and embodiments to feedback a compressed version of raw CSI to a transmitter. Based on the compressed CSI, the transmitter is able to optimally compute a precoder for precoding a transmitting signal, and also optimally decide on other transmission parameters such as RI, MCS, and the like. Further, a compression ratio used in compressing the raw CSI can be decided dynamically after the raw CSI has been estimated). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Kyung to the system of Timo so that the updated model for the decoder model is sent from the UE to the BS and the encoder model is updated at the UE based on the updated model for the encoder model (Paragraph 0005).

Regarding claims 10, 21 and 29, Timo does not explicitly disclose wherein the compression algorithm comprises at least one of a machine-learning-based algorithm or a non-machine-learning-based algorithm. In an analogous art, Kyung discloses wherein the compression algorithm comprises at least one of a machine-learning-based algorithm or a non-machine-learning-based algorithm (Paragraphs 0062-0066 disclose there are various CSI compression algorithms, for example compressive sensing-based CSI compression and deep learning (or machine learning) based CSI compression. Compared with the compressive sensing-based CSI compression, the deep learning-based solution can provide a better reconstruction performance, for example, in terms of mean squared error, at a base station.
In an embodiment, an encoder can use a deep neural network at a UE to compress original CSI and a decoder can use a deep neural network at a base station to decompress the compressed CSI and reconstruct the CSI). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Kyung to the system of Timo so that the updated model for the decoder model is sent from the UE to the BS and the encoder model is updated at the UE based on the updated model for the encoder model (Paragraph 0005).

Regarding claim 14, Timo does not explicitly disclose wherein the one or more processors are individually or collectively further operable to execute the code to cause the network entity to: provide the first training dataset for output to a network server associated with the network entity; and obtain the second training dataset from the network server in response to providing the first training dataset for output. In an analogous art, Kyung discloses provide the first training dataset for output to a network server associated with the network entity; and obtain the second training dataset from the network server in response to providing the first training dataset for output (Figs. 3C-3E paragraphs 0081-0104 disclose the cloud server 305 can perform the online training by training the entire encoder-decoder model pair based on the collected channel data to obtain an updated model for the encoder model 303 and an updated model for the decoder model 304. For example, through the online training, the cloud server 305 can obtain updated values for at least partial weights of the encoder-decoder model pair. [0085] At step S334, after the entire encoder-decoder model pair has been trained and updated, the cloud server 305 can send updated models for the encoder 303 and the decoder 304 to the BS 302).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Kyung to the system of Timo so that the updated model for the decoder model is sent from the UE to the BS and the encoder model is updated at the UE based on the updated model for the encoder model (Paragraph 0005).

Regarding claim 22, Timo discloses provide for output an indication that the second training dataset is for training the UE-based encoder of the UE (paragraphs 0174-0175 also 0177-0178, FIG. 6, the first communications node 521 may receive, from the second communications node 511, a decoder configuration comprising an indication to use an AE CSI reporting mode for which the first communications node 521 is configured to use the trained NN-based AE-encoder model 521-A compatible with the indicated NN-based AE-decoder model 511-A. In action 602 of FIG. 6, the first communications node 521 receives, from the second communications node 511, an indication of an NN-based AE-decoder model 511-A out of the one or more trained NN-based AE-decoder models 511-A, 511-B).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Ahmed et al. (US 2024/0364393 A1) discloses a method that may include a UE transmitting CSI-RS based CSI feedback generated according to a configured codebook, generating DMRS based precoding matrix indicator feedback according to the configured codebook, and transmitting the DMRS based precoding matrix indicator feedback to a network entity.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROMANI OHRI whose telephone number is (571) 272-5420. The examiner can normally be reached 8:00am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, UN C CHO, can be reached at 571-272-7919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROMANI OHRI/
Primary Examiner, Art Unit 2413
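The anticipation rejection turns on the claimed two-way exchange: the UE transmits an indication of a first training dataset (channel metrics and/or a precoding metric), and the network responds with a second training dataset for training the UE-based encoder. A minimal pure-Python sketch of that exchange follows; it is illustrative only, not code from the application or the Timo reference, and every name in it (FirstTrainingDataset, network_entity_respond, the normalization stub) is hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FirstTrainingDataset:
    """Claim 1: one or more channel metrics obtained by the UE, a precoding metric, or both."""
    channel_metrics: List[float]
    precoding_metric: float

@dataclass
class SecondTrainingDataset:
    """Claim 1: dataset returned by the network for training the UE-based encoder."""
    encoder_training_targets: List[float]

def network_entity_respond(indication: FirstTrainingDataset) -> SecondTrainingDataset:
    # Claim 12 ties the second dataset to training the network-based auto-encoder
    # on inputs from the first dataset; here that training step is stubbed as a
    # peak normalization purely so the exchange is runnable end to end.
    peak = max(abs(m) for m in indication.channel_metrics) or 1.0
    targets = [m / peak for m in indication.channel_metrics]
    return SecondTrainingDataset(encoder_training_targets=targets)

# UE side (claims 1 and 23): transmit the indication, receive the second dataset.
first = FirstTrainingDataset(channel_metrics=[0.8, -1.6, 0.4], precoding_metric=0.9)
second = network_entity_respond(first)
print(second.encoder_training_targets)  # [0.5, -1.0, 0.25]
```

The sketch makes the claim structure concrete: the only data crossing the air interface are the two dataset objects, which is the round trip the examiner maps onto Timo's CSI-report/decoder-configuration exchange.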

Prosecution Timeline

Dec 08, 2023: Application Filed
Mar 18, 2026: Non-Final Rejection, §102/§103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604331: METHOD AND APPARATUS FOR RESOURCE RESTRICTION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12574802: RECONFIGURABLE INTELLIGENT SURFACE (RIS) SCHEDULING
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12568505: METHODS AND SYSTEMS FOR DETERMINING DOWNLINK CONTROL INFORMATION IN WIRELESS NETWORKS
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12563424: METHOD AND DEVICE FOR DISCONTINUOUS WIRELESS COMMUNICATION
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12557133: SIDELINK RESOURCE INDICATIONS AND USAGE
Granted Feb 17, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 99% (+17.0%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 445 resolved cases by this examiner. Grant probability derived from career allow rate.
