Prosecution Insights
Last updated: April 19, 2026
Application No. 17/658,977

METHOD AND APPARATUS FOR SUPPORT OF MACHINE LEARNING OR ARTIFICIAL INTELLIGENCE TECHNIQUES FOR CSI FEEDBACK IN FDD MIMO SYSTEMS

Final Rejection (§103, §112)
Filed: Apr 12, 2022
Examiner: CERLANEK, ADAM JOEL
Art Unit: 2478
Tech Center: 2400 (Computer Networks)
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Final)

Grant Probability: 70% (Favorable)
OA Rounds: 5-6
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (23 granted / 33 resolved), +11.7% vs TC avg, above average
Interview Lift: +44.6% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 4m average prosecution; 28 applications currently pending
Career History: 61 total applications across all art units
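The card figures above can be checked directly from the raw counts. A minimal sketch, with the assumption that the +11.7% delta is measured in percentage points over the Tech Center average:

```python
# Reproduce the examiner summary statistics from the raw counts
# shown on the card: 23 allowances out of 33 resolved cases.
granted = 23
resolved = 33

career_allow_rate = granted / resolved          # 0.6969..., displayed as 70%
implied_tc_average = career_allow_rate - 0.117  # TC average backed out of the +11.7% delta

print(f"Career allow rate: {career_allow_rate:.1%}")    # 69.7%
print(f"Implied TC average: {implied_tc_average:.1%}")  # ~58.0%
```

The +45% "interview lift" figure elsewhere on the card appears to be the same +44.6% value rounded; the underlying with/without-interview allowance rates are not given, so they cannot be recomputed here.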

Statute-Specific Performance

§101: 1.8% (-38.2% vs TC avg)
§103: 49.5% (+9.5% vs TC avg)
§102: 35.2% (-4.8% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 33 resolved cases.

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Remarks

This Office Action is considered to be fully responsive to the communications filed on 12/08/2025. Claims 1-20 are currently pending in this application.

Response to Arguments

Applicant's arguments (see Remarks, page 15, filed 12/08/2025) with respect to the rejections of claims 15-20 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The amendments made to the claims overcome the previous rejections, which have therefore been withdrawn.

Applicant's arguments (see Remarks, pages 15-22, filed 12/08/2025) with respect to the rejections of claims 1-20 under 35 U.S.C. 102 and 35 U.S.C. 103 have been fully considered but are moot in view of the new grounds of rejection, necessitated by amendment, made under 35 U.S.C. 103 in view of Bai and Chavva for claims 1-4, 6, 8-11, 13, 15-18, and 20, and in view of Bai, Chavva, and Cheng for claims 5, 7, 12, 14, and 19.

Applicant has amended independent claim 1 to specify that "when the one or more indications enable ML-assisted CSI prediction, the ML-assisted CSI configurations include a timing offset for future CSI prediction and an indication of an ML model used for the ML-assisted CSI prediction, and when the one or more indications enable ML-assisted CSI reporting, the ML-assisted CSI configurations include a configuration for a CSI report quantity, artificial intelligence channel feature information (AI-CFI), an indication of an ML model for the ML-assisted CSI reporting", where independent claims 8 and 15 are amended in a similar way to include analogous subject matter. Applicant argues on page 20 of the Remarks that Bai "cannot be reasonably interpreted to disclose that 'the ML-assisted CSI configurations include a timing offset for future CSI prediction'".
This argument is, however, moot, as Chavva does teach this feature, and a claim mapping is provided below. Applicant also argues on pages 21-22 of the Remarks that Bai, Cheng, and Elshafie do not disclose certain features of independent claims 8 and 15. This argument is likewise moot, as Bai modified by Chavva does teach these features, and a claim mapping is provided below. For further detail on any of the above, please see the Claim Rejections sections below.

Claim Objections

Claims 3, 10, and 17 are objected to because of the following informalities; appropriate correction is required. Claims 3, 10, and 17 make reference to "AI-CFI information". However, as shown in claim 1, 'AI-CFI' is an abbreviation for the term "artificial intelligence channel feature information". Reciting 'AI-CFI information' is therefore redundant, as it would mean "artificial intelligence channel feature information information". Examiner requests that Applicant correct this issue.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 10 recites "include AI-CFI to configure the AI-CFI, wherein the AI-CFI information is one of", where this is the first mention of 'AI-CFI information'. This is an antecedent basis issue, rendering the claim indefinite. Examiner requests that Applicant correct this issue.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4, 6, 8-11, 13, 15-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bai et al. (US 20210091838 A1) in view of Chavva et al. (US 20210351885 A1).

Regarding claim 1, Bai teaches A method, comprising ([0026] method): indicating capability of a user equipment (UE) to support one of machine learning (ML) assisted channel state information (CSI) reporting or ML assisted CSI prediction ([0068]-[0069] and [Fig. 4] indication of high-mobility state from UE, where channel conditions can rapidly change and thus the UE and base station acknowledge use of a predictive machine-learning model to be used for predicting CSI); receiving ML-assisted CSI configurations, wherein the ML-assisted CSI configurations include one or more indications that enable at least one of ML-assisted CSI prediction or ML-assisted CSI reporting ([0069] and [Fig. 4] the base station may or may not instruct the UE that the predictive machine-learning model associated with predicting CSI is to be used (configurations being received that include indications which enable ML-assisted CSI prediction); [0064] and [Fig. 4] UE may determine/predict CSI which is then reported to the base station (CSI reporting), where predictions are performed using a predictive model (i.e. UE is capable of ML-assisted CSI reporting) and use the reference signals sent by the base station (ML-assisted CSI configurations indicating that ML-assisted CSI reporting is enabled); [0082] base station sends configuration information including activation indications, where the configuration information is associated with the bundle of reference signals), wherein when the one or more indications enable ML-assisted CSI reporting, the ML-assisted CSI configurations include a configuration for a CSI report quantity, artificial intelligence channel feature information (AI-CFI), an indication of an ML model for the ML-assisted CSI reporting ([0064] and [Fig. 4] UE may determine/predict CSI which is then reported to the base station (CSI reporting), where predictions are performed using a predictive model (i.e. UE is capable of ML-assisted CSI reporting) and use the reference signals sent by the base station (ML-assisted CSI configurations indicating that ML-assisted CSI reporting is enabled); [Fig. 4] configuration to indicate use of predictive model (indication of an ML model for the ML-assisted CSI reporting); [0079] and [Fig. 4] reference signals (ML-assisted CSI configurations) are used to determine the parameter associated with the predictive model (AI-CFI); [0071] and [Fig. 4] a specific number of reference signals (ML-assisted CSI configurations) are sent via a bundle (configuration for CSI report quantity, as each reference signal has CSI calculated for it)); and one of performing ML model training or receiving trained ML model parameters ([0080] UE may determine at least two parameters associated with the predictive model (performing ML model training)); receiving CSI reference signals corresponding to at least one of the ML-assisted CSI configurations ([0070]-[0071] and [Fig. 4] the base station sends a bundle of reference signals to the UE (CSI reference signals) in response to the indication that the predictive machine-learning model is to be used (corresponding to at least one of the configurations)); in response to ML-assisted CSI prediction being enabled by the one or more indications, determining and transmitting predicted CSI as feedback ([0077] and [Fig. 4] the UE determines and transmits at least one CSI 436); and in response to ML-assisted CSI reporting being enabled by the one or more indications, measuring the CSI reference signals based on a configuration for a CSI report quantity ([0072] the UE performs a measurement on the reference signals to determine different values, depending on what value is required (based on the configuration for a CSI report quantity)), and transmitting a CSI report that includes the AI-CFI ([0084] and [Fig. 4] CSI associated with reference signal(s) (CSI report) and at least one parameter associated with the predictive model (AI-CFI) are transmitted 440 together).

Bai does not explicitly teach when the one or more indications enable ML-assisted CSI prediction, the ML-assisted CSI configurations include a timing offset for future CSI prediction and an indication of an ML model used for the ML-assisted CSI prediction.

However, Chavva does teach when the one or more indications enable ML-assisted CSI prediction, the ML-assisted CSI configurations include a timing offset for future CSI prediction and an indication of an ML model used for the ML-assisted CSI prediction ([0165] and [Fig. 10] configuration to indicate a neural network model of multiple neural network models (indication of an ML model used for the ML-assisted CSI prediction); [0033] and [0124] the gNB sends the CSI-ReportConfig IE (ML-assisted CSI configurations) to configure the neural network to predict CSI at a future time instance (timing offset for future CSI prediction)).

Bai and Chavva are considered to be analogous to the claimed invention, as they are both in the same field of predicting CSI. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Bai to include the teachings of Chavva, where the configurations include a future time instance for CSI prediction and an indication of a neural network. The rationale would be to improve the accuracy of prediction and increase the optimality of the CSI report (Chavva, [0086]).

Regarding claim 2, Bai modified by Chavva teaches The method of claim 1, as is described above. Bai further teaches wherein the AI-CFI includes at least one of: a quantized output of an ML model that corresponds to compressed knowledge of a channel ([0086] the value obtained through evaluation of the predictive model is quantized; [0058] base station provides demultiplexing, packet reassembly, deciphering, and header decompression for packets; [0084] and [Fig. 4] CSI associated with reference signal(s) (CSI report) and at least one parameter associated with the predictive model (AI-CFI) are transmitted 440 together), and a quantized output of an ML model that corresponds to relevant features of the channel.

Regarding claim 3, Bai modified by Chavva teaches The method of claim 1, as is described above.
Bai further teaches wherein the ML-assisted CSI configurations include AI-CFI information to configure the AI-CFI, and wherein the AI-CFI information is one of ([0082] base station configuration information includes indication for periodicity associated with CSI and/or parameter reporting by the UE (configuring value/parameter associated with the predictive model)): a quantization method to be used to quantize an output of the ML model, a number of quantization bits to be used, a compression ratio from original CSI to the AI-CFI, or a total number of CSI feedback bits ([0110] base station can map a set of bits corresponding to the UE response (CSI feedback bits) and indicate it to the UE through control information).

Regarding claim 4, Bai modified by Chavva teaches The method of claim 1, as is described above. Bai further teaches wherein the ML-assisted CSI configurations include additional information used for selecting the ML model, the additional information comprising signal-to-noise ratio (SNR) ranges ([0069] and [Fig. 4] the base station instructs the UE that the predictive machine-learning model is to be used (selecting ML model); [0072] UE performs measurement for signal-to-noise ratio (SNR) based on values (ranges) corresponding to each of the reference signals).

Regarding claim 6, Bai modified by Chavva teaches The method of claim 1, as is described above. Bai further teaches wherein the timing offset for future CSI prediction is received as one of: a fixed value configured via a radio resource control (RRC) message or a set of values ([0069] and [Fig. 4] the base station instructs the UE that the predictive machine-learning model is to be used; [0071] and [Fig. 4] each reference signal is associated with a time offset; [0072] values (set of values) corresponding to each of the reference signals).

Regarding claim 8, Bai teaches A user equipment (UE), comprising ([0066] and [Fig. 4] UE 404): a transceiver configured to ([0052]-[0053] and [Fig. 3] UE has receiver and transmitter with an antenna): indicate capability of the UE to support one of machine learning (ML)-assisted channel state information (CSI) reporting or ML-assisted CSI prediction ([0068]-[0069] and [Fig. 4] indication of high-mobility state from UE, where channel conditions can rapidly change and thus the UE and base station acknowledge use of a predictive machine-learning model to be used for predicting CSI), receive ML-assisted CSI configurations, wherein the ML-assisted CSI configurations include one or more indications that enable at least one of: ML-assisted CSI prediction or ML-assisted CSI reporting ([0069] and [Fig. 4] the base station may or may not instruct the UE that the predictive machine-learning model associated with predicting CSI is to be used (configurations being received that include indications which enable ML-assisted CSI prediction); [0064] and [Fig. 4] UE may determine/predict CSI which is then reported to the base station (CSI reporting), where predictions are performed using a predictive model (i.e. UE is capable of ML-assisted CSI reporting) and use the reference signals sent by the base station (ML-assisted CSI configurations indicating that ML-assisted CSI reporting is enabled); [0082] base station sends configuration information including activation indications, where the configuration information is associated with the bundle of reference signals), wherein when the one or more indications enable ML-assisted CSI reporting, the ML-assisted CSI configurations include a configuration for a CSI report quantity, artificial intelligence channel feature information (AI-CFI), an indication of an ML model for the ML-assisted CSI reporting ([0064] and [Fig. 4] UE may determine/predict CSI which is then reported to the base station (CSI reporting), where predictions are performed using a predictive model (i.e. UE is capable of ML-assisted CSI reporting) and use the reference signals sent by the base station (ML-assisted CSI configurations indicating that ML-assisted CSI reporting is enabled); [Fig. 4] configuration to indicate use of predictive model (indication of an ML model for the ML-assisted CSI reporting); [0079] and [Fig. 4] reference signals (ML-assisted CSI configurations) are used to determine the parameter associated with the predictive model (AI-CFI); [0071] and [Fig. 4] a specific number of reference signals (ML-assisted CSI configurations) are sent via a bundle (configuration for CSI report quantity, as each reference signal has CSI calculated for it)), and receive CSI reference signals corresponding to at least one of the ML-assisted CSI configurations ([0070]-[0071] and [Fig. 4] the base station sends a bundle of reference signals to the UE (CSI reference signals) in response to the indication that the predictive machine-learning model is to be used (corresponding to at least one of the configurations)); and a processor configured to ([0072] UE performs): one of perform ML model training or receive trained ML model parameters ([0080] UE may determine at least two parameters associated with the predictive model (performing ML model training)), in response to ML-assisted CSI prediction being enabled by the one or more indications, determine and transmit predicted CSI as feedback ([0077] and [Fig. 4] the UE determines and transmits at least one CSI 436), and in response to ML-assisted CSI reporting being enabled by the one or more indications, measure the CSI reference signals based on a configuration for a CSI report quantity ([0072] the UE performs a measurement on the reference signals to determine different values, depending on what value is required (based on the configuration for a CSI report quantity)), and transmit a CSI report that includes the AI-CFI ([0084] and [Fig. 4] CSI associated with reference signal(s) (CSI report) and at least one parameter associated with the predictive model (AI-CFI) are transmitted 440 together).

Bai does not explicitly teach when the one or more indications enable ML-assisted CSI prediction, the ML-assisted CSI configurations include a timing offset for future CSI prediction and an indication of an ML model used for the ML-assisted CSI prediction.

However, Chavva does teach when the one or more indications enable ML-assisted CSI prediction, the ML-assisted CSI configurations include a timing offset for future CSI prediction and an indication of an ML model used for the ML-assisted CSI prediction ([0165] and [Fig. 10] configuration to indicate a neural network model of multiple neural network models (indication of an ML model used for the ML-assisted CSI prediction); [0033] and [0124] the gNB sends the CSI-ReportConfig IE (ML-assisted CSI configurations) to configure the neural network to predict CSI at a future time instance (timing offset for future CSI prediction)).

Bai and Chavva are considered to be analogous to the claimed invention, as they are both in the same field of predicting CSI. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Bai to include the teachings of Chavva, where the configurations include a future time instance for CSI prediction and an indication of a neural network. The rationale would be to improve the accuracy of prediction and increase the optimality of the CSI report (Chavva, [0086]).

Regarding claim 9, Bai modified by Chavva teaches The UE of claim 8, as is described above.
Bai further teaches wherein the AI-CFI includes at least one of: a quantized output of an ML model that corresponds to compressed knowledge of a channel ([0086] the value obtained through evaluation of the predictive model is quantized; [0058] base station provides demultiplexing, packet reassembly, deciphering, and header decompression for packets; [0084] and [Fig. 4] CSI associated with reference signal(s) (CSI report) and at least one parameter associated with the predictive model (AI-CFI) are transmitted 440 together), and a quantized output of an ML model that corresponds to relevant features of the channel.

Regarding claim 10, Bai modified by Chavva teaches The UE of claim 8, as is described above. Bai further teaches wherein the ML-assisted CSI configurations include AI-CFI to configure the AI-CFI, and wherein the AI-CFI information is one of ([0082] base station configuration information includes indication for periodicity associated with CSI and/or parameter reporting by the UE (configuring value/parameter associated with the predictive model)): a quantization method to be used to quantize an output of the ML model ([0086] value indicative of predicted CSI obtained through evaluation of the predictive model is quantized (output of the model is quantized)), a number of quantization bits to be used, a compression ratio from original CSI to the AI-CFI, or a total number of CSI feedback bits ([0110] base station can map a set of bits corresponding to the UE response (CSI feedback bits) and indicate it to the UE through control information).

Regarding claim 11, Bai modified by Chavva teaches The UE of claim 8, as is described above. Bai further teaches wherein the ML-assisted CSI configurations include additional information used for selecting the ML model, the additional information comprising signal-to-noise ratio (SNR) ranges ([0069] and [Fig. 4] the base station instructs the UE that the predictive machine-learning model is to be used (selecting ML model); [0072] UE performs measurement for signal-to-noise ratio (SNR) based on values (ranges) corresponding to each of the reference signals).

Regarding claim 13, Bai modified by Chavva teaches The UE of claim 8, as is described above. Bai further teaches wherein the timing offset for future CSI prediction is received as one of: a fixed value configured via a radio resource control (RRC) message or a set of values ([0069] and [Fig. 4] the base station instructs the UE that the predictive machine-learning model is to be used; [0071] and [Fig. 4] each reference signal is associated with a time offset; [0072] values (set of values) corresponding to each of the reference signals).

Regarding claim 15, Bai teaches A base station (BS), comprising ([0066] and [Fig. 4] base station 402): a processor ([Fig. 3] base station 310 has controller/processor); and a transceiver configured to ([Fig. 3] base station 310 has TX and RX with antenna 320) obtain an indication of capability of a user equipment (UE) to support one of machine learning (ML)-assisted channel state information (CSI) reporting or ML assisted CSI prediction ([0068]-[0069] and [Fig. 4] indication of high-mobility state from UE, where channel conditions can rapidly change and thus the UE and base station acknowledge use of a predictive machine-learning model to be used for predicting CSI), transmit ML-assisted CSI configurations, wherein the ML-assisted CSI configurations include one or more indications that enable at least one of: ML-assisted CSI prediction or ML-assisted CSI reporting ([0069] and [Fig. 4] the base station may or may not instruct the UE that the predictive machine-learning model associated with predicting CSI is to be used (configurations being received that include indications which enable ML-assisted CSI prediction); [0064] and [Fig. 4] UE may determine/predict CSI which is then reported to the base station (CSI reporting), where predictions are performed using a predictive model (i.e. UE is capable of ML-assisted CSI reporting) and use the reference signals sent by the base station (ML-assisted CSI configurations indicating that ML-assisted CSI reporting is enabled); [0082] base station sends configuration information including activation indications, where the configuration information is associated with the bundle of reference signals), wherein when the one or more indications enable ML-assisted CSI reporting, the ML-assisted CSI configurations include a configuration for a CSI report quantity, artificial intelligence channel feature information (AI-CFI), an indication of an ML model for the ML-assisted CSI reporting ([0064] and [Fig. 4] UE may determine/predict CSI which is then reported to the base station (CSI reporting), where predictions are performed using a predictive model (i.e. UE is capable of ML-assisted CSI reporting) and use the reference signals sent by the base station (ML-assisted CSI configurations indicating that ML-assisted CSI reporting is enabled); [Fig. 4] configuration to indicate use of predictive model (indication of an ML model for the ML-assisted CSI reporting); [0079] and [Fig. 4] reference signals (ML-assisted CSI configurations) are used to determine the parameter associated with the predictive model (AI-CFI); [0071] and [Fig. 4] a specific number of reference signals (ML-assisted CSI configurations) are sent via a bundle (configuration for CSI report quantity, as each reference signal has CSI calculated for it)), and transmit CSI reference signals corresponding to at least one of the ML-assisted CSI configurations ([0070]-[0071] and [Fig. 4] the base station sends a bundle of reference signals to the UE (CSI reference signals) in response to the indication that the predictive machine-learning model is to be used (corresponding to at least one of the configurations)), in response to ML-assisted CSI prediction being enabled by the one or more indications, receive predicted CSI determined by the UE as feedback ([0077] and [Fig. 4] the UE determines and transmits at least one CSI 436), and in response to ML-assisted CSI reporting being enabled by the one or more indications, receive a CSI report that includes the AI-CFI and corresponding to measurement of the CSI reference signals based on a configuration for a CSI report quantity ([0084] and [Fig. 4] CSI associated with reference signal(s) (CSI report) and at least one parameter associated with the predictive model (AI-CFI) are transmitted 440 together).

Bai does not explicitly teach when the one or more indications enable ML-assisted CSI prediction, the ML-assisted CSI configurations include a timing offset for future CSI prediction and an indication of an ML model used for the ML-assisted CSI prediction.

However, Chavva does teach when the one or more indications enable ML-assisted CSI prediction, the ML-assisted CSI configurations include a timing offset for future CSI prediction and an indication of an ML model used for the ML-assisted CSI prediction ([0165] and [Fig. 10] configuration to indicate a neural network model of multiple neural network models (indication of an ML model used for the ML-assisted CSI prediction); [0033] and [0124] the gNB sends the CSI-ReportConfig IE (ML-assisted CSI configurations) to configure the neural network to predict CSI at a future time instance (timing offset for future CSI prediction)).

Bai and Chavva are considered to be analogous to the claimed invention, as they are both in the same field of predicting CSI.
It would have been obvious to someone of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Bai to include the teachings of Chavva where the configurations include a future time instance for CSI prediction and an indication of a neural network. The rationale behind this would be to improve accuracy of prediction and increase optimality of the CSI report ([0086] Chavva). Regarding claim 16, Bai modified by Chavva teaches The BS of claim 15, as is described above. Bai further teaches wherein the AI-CFI includes at least one of ([0069] and [Fig. 4] the base station instructs the UE that the predictive machine-learning model is to be used (ML-assisted CSI prediction is configured)): a quantized output of an ML model that corresponds to compressed knowledge of a channel ([0086] the value obtained through evaluation of the predictive model is quantized; [0058] base station provides demultiplexing, packet reassembly, deciphering, and header decompression for packets; [0084] and [Fig. 4] CSI associated with reference signal(s) (CSI report) and at least one parameter associated with the predictive model (AI-CFI) are transmitted 440 together), and a quantized output of an ML model that corresponds to relevant features of the channel. Regarding claim 17, Bai modified by Chavva teaches The BS of claim 15, as is described above. 
wherein the ML-assisted configurations include AI-CFI information to configure AI-CFI, wherein the AI-CFI information is one of ([0082] base station configuration information includes indication for periodicity associated with CSI and/or parameter reporting by the UE (configuring value/parameter associated with the predictive model)): a quantization method to be used to quantize an output of the ML model ([0086] value indicative of predicted CSI is obtained through evaluation of the predictive model is quantized (output of the model is quantized)), a number of quantization bits to be used, a compression ratio from original CSI to the AI-CFI, or a total number of CSI feedback bits ([0110] base station can map a set of bits corresponding to the UE response (CSI feedback bits) and indicate it to the UE through control information). Regarding claim 18, Bai modified by Chavva teaches The BS of claim 15, as is described above. wherein the ML-assisted CSI configurations include additional information used for selecting the ML model, the additional information comprising signal-to-noise (SNR) ratio ranges ([0069] and [Fig. 4] the base station instructs the UE that the predictive machine-learning model is to be used (selecting ML model); [0072] UE performs measurement for signal to noise ratio SNR based on values (ranges) corresponding to each of the reference signals). Regarding claim 20, Bai modified by Chavva teaches The BS of claim 15, as is described above. wherein the timing offset for future CSI prediction is transmitted as one of: a fixed value configured via a radio resource control (RRC) message or a set of values ([0069] and [Fig. 4] the base station instructs the UE that the predictive machine-learning model is to be used; [0071] and [Fig. 4] each reference signal is associated with a time offset; [0072] values (set of values) corresponding to each of the reference signals). Claims 5, 7, 12, 14, and 19 are rejected under 35 U.S.C. 
103 as being unpatentable over Bai et al (US 20210091838 A1), Chavva et al (US 20210351885 A1), and further in view of Cheng et al (US 20190394758 A1). Regarding claim 5, Bai modified by Chavva teaches The method of claim 1, as is described above. Bai further teaches wherein the ML-assisted CSI configurations configure dynamic switching, based on a trigger, between ([0093] UE and base station negotiate which predictive model (switching between models) to use, in response to a request (trigger)): the ML-assisted CSI reporting and CSI reporting without ML assistance ([0093] UE and base station negotiate which predictive model (switching between models) to use; [0069] predictive model can be linear model, higher-order model (models without ML assistance), and/or neural network/machine-learning model (ML-assisted model)), Bai does not explicitly teach the rest of the limitations wherein the trigger comprises one of: for aperiodic and semi-persistent reporting on a physical uplink shared channel (PUSCH), the trigger is one of a CSI format field within a downlink control information (DCI) format 0_1 or a report configuration identifier within a list of aperiodic or semi- persistent trigger events, and for semi-persistent reporting on a physical uplink control channel (PUCCH), the trigger a dedicated field within a medium access control - control element (MAC-CE). However, Cheng does teach wherein the trigger comprises one of: for aperiodic and semi-persistent reporting on a physical uplink shared channel (PUSCH), the trigger is one of a CSI format field within a downlink control information (DCI) format 0_1 or a report configuration identifier within a list of aperiodic or semi- persistent trigger events, and for semi-persistent reporting on a physical uplink control channel (PUCCH), the trigger a dedicated field within a medium access control - control element (MAC-CE) ([0104] UE receives MAC-CE that triggers PUCCH based semi-persistent CSI reporting). 
Bai, Chavva, and Cheng are considered to be analogous to the claimed invention, as they are all in the same field of CSI reporting. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Bai/Chavva to include the teachings of Cheng, where a MAC-CE includes a trigger for PUCCH based semi-persistent CSI reporting. The rationale behind this would be to improve latency and provide a wide coverage area ([0003] Cheng).

Regarding claim 7, Bai modified by Chavva teaches The method of claim 1, as is described above. Bai further teaches wherein the ML-assisted CSI configurations dynamically switch between reporting a current instant CSI to ML-based future predicted CSI reporting using a triggering mechanism selected from ([0063] the UE is reporting CSI in real-time, which causes issues with signal degradation, so the UE and base station determine the predicted CSI (switching from instant CSI reporting to future predicted CSI reporting); [0093] the UE and base station negotiate which predictive model to use based on a request to do so (trigger); [0069] predictive model can be machine-learning model).

Bai does not explicitly teach the rest of the limitations: for aperiodic and semi-persistent reporting on a physical uplink shared channel (PUSCH), the trigger is one of a field within downlink control information (DCI) format 0_1 or a report configuration identifier within one of a list of aperiodic CSI report triggers or an information element for semi-persistent CSI reporting on the PUSCH, or for semi-persistent reporting on a physical uplink control channel (PUCCH), the trigger is a dedicated field within a medium access control - control element (MAC-CE).
However, Cheng does teach for aperiodic and semi-persistent reporting on a physical uplink shared channel (PUSCH), the trigger is one of a field within downlink control information (DCI) format 0_1 or a report configuration identifier within one of a list of aperiodic CSI report triggers or an information element for semi-persistent CSI reporting on the PUSCH, or for semi-persistent reporting on a physical uplink control channel (PUCCH), the trigger is a dedicated field within a medium access control - control element (MAC-CE) ([0104] UE receives MAC-CE that triggers PUCCH based semi-persistent CSI reporting).

Bai, Chavva, and Cheng are considered to be analogous to the claimed invention, as they are all in the same field of CSI reporting. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Bai/Chavva to include the teachings of Cheng, where a MAC-CE includes a trigger for PUCCH based semi-persistent CSI reporting. The rationale behind this would be to improve latency and provide a wide coverage area ([0003] Cheng).

Regarding claim 12, Bai modified by Chavva teaches The UE of claim 8, as is described above.
Bai further teaches wherein the ML-assisted CSI configurations configure dynamic switching, based on a trigger, between ([0093] UE and base station negotiate which predictive model (switching between models) to use, in response to a request (trigger)): the ML-assisted CSI reporting and CSI reporting without ML assistance ([0093] UE and base station negotiate which predictive model (switching between models) to use; [0069] predictive model can be linear model, higher-order model (models without ML assistance), and/or neural network/machine-learning model (ML-assisted model)).

Bai does not explicitly teach wherein the trigger comprises one of: for aperiodic and semi-persistent reporting on a physical uplink shared channel (PUSCH), the trigger is one of a CSI format field within a downlink control information (DCI) format 0_1 or a report configuration identifier within a list of aperiodic or semi-persistent trigger events, and for semi-persistent reporting on a physical uplink control channel (PUCCH), the trigger is a dedicated field within a medium access control - control element (MAC-CE).

However, Cheng does teach wherein the trigger comprises one of: for aperiodic and semi-persistent reporting on a physical uplink shared channel (PUSCH), the trigger is one of a CSI format field within a downlink control information (DCI) format 0_1 or a report configuration identifier within a list of aperiodic or semi-persistent trigger events, and for semi-persistent reporting on a physical uplink control channel (PUCCH), the trigger is a dedicated field within a medium access control - control element (MAC-CE) ([0104] UE receives MAC-CE that triggers PUCCH based semi-persistent CSI reporting).

Bai, Chavva, and Cheng are considered to be analogous to the claimed invention, as they are all in the same field of CSI reporting.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Bai/Chavva to include the teachings of Cheng, where a MAC-CE includes a trigger for PUCCH based semi-persistent CSI reporting. The rationale behind this would be to improve latency and provide a wide coverage area ([0003] Cheng).

Regarding claim 14, Bai modified by Chavva teaches The UE of claim 8, as is described above. Bai further teaches wherein the ML-assisted CSI configurations dynamically switch between reporting a current instant CSI to ML-based future predicted CSI reporting using a triggering mechanism selected from ([0063] the UE is reporting CSI in real-time, which causes issues with signal degradation, so the UE and base station determine the predicted CSI (switching from instant CSI reporting to future predicted CSI reporting); [0093] the UE and base station negotiate which predictive model to use based on a request to do so (trigger); [0069] predictive model can be machine-learning model).

Bai does not explicitly teach for aperiodic and semi-persistent reporting on a physical uplink shared channel (PUSCH), the trigger is one of a field within downlink control information (DCI) format 0_1 or a report configuration identifier within one of a list of aperiodic CSI report triggers or an information element for semi-persistent CSI reporting on the PUSCH, or for semi-persistent reporting on a physical uplink control channel (PUCCH), the trigger is a dedicated field within a medium access control - control element (MAC-CE).
However, Cheng does teach for aperiodic and semi-persistent reporting on a physical uplink shared channel (PUSCH), the trigger is one of a field within downlink control information (DCI) format 0_1 or a report configuration identifier within one of a list of aperiodic CSI report triggers or an information element for semi-persistent CSI reporting on the PUSCH, or for semi-persistent reporting on a physical uplink control channel (PUCCH), the trigger is a dedicated field within a medium access control - control element (MAC-CE) ([0104] UE receives MAC-CE that triggers PUCCH based semi-persistent CSI reporting).

Bai, Chavva, and Cheng are considered to be analogous to the claimed invention, as they are all in the same field of CSI reporting. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Bai/Chavva to include the teachings of Cheng, where a MAC-CE includes a trigger for PUCCH based semi-persistent CSI reporting. The rationale behind this would be to improve latency and provide a wide coverage area ([0003] Cheng).

Regarding claim 19, Bai modified by Chavva teaches The BS of claim 15, as is described above.
Bai further teaches wherein the ML-assisted CSI configurations configure dynamic switching, based on a trigger, between ([0093] UE and base station negotiate which predictive model (switching between models) to use, in response to a request (trigger)): the ML-assisted CSI reporting and CSI reporting without ML assistance ([0093] UE and base station negotiate which predictive model (switching between models) to use; [0069] predictive model can be linear model, higher-order model (models without ML assistance), and/or neural network/machine-learning model (ML-assisted model)).

Bai does not explicitly teach the rest of the limitations, wherein the trigger comprises one of: for aperiodic and semi-persistent reporting on a physical uplink shared channel (PUSCH), the trigger is one of a CSI format field within a downlink control information (DCI) format 0_1 or a report configuration identifier within a list of aperiodic or semi-persistent trigger events, and for semi-persistent reporting on a physical uplink control channel (PUCCH), the trigger is a dedicated field within a medium access control - control element (MAC-CE).

However, Cheng does teach wherein the trigger comprises one of: for aperiodic and semi-persistent reporting on a physical uplink shared channel (PUSCH), the trigger is one of a CSI format field within a downlink control information (DCI) format 0_1 or a report configuration identifier within a list of aperiodic or semi-persistent trigger events, and for semi-persistent reporting on a physical uplink control channel (PUCCH), the trigger is a dedicated field within a medium access control - control element (MAC-CE) ([0104] UE receives MAC-CE that triggers PUCCH based semi-persistent CSI reporting).

Bai, Chavva, and Cheng are considered to be analogous to the claimed invention, as they are all in the same field of CSI reporting.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Bai/Chavva to include the teachings of Cheng, where a MAC-CE includes a trigger for PUCCH based semi-persistent CSI reporting. The rationale behind this would be to improve latency and provide a wide coverage area ([0003] Cheng).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM JOEL CERLANEK whose telephone number is (703)756-1272. The examiner can normally be reached 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Avellino, can be reached at (571) 272-3905.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.J.C./
Examiner, Art Unit 2478

/JOSEPH E AVELLINO/
Supervisory Patent Examiner, Art Unit 2478

Prosecution Timeline

Apr 12, 2022
Application Filed
Sep 05, 2024
Non-Final Rejection — §103, §112
Dec 09, 2024
Response Filed
Mar 24, 2025
Final Rejection — §103, §112
Jun 30, 2025
Notice of Allowance
Jun 30, 2025
Response after Non-Final Action
Jul 02, 2025
Response after Non-Final Action
Sep 03, 2025
Non-Final Rejection — §103, §112
Dec 08, 2025
Response Filed
Mar 20, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604310
MINIMIZATION OF UL DROPPING DUE TO COLLISION WITH MEASUREMENT GAPS FOR UES
2y 5m to grant Granted Apr 14, 2026
Patent 12593333
Methods, System, User Equipment, and Apparatus for Uplink Resource Muting Operation in Wireless Communication
2y 5m to grant Granted Mar 31, 2026
Patent 12587987
Method and System for Improved Clock Synchronization
2y 5m to grant Granted Mar 24, 2026
Patent 12538197
ELECTRONIC DEVICE, METHOD AND STORAGE MEDIUM FOR RADIO LINK MEASUREMENT
2y 5m to grant Granted Jan 27, 2026
Patent 12520368
APPARATUS, METHOD, AND STORAGE MEDIUM FOR STATE TRANSITION MANAGEMENT IN WIRELESS COMMUNICATION
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+44.6%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
