Prosecution Insights
Last updated: April 19, 2026
Application No. 18/425,999

PERFORMANCE MONITORING FOR ARTIFICIAL INTELLIGENCE (AI)/MACHINE LEARNING (ML) FUNCTIONALITIES AND MODELS

Non-Final OA: §102, §103, §112
Filed: Jan 29, 2024
Examiner: PANNELL, MARK G
Art Unit: 2642
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Projected OA Rounds: 1-2
Time to Grant: 2y 4m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 74%, above average (298 granted / 405 resolved; +11.6% vs TC avg)
Interview Lift: +16.2% on resolved cases with interview
Avg Prosecution: 2y 4m; 25 applications currently pending
Career History: 430 total applications across all art units

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§102: 23.8% (-16.2% vs TC avg)
§112: 21.5% (-18.5% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 405 resolved cases

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows:

The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Application No. 63/494,971, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. Application No. 63/494,971 fails to provide adequate support or enablement for at least the following limitations in Applicant's claims:

In claim 1, "transmit a message based on the first functionality performed using a first ML model selected from the set of ML models based on the performance target."

In claim 11, "transmitting a message based on the first functionality performed using a first ML model selected from the set of ML models based on the performance target."

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the Applicant regards as his invention.

Claims 3 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the Applicant), regards as the invention.

Regarding claims 3 and 13, the phrase "determine whether at least one ML model of the set of ML models associated with the first functionality satisfies the performance target" renders the claim indefinite because it is unclear how an ML model may satisfy a performance target. For the purpose of this Office Action, the phrase will be interpreted as determining whether a performance of at least one ML model of the set of ML models satisfies the performance target.

Also regarding claims 3 and 13, the phrase "determine a performance of each ML model … satisfy the performance target" renders the claim indefinite because it is unclear how this step differs from the previous determining step, and also because it is grammatically incorrect and the meaning is unclear.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office Action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 4, 5, 8, 11, 12, 14, 15, 18, 21-23, and 26-28 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Leng et al. (U.S. Patent Application Publication No. 2024/0098533 A1) (hereinafter Leng).

Regarding claim 1, Leng discloses an apparatus for wireless communications (Paragraph 0046 discloses the terms “user equipment” and “UE” are used in this patent document to refer to remote wireless equipment that wirelessly accesses a BS), the apparatus comprising: at least one memory; and at least one processor coupled to the at least one memory (Figure 3 and paragraphs 0060 and 0064 disclose the UE 116 also includes a speaker 330, a processor 340, an input/output (I/O) interface (IF) 345, an input 350, a display 355, and a memory 360. The memory 360 includes an operating system (OS) 361 and one or more applications 362. The processor 340 is also capable of executing processes and programs resident in the memory 360) and configured to: output, for transmission to a network entity, capability information related to a first functionality supported by a set of machine learning (ML) models of the apparatus (Figure 4 and paragraphs 0069 and 0070 disclose the network 401 can request the UE 402 to provide radio access capability information by sending an UECapabilityEnquiry message 403 after access stratum (AS) security is setup. The UE 402 replies with an UECapabilityInformation message 404. In one embodiment, in information element (IE) UE-NR-Capability within the UECapabilityInformation message 404, the UE 402 can provide the UE's capability information on the supported AI/ML related features, including AI/ML use cases and/or use-case-specific operations, types/structures of AI/ML models, and/or types of training/inference, and/or relevant operations for model managements, etc. The UE capability of supporting AI/ML use cases can be defined per UE, and/or differently in time division duplexing (TDD) and frequency division duplexing (FDD), and/or differently in frequency range 1 (FR1) and frequency range 2 (FR2). In one example, for each use case, e.g., CSI compression, CSI prediction, beam management, and positioning, a one-bit indication is used to indicate whether the UE supports the AI/ML-based operation for the respective use case); receive, from the network entity, a performance target associated with the first functionality (Figure 6 and paragraphs 0090 and 0100 disclose at operation 601 the NW configures the UE to apply the AI/ML model for a certain use case and to perform the related operations, including AI/ML model monitoring. For operation 501/601, the model monitoring configuration can include the target value of KPIs); and transmit a message based on the first functionality performed using a first ML model selected from the set of ML models based on the performance target (Figure 7 and paragraph 0123 disclose at operation 703, the UE sends monitoring report(s), including model switch/update/refinement/transfer indication, and/or performance evaluation results (e.g., KPI gap)).
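For readers less familiar with the mapped claim language, the claim-1 flow the rejection reads onto Leng (capability report, network-configured performance target, target-driven model selection, report transmission) can be illustrated with a small sketch. All names, thresholds, and the selection policy here are hypothetical, not drawn from Leng or the application:

```python
from dataclasses import dataclass

@dataclass
class MLModel:
    name: str
    expected_accuracy: float  # e.g., fraction of correct beam predictions
    complexity: int           # relative compute cost

def select_model(models, performance_target):
    """Pick the cheapest model whose expected accuracy meets the
    network-configured performance target (hypothetical policy)."""
    qualified = [m for m in models if m.expected_accuracy >= performance_target]
    if not qualified:
        return None  # no qualified model: would trigger fallback to legacy operation
    return min(qualified, key=lambda m: m.complexity)

models = [
    MLModel("beam-predictor-small", expected_accuracy=0.88, complexity=1),
    MLModel("beam-predictor-large", expected_accuracy=0.95, complexity=3),
]
chosen = select_model(models, performance_target=0.90)
# The UE would then perform the functionality with `chosen` and transmit
# its report (the "message" of claim 1) based on that model.
```

With the 0.90 target only the larger model qualifies; a 0.80 target would select the cheaper one, and with no qualifying model the function returns None, mirroring the fallback-to-legacy branch Leng describes.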
Regarding claim 2, as applied to claim 1 above, Leng further discloses wherein the at least one processor is configured to receive a plurality of performance targets associated with the first functionality, and the plurality of performance targets are based on different complexities associated with different ML models (Paragraph 0101 discloses the KPIs can include one or more of SGCS, GCS, overhead size (e.g., the number of bits), RSRP, RSRQ, SINR, L1-RSRP, L1-SINR, RI, CQI, SLI, offsets in amplitude/phase coefficients, horizontal positioning accuracy in meters. Paragraph 0111 discloses evaluating the AI/ML model performance on spatial/temporal beam prediction, where the UE can measure the L1-RSRP and/or L1-SINR for the monitoring RSs. Based on the measurement, the UE evaluates the KPIs by comparing the measured values of L1-RSRP and/or L1-SINR with the predicted values. In one example, the UE can select the best N beams with the N highest L1-RSRP/L1-SINR from all measured beams, compare to the N predicted values, and evaluate the offsets and/or prediction accuracy. In one more example, the UE can also evaluate the performance in overhead size, e.g., comparing the estimated overhead size of legacy beam measurement report for the monitoring RS(s) and the estimated overhead size of AI/ML-based beam prediction).

Regarding claim 4, as applied to claim 1 above, Leng further discloses monitor a performance of the first ML model of the set of ML models based on information collected at the apparatus or received from the network entity (Figure 5 and paragraph 0108 disclose the UE applies the monitoring configuration if configured to evaluate the performance of the AI/ML model for a certain use case (e.g., CSI compression, CSI prediction, spatial/temporal beam prediction, positioning)).
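The KPI comparison cited from Leng's paragraph 0111 (measure L1-RSRP for the monitoring reference signals, compare the best-N measured beams against predicted values, evaluate offsets and prediction accuracy) can be sketched as follows. The function, metric definitions, and the sample RSRP values are illustrative assumptions, not Leng's actual procedure:

```python
def evaluate_beam_prediction(measured_rsrp, predicted_rsrp, n=2):
    """measured_rsrp / predicted_rsrp: dict of beam id -> L1-RSRP in dBm.
    Returns per-beam offsets for the N strongest measured beams and a
    top-N prediction accuracy (overlap of measured vs predicted best-N)."""
    best_measured = sorted(measured_rsrp, key=measured_rsrp.get, reverse=True)[:n]
    best_predicted = sorted(predicted_rsrp, key=predicted_rsrp.get, reverse=True)[:n]
    # Offset between what the UE actually measured and what the model predicted
    offsets = {b: measured_rsrp[b] - predicted_rsrp[b] for b in best_measured}
    accuracy = len(set(best_measured) & set(best_predicted)) / n
    return offsets, accuracy

measured = {"b0": -80.0, "b1": -75.0, "b2": -90.0}
predicted = {"b0": -78.0, "b1": -76.0, "b2": -85.0}
offsets, acc = evaluate_beam_prediction(measured, predicted, n=2)
# best measured beams are b1 and b0 -> offsets {"b1": 1.0, "b0": -2.0}; acc == 1.0
```

A KPI gap like `acc` falling below a configured target is the kind of result the UE would fold into the monitoring report described for operation 703.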
Regarding claim 5, as applied to claim 1 above, Leng further discloses monitor a performance of at least one inactive ML model of the set of ML models associated with the first functionality (Figure 5 and paragraph 0083 disclose the UE applies the AI/ML model on a certain use case and performs related operations according to the NW configuration (operation 501), including AI/ML model monitoring configuration. At operation 502, the UE reports assistance information for AI/ML model monitoring, if configured); and at least one of activate the first functionality, deactivate the first functionality, or switch to a second functionality in place of the first functionality based on the performance target (Figure 5 and paragraphs 0083-0088 disclose at operation 503, the UE receives the signaling/configuration from the NW to trigger fallback to legacy operation of the use case or to trigger AI/ML model adaptation including model switch/update/refinement/transfer, and the UE performs accordingly as follows: Model switch: The UE selects a qualified AI/ML model among well-trained models to be applied at the UE side; Model refinement: The UE refines the current AI/ML model at the UE side by re-training and/or re-validation using new training/validation data; Model update: The UE reconstructs/prepares a new AI/ML model to be applied at the UE side; Model transfer: The UE directly applies the AI/ML model parameters transferred from the NW; Fallback to legacy operation: UE disables the AI/ML model for the use case and enables the legacy operation (e.g., CSI measurement and report using Type-I and/or Type-II codebook, beam management and report, DL-PRS measurement and report)). 
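The adaptation options quoted from Leng's paragraphs 0083-0088 (model switch, refinement, update, transfer, or fallback to legacy operation) amount to a dispatch on a network-signaled indication. A minimal sketch, with a stub UE standing in for real device behavior; every identifier here is hypothetical:

```python
class StubUE:
    """Stand-in for device behavior; each method represents one of the
    adaptation operations quoted from Leng (hypothetical API)."""
    def switch_to_qualified_model(self):   return "switched"
    def retrain_current_model(self):       return "refined"
    def prepare_new_model(self):           return "updated"
    def load_transferred_parameters(self): return "transferred"
    def enable_legacy_operation(self):     return "fallback"

def apply_adaptation(indication, ue):
    """Dispatch a network-signaled adaptation indication to the matching
    UE-side operation (operation 503 in Leng's Figure 5, as cited)."""
    actions = {
        "switch":   ue.switch_to_qualified_model,
        "refine":   ue.retrain_current_model,
        "update":   ue.prepare_new_model,
        "transfer": ue.load_transferred_parameters,
        "fallback": ue.enable_legacy_operation,
    }
    if indication not in actions:
        raise ValueError(f"unknown adaptation indication: {indication!r}")
    return actions[indication]()

result = apply_adaptation("fallback", StubUE())  # disable AI/ML, enable legacy operation
```

The "fallback" branch corresponds to the deactivate-the-functionality limitation of claims 5 and 15; "switch" corresponds to selecting another qualified model.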
Regarding claim 8, as applied to claim 1 above, Leng further discloses train at least one ML model of the set of ML models based on one or more ML model parameters, data for training the at least one ML model, and a functionality performance target (Figure 5 and paragraph 0083 disclose at operation 503, the UE receives the signaling/configuration from the NW to trigger fallback to legacy operation of the use case or to trigger AI/ML model adaptation including model switch/update/refinement/transfer, and the UE performs accordingly as follows: Model refinement: The UE refines the current AI/ML model at the UE side by re-training and/or re-validation using new training/validation data).

Regarding claim 11, Leng discloses a method of wireless communications at a user equipment (UE) (Figure 4 and paragraph 0069 disclose the UE 402 can provide the UE's capability information on the supported AI/ML related features, including AI/ML use cases and/or use-case-specific operations, types/structures of AI/ML models, and/or types of training/inference, and/or relevant operations for model managements, etc.), the method comprising: transmitting, to a network entity, capability information related to a first functionality supported by a set of machine learning (ML) models of the UE (Figure 4 and paragraphs 0069 and 0070 disclose the network 401 can request the UE 402 to provide radio access capability information by sending an UECapabilityEnquiry message 403 after access stratum (AS) security is setup. The UE 402 replies with an UECapabilityInformation message 404. In one embodiment, in information element (IE) UE-NR-Capability within the UECapabilityInformation message 404, the UE 402 can provide the UE's capability information on the supported AI/ML related features, including AI/ML use cases and/or use-case-specific operations, types/structures of AI/ML models, and/or types of training/inference, and/or relevant operations for model managements, etc. The UE capability of supporting AI/ML use cases can be defined per UE, and/or differently in time division duplexing (TDD) and frequency division duplexing (FDD), and/or differently in frequency range 1 (FR1) and frequency range 2 (FR2). In one example, for each use case, e.g., CSI compression, CSI prediction, beam management, and positioning, a one-bit indication is used to indicate whether the UE supports the AI/ML-based operation for the respective use case); receiving, from the network entity, a performance target associated with the first functionality (Figure 6 and paragraphs 0090 and 0100 disclose at operation 601 the NW configures the UE to apply the AI/ML model for a certain use case and to perform the related operations, including AI/ML model monitoring. For operation 501/601, the model monitoring configuration can include the target value of KPIs); and transmitting a message based on the first functionality performed using a first ML model selected from the set of ML models based on the performance target (Figure 7 and paragraph 0123 disclose at operation 703, the UE sends monitoring report(s), including model switch/update/refinement/transfer indication, and/or performance evaluation results (e.g., KPI gap)).

Regarding claim 12, as applied to claim 11 above, Leng further discloses wherein receiving the performance target comprises receiving a plurality of performance targets associated with the first functionality (Paragraph 0101 discloses the KPIs can include one or more of SGCS, GCS, overhead size (e.g., the number of bits), RSRP, RSRQ, SINR, L1-RSRP, L1-SINR, RI, CQI, SLI, offsets in amplitude/phase coefficients, horizontal positioning accuracy in meters. Paragraph 0111 discloses evaluating the AI/ML model performance on spatial/temporal beam prediction, where the UE can measure the L1-RSRP and/or L1-SINR for the monitoring RSs. Based on the measurement, the UE evaluates the KPIs by comparing the measured values of L1-RSRP and/or L1-SINR with the predicted values. In one example, the UE can select the best N beams with the N highest L1-RSRP/L1-SINR from all measured beams, compare to the N predicted values, and evaluate the offsets and/or prediction accuracy. In one more example, the UE can also evaluate the performance in overhead size, e.g., comparing the estimated overhead size of legacy beam measurement report for the monitoring RS(s) and the estimated overhead size of AI/ML-based beam prediction).

Regarding claim 14, as applied to claim 11 above, Leng further discloses monitoring a performance of the first ML model of the set of ML models based on information collected at the UE or received from the network entity (Figure 5 and paragraph 0108 disclose the UE applies the monitoring configuration if configured to evaluate the performance of the AI/ML model for a certain use case (e.g., CSI compression, CSI prediction, spatial/temporal beam prediction, positioning)).

Regarding claim 15, as applied to claim 11 above, Leng further discloses monitoring a performance of at least one inactive ML model of the set of ML models associated with the first functionality (Figure 5 and paragraph 0083 disclose the UE applies the AI/ML model on a certain use case and performs related operations according to the NW configuration (operation 501), including AI/ML model monitoring configuration. At operation 502, the UE reports assistance information for AI/ML model monitoring, if configured); and at least one of activating the first functionality, deactivating the first functionality, or switching to a second functionality in place of the first functionality based on the performance target (Figure 5 and paragraphs 0083-0088 disclose at operation 503, the UE receives the signaling/configuration from the NW to trigger fallback to legacy operation of the use case or to trigger AI/ML model adaptation including model switch/update/refinement/transfer, and the UE performs accordingly as follows: Model switch: The UE selects a qualified AI/ML model among well-trained models to be applied at the UE side; Model refinement: The UE refines the current AI/ML model at the UE side by re-training and/or re-validation using new training/validation data; Model update: The UE reconstructs/prepares a new AI/ML model to be applied at the UE side; Model transfer: The UE directly applies the AI/ML model parameters transferred from the NW; Fallback to legacy operation: UE disables the AI/ML model for the use case and enables the legacy operation (e.g., CSI measurement and report using Type-I and/or Type-II codebook, beam management and report, DL-PRS measurement and report)).

Regarding claim 18, as applied to claim 11 above, Leng further discloses training at least one ML model of the set of ML models based on one or more ML model parameters, data for training the at least one ML model, and a functionality performance target (Figure 5 and paragraph 0083 disclose at operation 503, the UE receives the signaling/configuration from the NW to trigger fallback to legacy operation of the use case or to trigger AI/ML model adaptation including model switch/update/refinement/transfer, and the UE performs accordingly as follows: Model refinement: The UE refines the current AI/ML model at the UE side by re-training and/or re-validation using new training/validation data).
Regarding claim 21, Leng discloses an apparatus for wireless communications (Paragraph 0046 discloses the terms “BS” and “TRP” are used interchangeably in this patent document to refer to network infrastructure components that provide wireless access to remote terminals), the apparatus comprising: one or more memories; and one or more processors coupled to the one or more memories (Figure 2 and paragraphs 0051 and 0055 disclose the gNB 102 includes multiple antennas 205a-205n, multiple transceivers 210a-210n, a controller/processor 225, a memory 230, and a backhaul or network interface 235. The controller/processor 225 is also capable of executing programs and other processes resident in the memory 230, such as processes for supporting AI/ML model management and adaptation operation of one or more of the UEs in a wireless communication system) and configured to: receive, from a user equipment (UE), capability information related to a first functionality supported by a set of machine learning (ML) models of the UE (Figure 4 and paragraphs 0069 and 0070 disclose the network 401 can request the UE 402 to provide radio access capability information by sending an UECapabilityEnquiry message 403 after access stratum (AS) security is setup. The UE 402 replies with an UECapabilityInformation message 404. In one embodiment, in information element (IE) UE-NR-Capability within the UECapabilityInformation message 404, the UE 402 can provide the UE's capability information on the supported AI/ML related features, including AI/ML use cases and/or use-case-specific operations, types/structures of AI/ML models, and/or types of training/inference, and/or relevant operations for model managements, etc. The UE capability of supporting AI/ML use cases can be defined per UE, and/or differently in time division duplexing (TDD) and frequency division duplexing (FDD), and/or differently in frequency range 1 (FR1) and frequency range 2 (FR2). In one example, for each use case, e.g., CSI compression, CSI prediction, beam management, and positioning, a one-bit indication is used to indicate whether the UE supports the AI/ML-based operation for the respective use case); and output, for transmission to the UE, a performance target associated with the first functionality (Figure 6 and paragraphs 0090 and 0100 disclose at operation 601 the NW configures the UE to apply the AI/ML model for a certain use case and to perform the related operations, including AI/ML model monitoring. For operation 501/601, the model monitoring configuration can include the target value of KPIs).

Regarding claim 22, as applied to claim 21 above, Leng further discloses wherein the one or more processors are configured to output, for transmission to the UE, a plurality of performance targets associated with the first functionality (Paragraph 0101 discloses the KPIs can include one or more of SGCS, GCS, overhead size (e.g., the number of bits), RSRP, RSRQ, SINR, L1-RSRP, L1-SINR, RI, CQI, SLI, offsets in amplitude/phase coefficients, horizontal positioning accuracy in meters. Paragraph 0111 discloses evaluating the AI/ML model performance on spatial/temporal beam prediction, where the UE can measure the L1-RSRP and/or L1-SINR for the monitoring RSs. Based on the measurement, the UE evaluates the KPIs by comparing the measured values of L1-RSRP and/or L1-SINR with the predicted values. In one example, the UE can select the best N beams with the N highest L1-RSRP/L1-SINR from all measured beams, compare to the N predicted values, and evaluate the offsets and/or prediction accuracy. In one more example, the UE can also evaluate the performance in overhead size, e.g., comparing the estimated overhead size of legacy beam measurement report for the monitoring RS(s) and the estimated overhead size of AI/ML-based beam prediction).
Regarding claim 23, as applied to claim 21 above, Leng further discloses output, for transmission to the UE, information of a first ML model of the set of ML models associated with the first functionality (Figure 6 and paragraph 0090 disclose at operation 601 the NW configures the UE to apply the AI/ML model for a certain use case and to perform the related operations, including AI/ML model monitoring).

Regarding claim 26, Leng discloses a method of wireless communications at a network entity (Figure 4 and paragraph 0069 disclose the network 401 can request the UE 402 to provide radio access capability information by sending an UECapabilityEnquiry message 403 after access stratum (AS) security is setup), the method comprising: receiving, from a user equipment (UE), capability information related to a first functionality supported by a set of machine learning (ML) models of the UE (Figure 4 and paragraphs 0069 and 0070 disclose the network 401 can request the UE 402 to provide radio access capability information by sending an UECapabilityEnquiry message 403 after access stratum (AS) security is setup. The UE 402 replies with an UECapabilityInformation message 404. In one embodiment, in information element (IE) UE-NR-Capability within the UECapabilityInformation message 404, the UE 402 can provide the UE's capability information on the supported AI/ML related features, including AI/ML use cases and/or use-case-specific operations, types/structures of AI/ML models, and/or types of training/inference, and/or relevant operations for model managements, etc. The UE capability of supporting AI/ML use cases can be defined per UE, and/or differently in time division duplexing (TDD) and frequency division duplexing (FDD), and/or differently in frequency range 1 (FR1) and frequency range 2 (FR2). In one example, for each use case, e.g., CSI compression, CSI prediction, beam management, and positioning, a one-bit indication is used to indicate whether the UE supports the AI/ML-based operation for the respective use case); and transmitting, to the UE, a performance target associated with the first functionality (Figure 6 and paragraphs 0090 and 0100 disclose at operation 601 the NW configures the UE to apply the AI/ML model for a certain use case and to perform the related operations, including AI/ML model monitoring. For operation 501/601, the model monitoring configuration can include the target value of KPIs).

Regarding claim 27, as applied to claim 26 above, Leng further discloses transmitting, to the UE, a plurality of performance targets associated with the first functionality (Paragraph 0101 discloses the KPIs can include one or more of SGCS, GCS, overhead size (e.g., the number of bits), RSRP, RSRQ, SINR, L1-RSRP, L1-SINR, RI, CQI, SLI, offsets in amplitude/phase coefficients, horizontal positioning accuracy in meters. Paragraph 0111 discloses evaluating the AI/ML model performance on spatial/temporal beam prediction, where the UE can measure the L1-RSRP and/or L1-SINR for the monitoring RSs. Based on the measurement, the UE evaluates the KPIs by comparing the measured values of L1-RSRP and/or L1-SINR with the predicted values. In one example, the UE can select the best N beams with the N highest L1-RSRP/L1-SINR from all measured beams, compare to the N predicted values, and evaluate the offsets and/or prediction accuracy. In one more example, the UE can also evaluate the performance in overhead size, e.g., comparing the estimated overhead size of legacy beam measurement report for the monitoring RS(s) and the estimated overhead size of AI/ML-based beam prediction).
Regarding claim 28, as applied to claim 26 above, Leng further discloses transmitting, to the UE, information of a first ML model of the set of ML models associated with the first functionality (Figure 6 and paragraph 0090 disclose at operation 601 the NW configures the UE to apply the AI/ML model for a certain use case and to perform the related operations, including AI/ML model monitoring).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Leng in view of Chavva et al. (U.S. Patent Application Publication No. 2021/0351885 A1) (hereinafter Chavva).

Regarding claim 6, as applied to claim 1 above, Leng discloses the claimed invention except for explicitly disclosing monitor a performance of the first ML model based on information collected at the apparatus based on an expected performance associated with the first ML model.

In analogous art, Chavva discloses monitor a performance of the first ML model based on information collected at the apparatus based on an expected performance associated with the first ML model (Paragraph 0033 discloses determining, by a User Equipment (UE) (601), a plurality of radio parameters for a connection between the UE (601) and a Next Generation node B (gNB) (607); computing, by a neural network (602c) in the UE (601), values of CSI feedback parameters at a current time instance based on the determined plurality of radio parameters; predicting, by the neural network (602c), probable values of the CSI feedback parameters at a future time instance; generating, by the neural network (602c), a CSI report by compiling at least one of the computed values of the CSI feedback parameters and the predicted values of the CSI feedback parameters; and transmitting, by the UE (601), the CSI report to the gNB (607)).
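Chavva's paragraph 0033, as cited, describes computing current CSI feedback values, predicting future values, and compiling both into one report. A toy sketch of that compute-predict-compile flow follows; the stand-in model and its CQI arithmetic are invented for illustration and are not from Chavva:

```python
def compile_csi_report(radio_params, model):
    """Compile current and predicted CSI feedback into one report,
    mirroring the compute -> predict -> compile -> transmit sequence."""
    current = model.compute(radio_params)    # values at the current time instance
    predicted = model.predict(radio_params)  # probable values at a future instance
    return {"current": current, "predicted": predicted}

class ToyModel:
    """Stand-in for Chavva's neural network (602c); the CQI arithmetic
    below is made up purely for illustration."""
    def compute(self, p):
        return {"cqi": int(p["sinr_db"] // 2)}
    def predict(self, p):
        # pretend the link degrades slightly by the future instance
        return {"cqi": int((p["sinr_db"] - 2.0) // 2)}

report = compile_csi_report({"sinr_db": 14.0}, ToyModel())
# report == {"current": {"cqi": 7}, "predicted": {"cqi": 6}}
```

The gap between the computed and predicted entries is the sort of signal the combination rationale treats as monitoring information at the device.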
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to incorporate compiling a report of computed values and predicted values based on a neural network, as described in Chavva, with receiving an ML model and monitoring performance, as described in Leng, because doing so is combining prior art elements according to known methods to yield predictable results. Combining compiling a report of computed values and predicted values based on a neural network of Chavva with receiving an ML model and monitoring performance of Leng was within the ordinary ability of one of ordinary skill in the art based on the teachings of Chavva. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Leng and Chavva to obtain the invention as specified in claim 6.

Regarding claim 16, as applied to claim 11 above, Leng discloses the claimed invention except for explicitly disclosing monitoring a performance of the first ML model based on information collected at the UE based on an expected performance associated with the first ML model.
In analogous art, Chavva discloses monitoring a performance of the first ML model based on information collected at the UE based on an expected performance associated with the first ML model (Paragraph 0033 discloses determining, by a User Equipment (UE) (601), a plurality of radio parameters for a connection between the UE (601) and a Next Generation node B (gNB) (607); computing, by a neural network (602c) in the UE (601), values of CSI feedback parameters at a current time instance based on the determined plurality of radio parameters; predicting, by the neural network (602c), probable values of the CSI feedback parameters at a future time instance; generating, by the neural network (602c), a CSI report, by compiling at least one of the computed values of the CSI feedback parameters and the predicted values the CSI feedback parameters; and transmitting, by the UE (601), the CSI report to the gNB (607)). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to incorporate receiving an ML model and monitoring performance, as described in Chavva, with compiling a report of computed values and predicted values based on a neural network, as described in Leng, because doing so is combining prior art elements according to known methods to yield predictable results. Combining compiling a report of computed values and predicted values based on a neural network of Chavva with receiving an ML model and monitoring performance of Leng was within the ordinary ability of one of ordinary skill in the art based on the teachings of Chavva. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Leng and Chavva to obtain the invention as specified in claim 16. Claims 10, 20, 25, and 30 are rejected under 35 U.S.C. 
103 as being unpatentable over Leng in view of Bai et al. (U.S. Patent Application Publication No. 2021/0390434 A1) (hereinafter Bai).

Regarding claim 10, as applied to claim 1 above, Leng discloses the claimed invention except that it does not explicitly disclose wherein at least one ML model of the set of ML models is trained by at least one of the network entity or another network entity in communication with the apparatus. In analogous art, Bai discloses wherein at least one ML model of the set of ML models is trained by at least one of the network entity or another network entity in communication with the apparatus (Paragraph 0053 discloses that the training and updating of the machine learning-based models may occur at the base station). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to incorporate training machine learning-based models at a base station, as described in Bai, with a UE receiving ML models from a network, as described in Leng, because doing so combines prior art elements according to known methods to yield predictable results. Combining the base station training of machine learning-based models of Bai with the UE receiving ML models from a network of Leng was within the ordinary ability of one of ordinary skill in the art based on the teachings of Bai. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Leng and Bai to obtain the invention as specified in claim 10.

Regarding claim 20, as applied to claim 11 above, Leng discloses the claimed invention except that it does not explicitly disclose wherein at least one ML model of the set of ML models is trained by at least one of the network entity or another network entity in communication with the UE.
In analogous art, Bai discloses wherein at least one ML model of the set of ML models is trained by at least one of the network entity or another network entity in communication with the UE (Paragraph 0053 discloses that the training and updating of the machine learning-based models may occur at the base station). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to incorporate training machine learning-based models at a base station, as described in Bai, with a UE receiving ML models from a network, as described in Leng, because doing so combines prior art elements according to known methods to yield predictable results. Combining the base station training of machine learning-based models of Bai with the UE receiving ML models from a network of Leng was within the ordinary ability of one of ordinary skill in the art based on the teachings of Bai. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Leng and Bai to obtain the invention as specified in claim 20.

Regarding claim 25, as applied to claim 21 above, Leng discloses the claimed invention except that it does not explicitly disclose wherein at least one ML model of the set of ML models is trained by at least one of the apparatus or a network entity in communication with the UE. In analogous art, Bai discloses wherein at least one ML model of the set of ML models is trained by at least one of the apparatus or a network entity in communication with the UE (Paragraph 0053 discloses that the training and updating of the machine learning-based models may occur at the base station).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to incorporate training machine learning-based models at a base station, as described in Bai, with a UE receiving ML models from a network, as described in Leng, because doing so combines prior art elements according to known methods to yield predictable results. Combining the base station training of machine learning-based models of Bai with the UE receiving ML models from a network of Leng was within the ordinary ability of one of ordinary skill in the art based on the teachings of Bai. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Leng and Bai to obtain the invention as specified in claim 25.

Regarding claim 30, as applied to claim 26 above, Leng discloses the claimed invention except that it does not explicitly disclose wherein at least one ML model of the set of ML models is trained by at least one of the network entity or another network entity in communication with the UE. In analogous art, Bai discloses wherein at least one ML model of the set of ML models is trained by at least one of the network entity or another network entity in communication with the UE (Paragraph 0053 discloses that the training and updating of the machine learning-based models may occur at the base station). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to incorporate training machine learning-based models at a base station, as described in Bai, with a UE receiving ML models from a network, as described in Leng, because doing so combines prior art elements according to known methods to yield predictable results.
Combining the base station training of machine learning-based models of Bai with the UE receiving ML models from a network of Leng was within the ordinary ability of one of ordinary skill in the art based on the teachings of Bai. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the teachings of Leng and Bai to obtain the invention as specified in claim 30.

Allowable Subject Matter

Claims 7, 9, 17, 19, 24, and 29 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claims 3 and 13 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, set forth in this Office Action and to include all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter:

Considering claims 3 and 13, the best prior art found during the prosecution of the present application, Leng, fails to disclose, teach, or suggest the limitations of determining whether at least one ML model of the set of ML models associated with the first functionality satisfies the performance target related to at least one of training, validation, or test metrics; and determining that a performance of each ML model of the set of ML models associated with the first functionality satisfies the performance target, in combination with and in the context of all of the other limitations in claims 3 and 13 and all of the limitations of the base claim and any intervening claims.
Considering claims 7 and 17, the best prior art found during the prosecution of the present application, Leng, fails to disclose, teach, or suggest the limitations of at least one of activating the first ML model based on the expected performance, deactivating the first ML model based on the performance of the first ML model, selecting a second ML model to achieve the performance target, activating the second ML model in place of the first ML model, or activating a non-ML model to perform the first functionality, in combination with and in the context of all of the other limitations in claims 7 and 17 and all of the limitations of the base claim and any intervening claims.

Considering claims 9 and 19, the best prior art found during the prosecution of the present application, Leng, fails to disclose, teach, or suggest the limitations of obtaining an expected performance associated with the first functionality based on the training of the at least one ML model, wherein a value of the expected performance is greater than a value of the functionality performance target, in combination with and in the context of all of the other limitations in claims 9 and 19 and all of the limitations of the base claim and any intervening claims.

Considering claims 24 and 29, the best prior art found during the prosecution of the present application, Leng, fails to disclose, teach, or suggest the limitations of receiving, from the UE, an expected performance associated with the first functionality based on training of at least one ML model of the set of ML models, wherein a value of the expected performance is greater than a value of a functionality performance target, in combination with and in the context of all of the other limitations in claims 24 and 29 and all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. Wang et al. (U.S.
Patent Application Publication No. 2020/0367051 A1) discloses a terminal and base station; Calzolari et al. (U.S. Patent Application Publication No. 2020/0412417 A1) discloses dynamic thresholds for antenna switching diversity; Ma et al. (U.S. Patent Application Publication No. 2021/0160149 A1) discloses personalized tailored air interface; Madadi et al. (U.S. Patent Application Publication No. 2022/0286927 A1) discloses a method and apparatus for support of machine learning or artificial intelligence techniques for handover management in communication systems; Kwon et al. (U.S. Patent Application Publication No. 2023/0145844 A1) discloses a method and apparatus for transmitting channel information based on machine learning; Bhamri et al. (U.S. Patent Application Publication No. 2023/0164817 A1) discloses artificial intelligence capability reporting for wireless communication; Zeng et al. (U.S. Patent Application Publication No. 2023/0209390 A1) discloses an intelligent radio access network; Park et al. (U.S. Patent Application Publication No. 2023/0354063 A1) discloses a method and apparatus for configuring artificial neural network for wireless communication in mobile communication system; Jeon (U.S. Patent Application Publication No. 2024/0097764 A1) discloses CSI feedback in cellular systems; Esswie et al. (U.S. Patent Application Publication No. 2024/0187877 A1) discloses artificial intelligence radio function model management in a communication network; Rydén et al. (U.S. Patent Application Publication No. 2024/0292236 A1) discloses methods and apparatuses for provisioning a wireless device with prediction information; Hasegawa et al. (U.S. Patent Application Publication No. 2024/0295625 A1) discloses methods and apparatus for training based positioning in wireless communication systems; Wu et al. (U.S. Patent Application Publication No. 2024/0306119 A1) discloses a communication method and apparatus; Sun et al. (U.S. Patent Application Publication No. 
2024/0320488 A1) discloses a communication method and communication apparatus; Wang et al. (U.S. Patent Application Publication No. 2024/0333601 A1) discloses a wireless network employing neural networks for channel state feedback; Kela et al. (U.S. Patent Application Publication No. 2024/0340680 A1) discloses selective learning for UE-reported values; Shrivastava (U.S. Patent Application Publication No. 2024/0340634 A1) discloses autonomous operation of user equipment with artificial intelligence/machine learning model capability; Hassan et al. (U.S. Patent Application Publication No. 2024/0378488 A1) discloses an ML model policy with difference information for ML model update for wireless networks; Beluri et al. (U.S. Patent Application Publication No. 2025/0038816 A1) discloses pre-processing for CSI compression in wireless systems; Hu et al. (U.S. Patent Application Publication No. 2025/0168716 A1) discloses a method and system for an artificial intelligence (AI)-based handover procedure; Kim et al. (U.S. Patent Application Publication No. 2025/0203694 A1) discloses a method and device for transmitting updated information for ML reconfiguration in a wireless LAN system; Lee et al. (U.S. Patent Application Publication No. 2025/0219748 A1) discloses a learning-based signal receiving method and device; Echigo et al. (U.S. Patent Application Publication No. 2025/0350539 A1) discloses a terminal, radio communication method, and base station; and Estevez et al. (U.S. Patent Application Publication No. 2025/0392523 A1) discloses artificial intelligence and machine learning models management and/or training.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to MARK G. PANNELL, whose telephone number is (303) 297-4245. The Examiner can normally be reached Monday through Friday, 8:00 am to 3:00 pm (Mountain Time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Rafael Perez-Gutierrez can be reached on (571) 272-7915. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000. /Mark G. Pannell/Primary Examiner, Art Unit 2642

Prosecution Timeline

Jan 29, 2024
Application Filed
Feb 19, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598582
PAGING CONFIGURATION METHODS AND APPARATUSES, PAGING METHODS AND APPARATUSES
2y 5m to grant Granted Apr 07, 2026
Patent 12593309
METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR WIRELESS COMMUNICATION
2y 5m to grant Granted Mar 31, 2026
Patent 12587999
METHOD AND APPARATUS FOR DISTINGUISHING PAGING CAPABILITY OF BASE STATION, AND COMMUNICATION DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12580602
ACCESSORY SUPPORT DEVICES FOR ELECTRONIC DEVICES
2y 5m to grant Granted Mar 17, 2026
Patent 12574732
PROVIDING LOCATION-BASED TELECOMMUNICATIONS RESOURCES TO USERS SYSTEMS AND METHODS
2y 5m to grant Granted Mar 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
90%
With Interview (+16.2%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 405 resolved cases by this examiner. Grant probability derived from career allow rate.
