Prosecution Insights
Last updated: April 19, 2026
Application No. 18/408,060

FUNCTIONALITY BASED TWO-SIDED MACHINE LEARNING OPERATIONS

Status: Non-Final OA (§103)
Filed: Jan 09, 2024
Examiner: LIU, JUNG-JEN
Art Unit: 2473
Tech Center: 2400 (Computer Networks)
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 89% (above average; 1070 granted / 1198 resolved; +31.3% vs TC avg)
Interview Lift: +4.7% (minimal, roughly +5%; based on resolved cases with interview)
Typical Timeline: 2y 7m average prosecution; 36 applications currently pending
Career History: 1234 total applications across all art units
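The headline figures above can be reproduced from the raw career counts. A minimal sketch in Python, assuming the grant probability is simply the career allow rate and the "with interview" figure adds the stated +4.7 percentage-point lift (variable names are illustrative, not the tool's actual schema):

```python
# Sketch: reproducing the headline examiner statistics from the raw
# career counts shown above. Variable names are illustrative; the
# +4.7 point lift is taken directly from the page.

granted = 1070          # career grants
resolved = 1198         # career resolved cases
interview_lift = 4.7    # percentage-point lift with an interview

allow_rate = granted / resolved * 100   # career allow rate, in percent

print(f"Career allow rate: {round(allow_rate)}%")                   # 89%
print(f"With interview:    {round(allow_rate + interview_lift)}%")  # 94%
```

The rounding matches the dashboard: 1070/1198 is about 89.3%, and adding 4.7 points yields roughly 94%.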

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 71.4% (+31.4% vs TC avg)
§102: 5.6% (-34.4% vs TC avg)
§112: 2.9% (-37.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 1198 resolved cases.
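The "vs TC avg" deltas in the table appear to be the examiner's per-statute rate minus the Tech Center average estimate, so the implied TC average can be recovered by subtraction. A small sketch under that assumption (dictionary keys and names are illustrative):

```python
# Sketch: recovering the implied Tech Center average for each statute
# as (examiner rate - delta). Values are taken from the table above.

examiner = {"101": 6.2, "103": 71.4, "102": 5.6, "112": 2.9}
delta = {"101": -33.8, "103": 31.4, "102": -34.4, "112": -37.1}

tc_avg = {statute: round(examiner[statute] - delta[statute], 1)
          for statute in examiner}

print(tc_avg)   # each statute's implied TC average works out to 40.0
```

That every implied average lands on the same 40.0 suggests a single baseline estimate is being used across statutes.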

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 103

2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

2a. Claims 1-26 are rejected under 35 U.S.C. 103 as being unpatentable over Lutchoomun (US 20250350383 A1) in view of Ottrsten (US 20210345134 A1).

2b. Summary of the Cited Prior Art

Lutchoomun discloses a method for machine learning channel state reporting. Ottrsten discloses a method for improving network performance by machine learning.

2c.
Claim Analysis

Regarding Claim 1, Lutchoomun discloses: An apparatus (Fig 1B) for wireless communications, the apparatus comprising: at least one memory (Fig 1B, Memory 130); and at least one processor (Fig 1B, Processor 118) coupled to the at least one memory, the at least one processor being configured to: receive a first set of operations (Fig 4, CSI-RS for online Training) supported by one or more machine learning models (Fig 3 Receiver ML) of a network entity (Fig 4, Model Trainer WTRU); receive a first set of parameters (Fig 4, Data Distribution Statistics 413) associated with the first set of operations (Fig 4, CSI-RS for online Training), wherein the first set of parameters (Fig 4, Data Distribution Statistics 413) are supported by the one or more machine learning models (Fig 3 Receiver ML; see: [0199] … each ML model may comprise any of: performance metric, threshold T; data distribution statistics (e.g., Doppler, SINR); and model parameters (e.g., size, latency, RS overhead)) of the network entity (Fig 4, Model Trainer WTRU); select a machine learning model (Fig 5, CSI-RS for Model Selection 515; see: [0211] … the WTRU selects a model whose data distribution most closely matches its environmental parameters) for performing a first operation of the first set of operations (Fig 4, CSI-RS for online Training) based on the first set of parameters (Fig 4, Data Distribution Statistics 413); detect a change in at least one of (see: [0150] … to check for AIML model suitability; configuration, activation and/or deactivation of a cell; change in channel conditions): the first operation; or a parameter associated with the first operation (see [0201] … WTRU determines that no available trained model parameters match its requirements); and transmit an indication (Fig 4, Indicate that no Current Model Fits 417) to change the first operation based on the detected change (see: [0201] … the WTRU may send an indication 417 to the gNB that no current ML model fits its requirements).

Lutchoomun does not elaborate on a network "change". However, Ottrsten discloses: detect a change in at least one of (Fig 3, 305 Update the machine learning model; see: [0058] … Refined and reinforcement learning may be used to continuously update the one or more machine learning models based on new inputs. This provides flexibility if something in the network environment changes).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate Lutchoomun's method for machine learning channel state reporting with Ottrsten's method for improving network performance by machine learning, with the motivation being to improve the performance of a wireless communications network (Ottrsten, [0001]).
Regarding Claim 2, Lutchoomun discloses: wherein the at least one processor (Fig 1B, Processor 118) is further configured to: determine a second set of operations (Fig 4, ML Model Transfer Request, 421; Examiner's Note: the ML Model Transfer Request functions as a second operation) supported by one or more machine learning models (Fig 3 Receiver ML) of the apparatus (Fig 1B); determine a second set of parameters (Fig 4, ML Model Transfer Request DCI, 421; Examiner's Note: the DCI includes the parameters for the transfer request) associated with the second set of operations (Fig 4, ML Model Transfer Request, 421), wherein the second set of parameters (Fig 4, ML Model Transfer Request DCI, 421; Examiner's Note: the DCI includes the parameters for the transfer request) are supported by the one or more machine learning models (Fig 3 Receiver ML) of the apparatus (Fig 1B); and transmit, to the network entity (Fig 4, Model Trainer WTRU), the second set of operations and the second set of parameters.

Regarding Claim 3, Lutchoomun discloses: wherein the first set of operations (Fig 4, CSI-RS for online Training) and the first set of parameters (Fig 4, Data Distribution Statistics 413) are based on the second set of operations and the second set of parameters (Fig 4, ML Model Transfer Request and DCI, 421; Examiner's Note: the ML Model Transfer Request functions as a second operation and the DCI includes the parameters for the transfer request. Further, the first operations and parameters are for the second operations).

Regarding Claim 4, Lutchoomun discloses: wherein the first set of operations (Fig 4, CSI-RS for online Training) and the first set of parameters (Fig 4, Data Distribution Statistics 413) are received in a unicast, multicast, or broadcast (see: [0099] In an example, the gNB may send a list of one or more trained models to a WTRU (e.g., through broadcast in a System Information Block (SIB))) from the network entity (Fig 4, Model Trainer WTRU).
Regarding Claim 5, Lutchoomun discloses: wherein the at least one processor (Fig 1B, Processor 118) is further configured to: receive an activation message (see: [0150] … a WTRU may be triggered with … configuration, activation and/or deactivation of a cell), wherein the activation message is configured to activate (see: [0074] … WTRU reports related measurements only when the resource is activated) a second operation of the first set of operations (Fig 4, CSI-RS for online Training); and select a machine learning model (Fig 5, CSI-RS for Model Selection 515; see: [0211] … the WTRU selects a model whose data distribution most closely matches its environmental parameters) to perform the second operation (Fig 4, ML Model Transfer Request, 421; Examiner's Note: the ML Model Transfer Request functions as a second operation).

Regarding Claim 6, Lutchoomun discloses: wherein the at least one processor (Fig 1B, Processor 118) is further configured to: receive a deactivation message (see: [0150] … a WTRU may be triggered with … configuration, activation and/or deactivation of a cell), wherein the deactivation message specifies the first operation of the first set of operations (Fig 4, CSI-RS for online Training); and based on the deactivation message (see: [0150] … a WTRU may be triggered with … configuration, activation and/or deactivation of a cell), stop performing the first operation (see: [0074] … WTRU reports related measurements only when the resource is activated; Examiner's Note: the disclosures imply that the operation is stopped when the deactivation message is received or deactivation is configured) using the selected machine learning model (Fig 5, CSI-RS for Model Selection 515; see: [0211] … the WTRU selects a model whose data distribution most closely matches its environmental parameters).
Regarding Claim 7, Lutchoomun discloses: wherein the at least one processor (Fig 1B, Processor 118) is further configured to transmit (Fig 4, ML Model Parameters Via PUCCH/PUSCH, 423), to the network entity (Fig 4, Model Trainer WTRU), a message based on an output of the selected machine learning model (see: [0206] … In response, at 423, the WTRU may transfer the ML model parameters to the gNB).

Regarding Claim 8, Lutchoomun discloses: An apparatus (Fig 1B) for wireless communications, the apparatus (Fig 1B) comprising: at least one memory (Fig 1B, Memory 130); and at least one processor (Fig 1B, Processor 118) coupled to the at least one memory, the at least one processor (Fig 1B, Processor 118) being configured to: receive an indication of a first set of operations (Fig 4, CSI-RS for online Training) supported by one or more machine learning models (Fig 3 Receiver ML) of a network entity (Fig 4, Model Trainer WTRU); select a machine learning model (Fig 5, CSI-RS for Model Selection 515; see: [0211] … the WTRU selects a model whose data distribution most closely matches its environmental parameters) for performing a first operation of the first set of operations (Fig 4, CSI-RS for online Training); detect a change in at least one of (see: [0150] … to check for AIML model suitability; configuration, activation and/or deactivation of a cell; change in channel conditions): the first operation; or a parameter associated with the first operation (see [0201] … WTRU determines that no available trained model parameters match its requirements); and transmit an indication (Fig 4, Indicate that no Current Model Fits 417) to change the first operation based on the detected change (see: [0201] … the WTRU may send an indication 417 to the gNB that no current ML model fits its requirements).

Lutchoomun does not elaborate on a network "change". However, Ottrsten discloses: detect a change in at least one of (Fig 3, 305 Update the machine learning model; see: [0058] … Refined and reinforcement learning may be used to continuously update the one or more machine learning models based on new inputs. This provides flexibility if something in the network environment changes).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate Lutchoomun's method for machine learning channel state reporting with Ottrsten's method for improving network performance by machine learning, with the motivation being to improve the performance of a wireless communications network (Ottrsten, [0001]).

Regarding Claim 9, Lutchoomun discloses: wherein the indication of the first set of operations (Fig 4, CSI-RS for online Training) comprises a set of identifiers, the set of identifiers comprising a respective identifier for each machine learning model (see: [0099] … The models may have model IDs that uniquely identify them, assigned either by the WTRU that trained them or by the network) of the one or more machine learning models (Fig 3 Receiver ML) of the network entity (Fig 4, Model Trainer WTRU).
Regarding Claim 10, Lutchoomun discloses: wherein the at least one processor (Fig 1B, Processor 118) is further configured to determine a first set of parameters (Fig 4, Data Distribution Statistics 413) associated with the one or more machine learning models (Fig 3 Receiver ML) identified by the set of identifiers (see: [0099] … The models may have model IDs that uniquely identify them, assigned either by the WTRU that trained them or by the network), the first set of parameters (Fig 4, Data Distribution Statistics 413) indicating parameters (see [0109] … The first part may carry a WTRU-specific identifier and a second part may be a model-specific identifier) supported by the one or more machine learning models (Fig 3 Receiver ML) of the network entity (Fig 4, Model Trainer WTRU).

Regarding Claim 11, Lutchoomun discloses: wherein the set of identifiers (see: [0209] … a list of (one of more) models WTRU is configured with (i.e., model IDs)) indicates an operation applicability (Fig 5A, Assistance info, e.g. Capabilities, 511) for operations of the first set of operations (Fig 4, CSI-RS for online Training; Fig 5A, CSI-RS for Model Selection) and parameters associated with operations of the first set of operations (Fig 4, CSI-RS for online Training).

Regarding Claim 12, Lutchoomun discloses: wherein the set of identifiers (see: [0209] … a list of (one of more) models WTRU is configured with (i.e., model IDs)) indicates configurations for using operations of the first set of operations (Fig 4, CSI-RS for online Training).

Regarding Claim 13, Lutchoomun discloses: wherein the at least one processor (Fig 1B, Processor 118) is configured to receive the indication of the first set of operations (Fig 4, CSI-RS for online Training) in a unicast, multicast, or broadcast (see: [0099] In an example, the gNB may send a list of one or more trained models to a WTRU (e.g., through broadcast in a System Information Block (SIB))) from the network entity (Fig 4, Model Trainer WTRU).
Regarding Claim 14, Lutchoomun discloses: wherein the at least one processor (Fig 1B, Processor 118) is further configured to: receive an assistance message (Fig 4, ML Model Transfer Request in DCI, 421; Examiner's Note: the DCI may function as an assistance message); and select a second operation (Fig 4, ML Model Transfer Request, 421; Examiner's Note: the ML Model Transfer Request functions as a second operation) from the first set of operations (Fig 4, CSI-RS for online Training) based on the assistance message (Fig 4, ML Model Transfer Request in DCI, 421; Examiner's Note: the DCI may function as an assistance message).

Regarding Claim 15, Lutchoomun discloses: wherein the at least one processor (Fig 1B, Processor 118) is further configured to transmit (Fig 4, Indication of Selection as Basis, 425), to the network entity (Fig 4, Model Trainer WTRU), a message based on an output of the selected machine learning model (see: [0207] Finally, the WTRU may receive, at 425, an indication from the gNB that it has been selected as the basis for transfer learning (designated anchor WTRU)).

Regarding Claims 16-21, these claims disclose similar features as those of Claims 1, 1, 2, 5-6 and 1, and are rejected accordingly. Further, Claims 16-21 disclose the same operations as Claims 1-7, but performed by a transmitter.

Regarding Claims 22-26, these claims disclose similar features as those of Claims 1, 4, 2, and 5-6, and are rejected accordingly. Further, Claims 22-26 disclose the same operations as Claims 1-7, but performed by a transmitter.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jung-Jen Liu, whose telephone number is 571-270-7643. The examiner can normally be reached Monday to Friday, 9:00 AM to 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kwang B. Yao, can be reached at 571-272-3182.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JUNG LIU/
Primary Examiner, Art Unit 2473

Prosecution Timeline

Jan 09, 2024: Application Filed
Jan 12, 2026: Non-Final Rejection (§103)
Mar 30, 2026: Interview Requested
Apr 02, 2026: Applicant Interview (Telephonic)
Apr 02, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by the same examiner involving similar technology

Patent 12604207: APPARATUSES AND METHODS FOR FACILITATING AN INTEGRATED NETWORK AND SYSTEM PLANNING TOOL (2y 5m to grant; granted Apr 14, 2026)
Patent 12604332: TECHNIQUES FOR SENDING ASSISTANCE INFORMATION FOR CANCELLING INTERFERENCE IN WIRELESS COMMUNICATIONS (2y 5m to grant; granted Apr 14, 2026)
Patent 12603695: Reconfigurable Wireless Radio System for Providing Highly Sensitive Nationwide Radar Functionality Using a Limited Number of Frequencies and Adaptable Hardware (2y 5m to grant; granted Apr 14, 2026)
Patent 12593241: COMPUTERIZED SYSTEMS AND METHODS FOR NON-DISRUPTIVE CAC ON A NETWORK VIA MLO FUNCTIONALITY (2y 5m to grant; granted Mar 31, 2026)
Patent 12593375: METHOD OF DETECTING STATUS OF USER OUTSIDE VEHICLE, SYSTEM, STORAGE MEDIUM AND VEHICLE THEREOF (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 94% (+4.7%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 1198 resolved cases by this examiner. Grant probability is derived from the career allow rate.
