Prosecution Insights
Last updated: April 19, 2026
Application No. 18/572,152

THE COMBINED ML STRUCTURE PARAMETERS CONFIGURATION

Status: Non-Final OA (§103)
Filed: Dec 19, 2023
Examiner: LO, DIANE LEE
Art Unit: 2466
Tech Center: 2400 (Computer Networks)
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 3m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 90% (842 granted / 941 resolved; +31.5% vs TC avg, above average)
Interview Lift: +8.5% (moderate lift, measured across resolved cases with an interview)
Typical Timeline: 2y 3m average prosecution
Currently Pending: 25 applications
Career History: 966 total applications across all art units

Statute-Specific Performance

§101: 3.7% (-36.3% vs TC avg)
§103: 50.4% (+10.4% vs TC avg)
§102: 32.8% (-7.2% vs TC avg)
§112: 3.0% (-37.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 941 resolved cases.
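The statute-specific deltas above can be sanity-checked by backing out the Tech Center baseline each one implies. A minimal Python sketch, using only the figures shown in this report (the helper name `implied_tc_average` is ours, not the report's):

```python
# Back out the implied Tech Center average from each statute's figures,
# assuming the report computes: delta = examiner rate - TC average.
# All statute labels and percentages are taken from the table above.

STATUTE_STATS = {  # statute: (examiner rate %, delta vs TC avg, in points)
    "101": (3.7, -36.3),
    "103": (50.4, +10.4),
    "102": (32.8, -7.2),
    "112": (3.0, -37.0),
}

def implied_tc_average(rate: float, delta: float) -> float:
    """TC average implied by delta = rate - TC average (rounded to 0.1)."""
    return round(rate - delta, 1)

for statute, (rate, delta) in STATUTE_STATS.items():
    print(f"§{statute}: examiner {rate}% vs implied TC avg "
          f"{implied_tc_average(rate, delta)}%")
```

Each row implies the same baseline, which is consistent with the deltas all being measured against a single Tech Center average line.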

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This is in response to Application 18/572,152, filed on 12/19/2023, in which claims 1-30 are presented for examination.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9-19, and 21-30 are rejected under 35 U.S.C. 103 as being unpatentable over Bao et al. (US 2021/0185515 A1) in view of Challita et al. (WO 2021/089568 A1).

1. Regarding claims 1 and 27, Bao teaches a method and an apparatus for wireless communication at a user equipment (UE) (Fig. 1, 115), comprising: a memory; and at least one processor coupled to the memory, the memory and the at least one processor configured to: receive a first configuration for at least one first machine learning (ML) block, the at least one first ML block configured with at least one first parameter for a first procedure of the at least one first ML block (Paragraph [0017], configuration information for the first neural network block); and receive a second configuration for at least one second ML block, the at least one second ML block configured with at least one second parameter for a second procedure of the at least one second ML block, the at least one second ML block dedicated to a task included in a plurality of tasks associated with the at least one first ML block (Paragraphs [0235]-[0239], second neural network block configured to perform channel estimation for one or more base signals, channel state information compression, or a combination thereof).

Bao does not explicitly disclose activating an ML model based on an association of the at least one second ML block configured with the at least one second parameter with the at least one first ML block configured with the at least one first parameter. Challita teaches this limitation (Figure 3a; page 27; page 35, lines 31-36, combination of modules). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to activate an ML model based on such an association, as taught by Challita, in the system of Bao, in order to provide a fully end-to-end machine-learning-based air interface (see abstract of Challita).

2. Regarding claims 14 and 29, Bao teaches a method and an apparatus for wireless communication at a base station, comprising: a memory; and at least one processor coupled to the memory, the memory and the at least one processor configured to: receive an indication of a user equipment (UE) capability for associating at least one second machine learning (ML) block with at least one first ML block (Bao, Paragraphs [0081], [0082], [0122], capability information); transmit, based on the UE capability, a first configuration for the at least one first ML block, the at least one first ML block configured with at least one first parameter for a first procedure of the at least one first ML block (Bao, Paragraphs [0081], [0082], [0122], capability information); and transmit, based on the UE capability, a second configuration for at least one second ML block, the at least one second ML block configured with at least one second parameter for a second procedure of the at least one second ML block (Bao, Paragraphs [0081], [0082], [0122], capability information).

Bao does not explicitly disclose the at least one second ML block dedicated to a task included in a plurality of tasks associated with the at least one first ML block. Challita teaches this limitation (Figure 3a; page 27; page 35, lines 31-36, combination of modules). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to provide the at least one second ML block dedicated to a task included in a plurality of tasks associated with the at least one first ML block, as taught by Challita, in the system of Bao, in order to provide a fully end-to-end machine-learning-based air interface (see abstract of Challita).

3. Regarding claims 2, 15, 28, and 30, Bao in view of Challita teaches wherein the at least one first ML block corresponds to a backbone block and the at least one second ML block corresponds to a dedicated block (Bao, Paragraph [0017], configuration information for the first neural network block; Paragraphs [0235]-[0239], second neural network block configured to perform channel estimation for one or more base signals, channel state information compression, or a combination thereof; naming).

4. Regarding claims 3 and 16, Bao in view of Challita teaches wherein the at least one first parameter corresponds to one or more of a backbone block identifier (ID), a timer, an input format, or a bandwidth part (BWP) ID (Bao, Paragraph [0185], BWP; [0190], timer; [0269], indication or identifier of a selected neural network block).

5. Regarding claims 4 and 17, Bao in view of Challita teaches wherein the at least one second parameter corresponds to one or more of a dedicated block identifier (ID), a timer, a backbone block ID, a task ID, an output format, a dedicated block type, a condition ID, a performance level granularity, or an index to the at least one first parameter (Bao, Paragraphs [0081], [0082]; [0190], timer; [0269], indication or identifier of a selected neural network block).

6. Regarding claims 5 and 18, Bao in view of Challita teaches wherein the memory and the at least one processor are further configured to associate, based on the at least one second parameter, the at least one second ML block with the at least one first ML block configured with the at least one first parameter (Bao, Paragraphs [0081] and [0082]; Challita, Figure 3a, page 27 and page 35, lines 31-36, combination of modules).

7. Regarding claim 6, Bao in view of Challita teaches an apparatus further comprising an antenna coupled to the at least one processor, wherein the memory and the at least one processor are further configured to report a UE capability for associating the at least one second ML block with the at least one first ML block (Bao, Paragraphs [0081], [0082], [0122], antennas; Challita, Figure 3a, pages 27-28 and page 35, lines 31-36, combination of modules).

8. Regarding claims 7 and 19, Bao in view of Challita teaches wherein the UE capability is indicative of at least one of a first maximum number of first ML blocks, a second maximum number of second ML blocks per bandwidth part (BWP), a third maximum number of second ML blocks per slot, or a fourth maximum number of simultaneously activated ML models (Bao, Paragraphs [0081], [0082], [0122], antennas; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules).

9. Regarding claim 9, Bao in view of Challita teaches wherein the ML model is included in a plurality of ML models configured to the UE based on at least one of an ML model complexity or a performance level of the UE (Bao, Paragraphs [0081], [0082], [0122], capability information; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules based on capabilities).

10. Regarding claim 10, Bao in view of Challita teaches wherein the memory and the at least one processor are further configured to switch from the ML model to a different ML model of the plurality of ML models configured to the UE based on at least one of a model switching indication, a model switching index, a predefined protocol, the ML model complexity, or the performance level of the UE (Bao, Paragraphs [0081], [0082], [0122], capability information; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules based on capabilities).

11. Regarding claim 11, Bao in view of Challita teaches wherein the plurality of ML models is configured to the UE based on at least one of one or more tasks of the UE or one or more conditions of the UE (Bao, Paragraphs [0081], [0082], [0122], capability information; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules based on capabilities).

12. Regarding claim 12, Bao in view of Challita teaches wherein the at least one first ML block and the at least one second ML block each include one or more layers, the one or more layers including at least one of a convolution layer, a fully connected (FC) layer, a pooling layer, or an activation layer (Bao, Paragraph [0178], activation indication of one or more layers; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules based on capabilities).

13. Regarding claim 13, Bao in view of Challita teaches wherein the association of the at least one second ML block with the at least one first ML block corresponds to one of a plurality of association combinations between the at least one second ML block and the at least one first ML block (Bao, Paragraphs [0081], [0082], [0122], capability information; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules based on capabilities).

14. Regarding claim 21, Bao in view of Challita teaches wherein an ML model is activated based on the association of the at least one second ML block with the at least one first ML block (Challita, Figure 3a, page 27 and page 35, lines 31-36, combination of modules).

15. Regarding claim 22, Bao in view of Challita teaches wherein the ML model is included in a plurality of ML models configured to the UE based on at least one of an ML model complexity or a performance level of the UE (Bao, Paragraphs [0081], [0082], [0122], capability information; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules based on capabilities).

16. Regarding claim 23, Bao in view of Challita teaches wherein the memory and the at least one processor are further configured to switch from the ML model to a different ML model of the plurality of ML models configured to the UE based on at least one of a model switching indication, a model switching index, a predefined protocol, the ML model complexity, or the performance level of the UE (Bao, Paragraphs [0081], [0082], [0122], capability information; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules based on capabilities).

17. Regarding claim 24, Bao in view of Challita teaches wherein the plurality of ML models is configured to the UE in association with at least one of one or more tasks of the UE or one or more conditions of the UE (Bao, Paragraphs [0081], [0082], [0122], capability information; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules based on capabilities).

18. Regarding claim 25, Bao in view of Challita teaches wherein the at least one first ML block and the at least one second ML block each include one or more layers, the one or more layers including at least one of a convolution layer, a fully connected (FC) layer, a pooling layer, or an activation layer (Bao, Paragraph [0178], activation indication of one or more layers; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules based on capabilities).

19. Regarding claim 26, Bao in view of Challita teaches wherein the association of the at least one second ML block with the at least one first ML block corresponds to one of a plurality of association combinations between the at least one second ML block and the at least one first ML block (Bao, Paragraphs [0081], [0082], [0122], capability information; Challita, Figure 3a, page 27, page 28 lines 4-9, and page 35 lines 31-36, combination of modules based on capabilities).

Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bao et al. (US 2021/0185515 A1) in view of Challita et al. (WO 2021/089568 A1), further in view of Larish et al. (USPN 10,039,016 B1).

20. Regarding claims 8 and 20, Bao in view of Challita does not explicitly disclose wherein the association of the at least one second ML block with the at least one first ML block is based on at least one of a predefined protocol, a first indication of the at least one first ML block, a second indication of the at least one second ML block, a first index to the at least one first ML block, or a second index to the at least one second ML block. Larish teaches this limitation (col. 5, line 63 to col. 6, line 29, identifying protocols for exchanging data for machine learning operations). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to provide this association, as taught by Larish, in the system of Bao in view of Challita, for RF optimization via machine learning (see abstract of Larish).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Alakuijala et al. (US 2024/0152809 A1); Das et al. (USPN 10,524,145 B1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIANE LEE LO, whose telephone number is (571) 270-1952. The examiner can normally be reached Monday - Friday, 8 am - 5 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Faruk Hamza, can be reached at (571) 272-7969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DIANE L LO/
Primary Examiner, Art Unit 2466

Prosecution Timeline

Dec 19, 2023
Application Filed
Jan 30, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by the same examiner involving similar technology

Patent 12598517: COMMUNICATION METHOD AND APPARATUS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593343: APPARATUS AND METHOD FOR PROCESSING SIDELINK RESOURCE REQUIRED FOR SIDELINK DRX OPERATION IN WIRELESS COMMUNICATION SYSTEM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12592797: WAKE-UP SIGNAL WAVEFORM DESIGN (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587872: USING ORCHESTRATORS FOR FALSE POSITIVE DETECTION AND ROOT CAUSE ANALYSIS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580710: METHOD AND APPARATUS FOR RESOURCE ALLOCATION (granted Mar 17, 2026; 2y 5m to grant)
Based on the 5 most recent grants. Study what changed to get these applications past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 98% (+8.5%)
Median Time to Grant: 2y 3m
PTA Risk: Low

Based on 941 resolved cases by this examiner. Grant probability is derived from the career allow rate.
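The headline projections can be reproduced from the raw career counts reported above. A minimal Python sketch, assuming (as the footnote suggests) that the grant probability is simply the career allow rate and that the interview figure adds the stated lift in percentage points; the rounding convention is our assumption:

```python
# Figures taken from the examiner statistics above.
GRANTED = 842         # granted applications
RESOLVED = 941        # resolved applications
INTERVIEW_LIFT = 8.5  # percentage points, per the "Interview Lift" stat

# Career allow rate, in percent (~89.5%; the report displays this as 90%).
allow_rate = 100 * GRANTED / RESOLVED

# Interview-adjusted probability, capped at 100% (~98%).
with_interview = min(allow_rate + INTERVIEW_LIFT, 100.0)

print(f"Grant probability: {allow_rate:.1f}%")
print(f"With interview:    {with_interview:.1f}%")
```

Note that 842/941 is 89.5% to one decimal place, so the dashboard's 90% headline figure appears to be rounded to the nearest whole percent; adding the 8.5-point interview lift yields the 98% figure directly.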
