Prosecution Insights
Last updated: April 19, 2026
Application No. 17/214,583

ARCHITECTURE FOR MACHINE LEARNING (ML) ASSISTED COMMUNICATIONS NETWORKS

Non-Final OA: §102, §103
Filed: Mar 26, 2021
Examiner: ALGHAZZY, SHAMCY
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Qualcomm Incorporated
OA Round: 3 (Non-Final)
Grant Probability: 48% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 11m
With Interview: 49%

Examiner Intelligence

Career Allow Rate: 48% (grants 48% of resolved cases; 30 granted / 62 resolved; -6.6% vs TC avg)
Interview Lift: +0.7% (minimal, roughly +1%, for resolved cases with interview)
Typical Timeline: 3y 11m avg prosecution; 25 currently pending
Career History: 87 total applications across all art units

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 39.3% (-0.7% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 62 resolved cases
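The per-statute figures above are internally consistent. A minimal sketch, assuming each "vs TC avg" delta is simply the examiner's statute-specific rate minus the Tech Center average (the dictionary names are illustrative, not from the source):

```python
# Consistency check for the statute-specific performance table above.
# Assumption (mine, not stated by the source): each "vs TC avg" delta is
# (examiner's statute-specific rate) minus (Tech Center average).
examiner_rate = {"101": 34.9, "103": 39.3, "102": 11.1, "112": 10.0}  # percent
delta_vs_tc = {"101": -5.1, "103": -0.7, "102": -28.9, "112": -30.0}  # percent

# Implied Tech Center average (the "black line" estimate) per statute:
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute implies the same TC average: 40.0%
```

Under that assumption, all four deltas resolve to a single 40.0% Tech Center average, which matches a single "black line" estimate being drawn across the chart.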

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submissions filed on 12/04/2025 (amendment) and 01/06/2026 (RCE) have been entered.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 12/04/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Examiner's Note

The Examiner respectfully requests that the Applicant, in preparing responses, fully consider the entirety of the reference(s) as potentially teaching all or part of the claimed invention. It is noted that REFERENCES ARE RELEVANT AS PRIOR ART FOR ALL THEY CONTAIN. "The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain." In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). A reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including non-preferred embodiments (see MPEP 2123). The Examiner has cited particular locations in the reference(s) as applied to the claim(s) below for the convenience of the Applicant.
Although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claim(s), typically other passages and figures will apply as well.

Response to Arguments

The amendments dated 12/04/2025 have been entered and considered by the examiner. The amendments to overcome the rejections of claims 1-14 have been fully considered but are moot in light of the new rejection below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-8, 10-12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Luo (US20190065951A1) in view of ALABBASI (US20230010095A1).
Regarding claim 1, Luo teaches: a second component within the application layer and configured to control data flow between the apparatus and the different nodes ([0009] Cooperative learning neural network may employ machine learning techniques to make decisions based on information gathered from multiple sources that communicate with one another wirelessly. The systems described herein may leverage advanced wireless communication protocols to communicate directly between devices, in machine-to-machine (M2M) fashion, or via a wireless communication network using machine-type communication (MTC). Such advanced wireless communication protocols may allow for high bandwidth or low latency communication that will allow physically separate or remote sensors to gather information and train a neural network. The examiner notes that Luo teaches using advanced wireless communication protocols to control nodes [Fig. 1], their machine learning models, and the traffic between them).

However, Luo is not relied upon to explicitly teach a first component within an application layer of a communication protocol stack configured to control a plurality of machine learning modules in different nodes of a wireless network and configured to control at least one machine learning module in the apparatus.

On the other hand, ALABBASI teaches a first component within an application layer of a communication protocol stack configured to control a plurality of machine learning modules in different nodes of a wireless network and configured to control at least one machine learning module in the apparatus ([0133] enabling an application layer for sending the aggregated machine learning model to the plurality of client computing devices). The examiner notes that Luo and ALABBASI are both directed to machine learning and both are reasonably analogous to each other.
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Luo's control module to incorporate a first component within an application layer of a communication protocol stack configured to control a plurality of machine learning modules in different nodes of a wireless network and configured to control at least one machine learning module in the apparatus, as taught by ALABBASI [0133], to allow exchanging models and/or outputs with the plurality of client computing devices [0134].

Regarding claim 2, Luo teaches: the data flow is between the different nodes within the application layer ([0011] FIG. 1 is a schematic illustration of a system arranged in accordance with examples described herein. The system 100 includes vehicle 102, vehicle 104, and other computing system(s) 116. The vehicle 102 may include sensor(s) 108, transceiver 110, controller 132, and cooperative learning neural network 126, or some combination of such devices. Some or all of sensor(s) 108, transceiver 110, controller 132, and cooperative learning neural network 126 may be components of a subsystem of vehicle 102. The cooperative learning neural network 126 may be implemented using processing unit(s) 112 and memory 114, which may store weights 106. The vehicle 104 may include sensor(s) 118, transceiver 120, controller 134, and cooperative learning neural network 128, or some combination of such devices. Some or all of sensor(s) 118, transceiver 120, controller 134, and cooperative learning neural network 128 may be components of a subsystem of vehicle 104. The cooperative learning neural network 128 may be implemented using processing unit(s) 122 and memory 124, which may store weights 130. While described as vehicles in FIG. 1, vehicles 102 and/or 104 may be generally implemented by one or more wireless communication devices).
[Image: media_image1.png, 776x1124, greyscale]

Regarding claim 3, Luo teaches: a communications component configured to cooperate with a software application in a different node to control at least some of the plurality of machine learning modules in the different node ([0025] Other examples of other computing system(s) 116 include computing resources located remotely from one or more vehicles but in electronic communication with the vehicle (e.g., one or more computing resources accessible over a network, from a cloud computing provider, or located in the environment of one or more vehicles). The examiner notes that Luo teaches a node that can request predictions from another node based on the observation made by the cooperative machine learning model running on that node).

Regarding claim 5, Luo teaches: a training component configured to train machine learning modules for the different nodes ([0028] In block 204, which may follow block 202, a cooperative learning neural network may be trained with data from these related vehicles such as Vehicle 102 and/or Vehicle 104 of FIG. 1).

Regarding claim 6, Luo teaches: an executing component configured to execute machine learning modules of the different nodes ([0087] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. The examiner notes that Luo teaches the use of different hardware components to execute programs such as cooperative machine learning).
Regarding claim 7, Luo teaches: the first component is configured to control based on an output of at least one of the plurality of machine learning modules ([0027] As another example, the controller 132 may provide control signals to change direction and/or speed to avoid a rear-end collision responsive to cooperative learning neural network 126 identifying a precursor to a rear-end collision).

Regarding claim 8, Luo teaches: an output of at least one of the machine learning modules controls another module ([0035] For example, real-time may refer to occurring with an amount of time where an action may be taken to avoid a crash condition described herein. By allowing for higher bandwidth communication of sensor data, 5G communication systems described herein may facilitate cooperative learning neural networks which may make use of sensor data communicated over the 5G communication system. The examiner notes that Luo teaches multiple machine learning models cooperating to resolve an issue. Said cooperation is achieved by one model feeding data to another).

Regarding claim 10, Luo teaches: an updating component configured to update parameters and/or algorithms for at least one of the plurality of machine learning modules ([0033] Accordingly, supervised training (e.g., learning) may refer to all three kinds of variables (e.g., input signals, targeted outputs, and initial weights) being used to update the weights).

Regarding claim 11, Luo teaches: the updating component is configured to update in response to a user equipment (UE) moving outside a particular region ([0037] For example, vehicles who may have data relevant to a condition and/or image to be identified may be used in block 204. Considering the case of a cooperative learning neural network which may identify a precursor to a rear-end collision, for example, data from one or more vehicles behind a vehicle receiving the data may be used to train in block 204. A location of various vehicles may be provided to a vehicle and the location of the vehicle may be used to determine whether other data from that vehicle will be used in training in block 204. The examiner interprets a vehicle moving out of a safe position and into a position that could cause a rear-end collision to be the claimed moving outside a particular region).

Regarding claim 12, Luo teaches: different machine learning modules are associated with different regions ([0017] Examples of vehicles described herein may implement one or more cooperative learning neural networks, such as cooperative learning neural network 126 and cooperative learning neural network 128 of FIG. 1. The examiner interprets each separate vehicle location to be a region of its own).

Regarding claim 14, Luo teaches: the different nodes comprise at least one of a base station, a user equipment (UE), a chip of the base station, a chip of the UE, a central controller, or a server ([0042] FIG. 3 is a schematic illustration of an example system arranged in accordance with examples described herein. The system 300 includes satellite 302, monitoring infrastructure 304, monitoring infrastructure 306, computing system 308, vehicle 310, vehicle 312, vehicle 314, vehicle 316, vehicle 318, and vehicle 320. Other, fewer, and/or different components may be used in other examples. Any of the vehicles 310-320 may be implemented by and/or used to implement vehicles described herein, such as vehicle 102 and vehicle 104 of FIG. 1. Any of the vehicles 310-320 may implement cooperative learning neural networks as described herein, such as with reference to FIG. 2).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Luo (US20190065951A1), in view of ALABBASI (US20230010095A1), in view of Ikeuchi (US20140067882A1).

Regarding claim 4, Luo teaches: the apparatus of claim 1.
However, Luo is not relied upon to explicitly teach further comprising a third component within the application layer and configured to control data flow between different layers of the communication protocol stack of the apparatus.

On the other hand, Ikeuchi teaches a third component within the application layer and configured to control data flow between different layers of the communication protocol stack of the apparatus ([0061] Also, the service communication unit 511 analyzes a response from a service communication unit 561, and transmits a request result to the request processing unit 512. Assume that the service communication unit 511 controls communication processes in different application layers for respective services). The examiner notes that Luo and Ikeuchi are both directed to data processing and communication and both are reasonably analogous to each other.

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Luo's control modules to incorporate a third component within the application layer configured to control data flow between different layers of the communication protocol stack of the apparatus, as taught by Ikeuchi [0061], in order to process requests, analyze responses, and transmit request results [0061].

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Luo (US20190065951A1), in view of ALABBASI (US20230010095A1), in view of Koshy (US20210051465A1).

Regarding claim 9, Luo teaches the apparatus of claim 8. However, Luo is not relied upon to explicitly teach the other module comprises a radio frequency (RF) module for beam selection.
On the other hand, Koshy teaches the other module comprises a radio frequency (RF) module for beam selection ([0064] Again, the extent to which the transmission power of the antenna 350 and the beam steering of the RF signal of the antenna 350 are changed is based on the data received by the CPU SoC 320 and evaluated by the sensor fusion machine learning service 315 communicatively coupled to the CPU SoC 320). The examiner notes that Luo and Koshy are both directed to machine learning and both are reasonably analogous to each other.

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Luo's control modules to incorporate the other module comprising a radio frequency (RF) module for beam selection, as taught by Koshy [0064], to implement an advanced configuration and power interface (ACPI) that changes the transmission power of the antenna via interaction with a wireless application programming interface (API) [0064].

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Luo (US20190065951A1), in view of ALABBASI (US20230010095A1), in view of Yan (US20190303980A1).

Regarding claim 13, Luo teaches the apparatus of claim 10. However, Luo is not relied upon to explicitly teach the updating component is configured to update in response to a time duration expiring, in which different machine learning modules are associated with different time durations.

On the other hand, Yan teaches the updating component is configured to update in response to a time duration expiring, in which different machine learning modules are associated with different time durations ([0110] For example, in one embodiment, the ensemble performance modeling system 102 can determine that the parent performance learning model needs to be re-trained every time a threshold amount of time expires (e.g., every 6 hours)).
The examiner notes that Yan teaches updating an ensemble model when a time threshold has expired. The examiner further notes that Luo and Yan are both directed to machine learning and both are reasonably analogous to each other.

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Luo's model training to incorporate the updating component being configured to update in response to a time duration expiring, in which different machine learning modules are associated with different time durations, as taught by Yan [0110], so that the ensemble performance modeling system can determine whether the parent performance learning model needs to be re-trained [0110].

Conclusion

The following references have been determined to be related to the application, but were not applied in any specific rejection. They are nonetheless listed below for reference.

MALLADI (US10379842): teaches a method for enabling intelligence at the edge.
BEDEKAR (US20200106536A1): teaches a method for predicting achievable channel quality for a specific UE for multiple SCell(s).
Smith (US20190373472A1): teaches a method for configuring, monitoring, updating, and validating Internet of Things (IoT) software code and configuration using blockchain smart contract technology.
Hameleers (US20020062190A1): teaches a traffic management system including a layered structure of management layers.
Vaughn (US20190196471A1): teaches a method to enhance AI in autonomous vehicles by introducing a remote viewer's (e.g., a human's) reaction to a possibly complex environment surrounding a vehicle that includes a potential threat to the vehicle.
Abhishek (US11727271): teaches a vehicle telematics system that obtains vehicle bus data for a time period, determines identification information regarding a vehicle platform using a machine learning process on the vehicle bus data, and obtains a set of communication data for communicating with at least one vehicle module on the vehicle bus based on the identified vehicle platform.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAMCY ALGHAZZY, whose telephone number is (571) 272-8824. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, OMAR FERNANDEZ RIVAS, can be reached on (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAMCY ALGHAZZY/
Examiner, Art Unit 2128

/OMAR F FERNANDEZ RIVAS/
Supervisory Patent Examiner, Art Unit 2128

Prosecution Timeline

Mar 26, 2021
Application Filed
Jun 07, 2025
Non-Final Rejection — §102, §103
Aug 18, 2025
Examiner Interview Summary
Aug 18, 2025
Applicant Interview (Telephonic)
Sep 10, 2025
Response Filed
Sep 29, 2025
Final Rejection — §102, §103
Dec 04, 2025
Response after Non-Final Action
Jan 06, 2026
Request for Continued Examination
Jan 21, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596925
SINGLE-STAGE MODEL TRAINING FOR NEURAL ARCHITECTURE SEARCH
2y 5m to grant; granted Apr 07, 2026
Patent 12596922
ACCELERATING NEURAL NETWORKS IN HARDWARE USING INTERCONNECTED CROSSBARS
2y 5m to grant; granted Apr 07, 2026
Patent 12579408
ADAPTIVELY TRAINING OF NEURAL NETWORKS VIA AN INTELLIGENT LEARNING MANAGEMENT SYSTEM
2y 5m to grant; granted Mar 17, 2026
Patent 12572847
SYSTEMS AND METHODS FOR RESOURCE-AWARE MODEL RECALIBRATION
2y 5m to grant; granted Mar 10, 2026
Patent 12566966
TRAINING ADAPTABLE NEURAL NETWORKS BASED ON EVOLVABILITY SEARCH
2y 5m to grant; granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 48%
With Interview: 49% (+0.7%)
Median Time to Grant: 3y 11m
PTA Risk: High
Based on 62 resolved cases by this examiner. Grant probability derived from career allow rate.
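The headline projections can be reproduced from the career data cited in this note. A minimal sketch, under my assumption that the grant probability is simply the raw career allow rate (30/62) rounded to the nearest percent, with the interview figure adding the stated +0.7% lift:

```python
# Reproducing the headline projections from the examiner's career numbers.
# Assumption (mine): grant probability is the raw career allow rate rounded
# to the nearest percent, and the interview figure adds the +0.7% lift.
granted, resolved = 30, 62
grant_probability = 100 * granted / resolved   # 48.38...%
with_interview = grant_probability + 0.7       # 49.08...%

print(f"{grant_probability:.0f}%")  # prints 48%
print(f"{with_interview:.0f}%")     # prints 49%
```

Both rounded figures match the 48% and 49% shown above, which is consistent with the note that the grant probability is derived directly from the career allow rate.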
