Prosecution Insights
Last updated: April 19, 2026
Application No. 18/356,059

METHODS AND APPARATUSES FOR RADIO COMMUNICATION

Status: Non-Final OA (§102)
Filed: Jul 20, 2023
Examiner: JAIN, RAJ K
Art Unit: 2411
Tech Center: 2400 — Computer Networks
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 95%

Examiner Intelligence

Grants 88% — above average

Career Allow Rate: 88% (717 granted / 818 resolved; +29.7% vs TC avg)
Interview Lift: +7.6% (moderate lift, based on resolved cases with interview)
Typical Timeline: 3y 0m avg prosecution; 43 applications currently pending
Career History: 861 total applications across all art units

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 18.7% (-21.3% vs TC avg)
§112: 16.2% (-23.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 818 resolved cases.
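The "vs TC avg" deltas above are enough to recover the implied Tech Center baseline for each statute (baseline = examiner rate − delta). A quick check in Python, using only the values from the table, shows that all four statutes imply the same baseline, suggesting the deltas were computed against a single TC-wide reference rate:

```python
# Recover the implied Tech Center baseline for each statute:
# baseline = examiner rate - (delta "vs TC avg"), both in percent
rates = {
    "§101": (4.8, -35.2),
    "§103": (50.7, +10.7),
    "§102": (18.7, -21.3),
    "§112": (16.2, -23.8),
}

for statute, (examiner_rate, delta) in rates.items():
    baseline = round(examiner_rate - delta, 1)
    print(f"{statute}: implied TC baseline = {baseline}%")  # 40.0% in every case
```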

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 102(a)(1)

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of application for patent in the United States.

Claim(s) 1-22 is/are rejected under 35 U.S.C. 102(a)(1) as being unpatentable over Hu et al. (US 2021/0204300 A1), hereinafter "Hu".

Regarding claim(s) 1, 11, Hu discloses a method and apparatus for operating a first apparatus (See Fig. 1 with apparatuses 105a-c; say 105a is the first apparatus, which has a processor and memory for storage of computer instructions, See ¶ 119), comprising the following steps: collecting at least one first radio traffic data set that is associated with a utilization of at least one radio channel (See Fig. 1 for first apparatus 105a with a radio traffic data set within one channel via TDMA; See ¶ 53: to be able to train the machine learning model 124 to perform the function of the algorithms 126, data indicating actual situations experienced by gateways and the actual results of the algorithms 126 can be collected); training a machine-trainable function based on the collected at least one first radio traffic data set, to obtain a machine-trained function (See Fig. 1, machine learning model 124; See ¶ 5, 53: after collecting many sets of examples (e.g., algorithm inputs and outputs), that data can be used to train a machine learning model … train the machine learning model 124 to perform the function of the algorithms 126); collecting at least one second radio traffic data set, which is associated with the utilization of the at least one radio channel (See Fig. 1; 105b is the second apparatus with a second radio traffic data set, within the same frequency channel within one TDMA band), wherein the second traffic data set is different from the first data set (See ¶ 75: differing traffic patterns that users require based on the data being transmitted require collecting an additional traffic data set, which is different from the initial data set acquired first); determining, using the machine-trained function, at least one utilization scheme associated with the at least one radio channel based on the at least one second radio traffic data set (See Fig. 1, the second radio traffic data set can be 105b; See ¶ 18, 114: determining a number of terminals or a processor utilization … allocating bandwidth to the terminal based on the one or more outputs from the machine learning model is performed at least in part based on determining that the number of terminals or the processor utilization exceeds a threshold; for machine learning See ¶ 5, 53); allocating at least one radio resource on the at least one radio channel according to the determined utilization scheme (See Fig. 1: one radio resource (frequency band or channel) is the use of having multiple users within one TDMA frequency band (channel); See ¶ 7, 53-54: model 124 provides the allocation scheme to be utilized by the users 105a-c of Fig. 1 … key inputs to the algorithms 126 that are collected include (1) the terminals' backlog, (2) total available bandwidth, (3) the predefined priority weights, and (4) the scaling factors; the output of the algorithms 126 is the overall bandwidth allocation for each terminal); and transmitting data via the at least one allocated radio resource of the at least one radio channel (See Fig. 1: transfer of data between users 105a-c and gateway 105 is determined by allocation of bandwidth within one radio channel, See ¶ 30-32).

Regarding claim(s) 2, Hu discloses wherein the training is based on first traffic information of monitored transmissions via the at least one radio channel and based on state information associated with the monitored data transmissions (See ¶ 47: the bandwidth manager can monitor bandwidth usage for each TG across multiple IGMs, and determine whether or not the bandwidth usage remains within the parameters/limits of the subscription rate plan associated with each TG).

Regarding claim(s) 3, Hu discloses wherein the determining of the at least one utilization scheme is based at least on second traffic information, different from the first traffic information (See ¶ 53-54, 75: data collection for different priority levels and/or patterns would be two different sets of traffic data to be analyzed by the ML).
Regarding claim(s) 4, Hu discloses wherein the method further comprises: receiving at least one rectification indicator indicating at least one change associated with at least one of: a configuration of the training, a configuration of the collecting, a configuration of a function execution, and a configuration of a utilization scheme type; and applying the at least one indicated change (See ¶ 83: for each instance of the model 124, the main inputs will be those directly related to the allocation, such as backlog amounts for a particular terminal (e.g., backlog amounts for each of multiple different priority levels), available bandwidth (e.g., for the inroute as a whole), priority weights (e.g., indicating relative weighting among traffic for different priority levels), and scaling factors).

Regarding claim(s) 5, Hu discloses transmitting, in a configuration mode, at least one capability indicator (See ¶ 84, where different indicators are illustrated).

Regarding claim(s) 6, Hu discloses further comprising: receiving, as a response to the transmitted at least one capability indicator, a training configuration; wherein the training of the machine-trainable function is configured according to the received training configuration (See ¶ 84, 86: the model 124 has been trained to approximate the results of the allocation algorithms 126; the training uses actual results given by the algorithms 126 in previous situations as the examples used to teach the model 124 how to make allocation predictions; as a result, the trained model 124 can provide output that is very similar to the output of the algorithms 126).

Regarding claim(s) 7, Hu discloses further comprising: receiving, as a response to the transmitted at least one capability indicator, a collecting configuration; wherein the collecting is configured according to the received collecting configuration (See ¶ 75: the system can analyze and prune the data collected; for example, data sets with backlog indications of zero (e.g., no data to be transferred) can be pruned because the output of the algorithms 126 and the model 124 will be zero slots allocated to address backlog; an analysis of the data can also determine whether the set of collected examples adequately represents various traffic patterns, and if not, can be used to determine the types of additional data collection to perform to address additional traffic patterns).

Regarding claim(s) 8, Hu discloses further comprising: receiving, as a response to the transmitted at least one capability indicator, a function execution configuration; wherein the determining of the at least one utilization scheme is configured according to the received function execution configuration (See ¶ 5, 53: after collecting many sets of examples (e.g., algorithm inputs and outputs), that data can be used to train a machine learning model (e.g., a neural network, a classifier, a decision tree, etc.) to perform the same function as the algorithms).

Regarding claim(s) 9, Hu discloses further comprising: receiving, as a response to the transmitted at least one capability indicator, at least one utilization scheme type configuration; and wherein the training is conducted based on the at least one received utilization scheme configuration (See ¶ 53, 55: training can cause the model 124 to learn relationships between the inputs and outputs to the algorithms 126, to guide the model 124 to produce outputs that match or are close to outputs provided by the algorithms 126 in the same scenario; ultimately, the model 124 provides an allocation technique that can be run in multiple instances concurrently, allowing an IGM to calculate the bandwidth to allocate to multiple terminals in parallel).
Regarding claim(s) 10, Hu discloses further comprising: receiving the machine-trainable function or at least one parameter including a weight of the machine-trainable function, as a response to the transmitted at least one capability indicator (See ¶ 54: key inputs to the algorithms 126 that are collected include (1) the terminals' backlog, (2) total available bandwidth, (3) the predefined priority weights).

Regarding claim(s) 12, 22, Hu discloses a method for operating a second apparatus (See Fig. 1 with apparatuses 105a-c; say 105b is the second apparatus, which has a processor and memory for storage of computer instructions, See ¶ 119), comprising the following steps: collecting at least one first radio traffic data set that is associated with a utilization of at least one radio channel (See Fig. 1 for first apparatus 105a with a radio traffic data set within one channel via TDMA; See ¶ 53: to be able to train the machine learning model 124 to perform the function of the algorithms 126, data indicating actual situations experienced by gateways and the actual results of the algorithms 126 can be collected); training a machine-trainable function based on the collected at least one first radio traffic data set, to obtain a machine-trained function (See Fig. 1, machine learning model 124; See ¶ 5, 53: after collecting many sets of examples (e.g., algorithm inputs and outputs), that data can be used to train a machine learning model … train the machine learning model 124 to perform the function of the algorithms 126); collecting at least one second radio traffic data set, which is associated with the utilization of the at least one radio channel (See Fig. 1; 105b is the second apparatus with a second radio traffic data set, within the same frequency channel within one TDMA band), wherein the second traffic data set is different from the first data set (See ¶ 75: differing traffic patterns that users require based on the data being transmitted require collecting an additional traffic data set, which is different from the initial data set acquired first); determining, using the machine-trained function, at least one utilization scheme associated with the at least one radio channel based on the at least one second radio traffic data set (See Fig. 1, the second radio traffic data set can be 105b; See ¶ 18, 114: determining a number of terminals or a processor utilization … allocating bandwidth to the terminal based on the one or more outputs from the machine learning model is performed at least in part based on determining that the number of terminals or the processor utilization exceeds a threshold; for machine learning See ¶ 5, 53); and monitoring the at least one radio channel according to the determined utilization scheme for receipt of data (See ¶ 47, 55: a scaling factor for the service provider for the terminal, based on flow control (e.g., a measure that can be used to monitor overall usage of a service provider as a whole) … the bandwidth manager can monitor bandwidth usage for each TG across multiple IGMs, and determine whether or not the bandwidth usage remains within the parameters/limits of the subscription rate plan associated with each TG).
Regarding claim(s) 13, Hu discloses wherein the training is based on first traffic information of monitored transmissions via the at least one radio channel and based on state information associated with the monitored data transmissions (See ¶ 47: the bandwidth manager can monitor bandwidth usage for each TG across multiple IGMs, and determine whether or not the bandwidth usage remains within the parameters/limits of the subscription rate plan associated with each TG).

Regarding claim(s) 14, Hu discloses wherein the determining of the at least one utilization scheme is based at least on second traffic information, different from the first traffic information (See ¶ 53-54, 75: data collection for different priority levels and/or patterns would be two different sets of traffic data to be analyzed by the ML).

Regarding claim(s) 15, Hu discloses further comprising: determining, using the trained function, at least one quality indicator that characterizes a quality of the determined utilization scheme associated with the at least one radio channel based on at least one third radio traffic data set including at least third traffic information of monitored transmissions via the at least one radio channel (See Fig. 1, with multiple data sets based on users 105a-c); determining at least one rectification indicator when the quality indicator passes a quality threshold; and transmitting the at least one rectification indicator indicating at least one change associated with at least one of: a configuration of the training, a configuration of the collecting, a function execution configuration, and a utilization scheme type configuration (See ¶ 83: for each instance of the model 124, the main inputs will be those directly related to the allocation, such as backlog amounts for a particular terminal (e.g., backlog amounts for each of multiple different priority levels), available bandwidth (e.g., for the inroute as a whole), priority weights (e.g., indicating relative weighting among traffic for different priority levels), and scaling factors).

Regarding claim(s) 16, Hu discloses further comprising: receiving, as a response to the transmitted at least one capability indicator, a training configuration; wherein the training of the machine-trainable function is configured according to the received training configuration (See ¶ 84, 86: the model 124 has been trained to approximate the results of the allocation algorithms 126; the training uses actual results given by the algorithms 126 in previous situations as the examples used to teach the model 124 how to make allocation predictions; as a result, the trained model 124 can provide output that is very similar to the output of the algorithms 126).

Regarding claim(s) 17, Hu discloses further comprising: transmitting, as a response to the received at least one capability indicator, a training configuration; wherein the training of the machine-trainable function is configured according to the transmitted training configuration (See ¶ 75: the system can analyze and prune the data collected; for example, data sets with backlog indications of zero (e.g., no data to be transferred) can be pruned because the output of the algorithms 126 and the model 124 will be zero slots allocated to address backlog; an analysis of the data can also determine whether the set of collected examples adequately represents various traffic patterns, and if not, can be used to determine the types of additional data collection to perform to address additional traffic patterns).

Regarding claim(s) 18, Hu discloses further comprising: transmitting, as a response to the received at least one capability indicator, a collecting configuration; wherein the collecting is configured according to the transmitted collecting configuration (See ¶ 75, as quoted above for claim 17).

Regarding claim(s) 19, Hu discloses further comprising: transmitting, as a response to the received at least one capability indicator, a function execution configuration; wherein the determining of the at least one utilization scheme is configured according to the transmitted function execution configuration (See ¶ 5, 53: after collecting many sets of examples (e.g., algorithm inputs and outputs), that data can be used to train a machine learning model (e.g., a neural network, a classifier, a decision tree, etc.) to perform the same function as the algorithms).
Regarding claim(s) 20, Hu discloses further comprising: transmitting, as a response to the received at least one capability indicator, at least one utilization scheme type configuration; and wherein the training is conducted based on the at least one transmitted utilization scheme configuration (See ¶ 53, 55: training can cause the model 124 to learn relationships between the inputs and outputs to the algorithms 126, to guide the model 124 to produce outputs that match or are close to outputs provided by the algorithms 126 in the same scenario; ultimately, the model 124 provides an allocation technique that can be run in multiple instances concurrently, allowing an IGM to calculate the bandwidth to allocate to multiple terminals in parallel).

Regarding claim(s) 21, Hu discloses further comprising: receiving the machine-trainable function or at least one parameter including a weight of the machine-trainable function, as a response to the transmitted at least one capability indicator (See ¶ 54: key inputs to the algorithms 126 that are collected include (1) the terminals' backlog, (2) total available bandwidth, (3) the predefined priority weights).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Raj Jain, whose telephone number is (571) 272-3145. The examiner can normally be reached M-Th, 8-6. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Derrick Ferris, can be reached at 571-272-3123. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only.
Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/RAJ JAIN/
Primary Examiner, Art Unit 2411
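Separate from the legal merits, the technique the examiner maps against the claims (Hu's model 124 trained to imitate algorithms 126) is a standard surrogate-model pattern: run a reference allocation algorithm to collect input/output examples, then train a model on those examples so it can stand in for the algorithm. A minimal sketch of that pattern, with an entirely hypothetical allocation rule and a nearest-neighbor "model" standing in for the neural network / classifier / decision tree that Hu mentions (nothing here is taken from Hu's actual implementation):

```python
import random

def allocate(backlog, available_bw, weight):
    """Hypothetical reference allocation rule (stand-in for Hu's algorithms 126):
    grant the weighted share of available bandwidth, capped by the backlog."""
    return min(backlog, available_bw * weight)

# Step 1: collect input/output examples by running the reference algorithm.
random.seed(0)
examples = []
for _ in range(500):
    x = (random.uniform(0, 100),            # terminal backlog
         random.uniform(10, 50),            # total available bandwidth
         random.choice([0.25, 0.5, 1.0]))   # priority weight
    examples.append((x, allocate(*x)))

# Step 2: "train" a surrogate. A 1-nearest-neighbor lookup over the collected
# examples plays the role of the trained machine learning model here.
def surrogate(x):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], x))[1]

# Step 3: the surrogate now approximates the algorithm without re-running it.
query = (42.0, 30.0, 0.5)
print(surrogate(query), allocate(*query))  # surrogate output vs. reference output
```

The design point the examiner leans on for several dependent claims is step 1: the training data is nothing more than recorded situations plus the algorithm's actual results, which is why pruning or extending that collection (Hu's ¶ 75) directly reconfigures the training.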

Prosecution Timeline

Jul 20, 2023: Application Filed
Nov 13, 2025: Non-Final Rejection — §102
Feb 12, 2026: Response after Non-Final Action Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12568472: CHANNEL STATE INFORMATION REPORTING BASED ON SUB-RESOURCE POOLS FOR SIDELINK COMMUNICATIONS (2y 5m to grant; granted Mar 03, 2026)
Patent 12568514: WIRELESS COMMUNICATION MANAGEMENT APPARATUS, WIRELESS COMMUNICATION MANAGEMENT METHOD, AND WIRELESS COMMUNICATION MANAGEMENT PROGRAM (2y 5m to grant; granted Mar 03, 2026)
Patent 12568512: Method and Apparatus for Determining Sidelink Transmission Resource (2y 5m to grant; granted Mar 03, 2026)
Patent 12563561: SIGNALING ASPECTS OF APERIODIC CSI REPORTING TRIGGERED BY A DOWNLINK GRANT (2y 5m to grant; granted Feb 24, 2026)
Patent 12563578: METHOD AND DEVICE FOR RESERVING RESOURCES IN NR V2X (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 95% (+7.6%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 818 resolved cases by this examiner. Grant probability derived from career allow rate.
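The projections above are internally consistent: the 88% grant probability is the career allow rate (717 granted of 818 resolved), and the 95% interview figure is that rate plus the +7.6 percentage-point interview lift. A quick check:

```python
granted, resolved = 717, 818
allow_rate = granted / resolved * 100   # career allow rate, in percent
interview_lift = 7.6                    # percentage-point lift with interview

print(round(allow_rate))                   # 88 -> matches "Grant Probability"
print(round(allow_rate + interview_lift))  # 95 -> matches "With Interview"
```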
