Prosecution Insights
Last updated: April 19, 2026
Application No. 19/349,559

PHY Assistance Signaling - Adaptive Inference Times for AI/ML on the physical layer

Non-Final OA: §102, §103, §112
Filed: Oct 03, 2025
Examiner: VIANA DI PRISCO, GERMAN
Art Unit: 2642
Tech Center: 2600 (Communications)
Assignee: Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 2m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 66% (441 granted / 664 resolved; +4.4% vs TC avg), above average
Interview Lift: +24.0% (strong), based on resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 26 applications currently pending
Career History: 690 total applications across all art units

Statute-Specific Performance

§101: 3.1% (-36.9% vs TC avg)
§103: 55.0% (+15.0% vs TC avg)
§102: 26.9% (-13.1% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)
Tech Center averages are estimates; based on career data from 664 resolved cases.

Office Action

Rejections under §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

The restriction requirement as set forth in the Office action mailed on 12/9/2025 has been reconsidered, and the restriction requirement is hereby withdrawn. Claims 1-30 are pending.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3, 7, 10, 11, 17-19, 25 and 30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claims 3, 7, 10, 11, 17, 19, 25 and 30, the phrase "for example (e.g.)" renders the claims indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d).

Regarding claims 11 and 18, the phrase "like" renders the claims indefinite because the claims include elements not actually disclosed (those encompassed by "or the like"), thereby rendering the scope of the claims unascertainable. See MPEP § 2173.05(d).

Regarding claim 3, the phrase "such as" renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 11-13, 15, 17-19, 24, and 26-29 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Gundogan et al. (US 20230095981 A1, hereinafter Gundogan).

Consider claim 1, Gundogan discloses an apparatus of a wireless communication network, the wireless communication network using one or more Artificial Intelligence / Machine Learning, AI/ML, models for one or more use cases, wherein the apparatus is to determine an inference time for one or more of the AI/ML models to be used in one or more network entities of the wireless communication network (Subsequent to receiving the machine learning model and/or benchmarking data, at 3013, the UE 102 runs the benchmarking data on the machine learning model (e.g., executes the machine learning model) and generates machine learning model performance data and/or a benchmarking report… subsequent to generating all the outputs for the sample dataset, the UE 102 prepares a report of machine learning model performance data. The benchmarking report may include details with regard to model accuracy, model inference time, energy consumption of the machine learning model and/or the like, paragraph 51).
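The UE-side benchmarking flow quoted above from Gundogan's paragraph 51 (run the delivered model on sample data, then report model accuracy and inference time) can be sketched as follows. This is an editor's illustration only; `run_model`, the dataset shape, and the report fields are hypothetical stand-ins, not Gundogan's actual interface:

```python
import time

def benchmark_model(run_model, sample_dataset):
    """Run a delivered model over benchmarking data and build a
    performance report (accuracy and mean inference time), loosely
    mirroring the UE-side flow described in Gundogan paragraph 51."""
    correct = 0
    latencies = []
    for features, label in sample_dataset:
        start = time.perf_counter()
        prediction = run_model(features)              # execute the ML model
        latencies.append(time.perf_counter() - start)
        correct += (prediction == label)
    return {
        "accuracy": correct / len(sample_dataset),
        "inference_time_s": sum(latencies) / len(latencies),  # mean per sample
    }

# Toy usage: a trivial "model" that thresholds its scalar input.
report = benchmark_model(lambda x: x > 0.5, [(0.9, True), (0.2, False), (0.7, True)])
print(report["accuracy"])  # 1.0
```

In Gundogan's flow such a report would then be sent to the network (NWDAF/MDAS), which is where the inference-time figure becomes assistance information.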
Consider claim 11, and as applied to claim 1, Gundogan discloses wherein the apparatus comprises a network entity using the AI/ML model, e.g., - a user device, UE, or - a remote UE, or - a relay UE, or - a Radio Access Network, RAN, entity, like a gNB or Road Side Unit, RSU, or - a Core Network, CN, entity, like an Access and Mobility Function, AMF, or a Location Management Function, LMF, and/or the apparatus is separate from one or more network entities using the AI/ML model, e.g., the apparatus comprises a further network entity of the wireless communication network or an entity of a network different from the wireless communication network, like the Internet (UE 102, Fig. 3).

Consider claim 12, and as applied to claim 1 above, Gundogan discloses wherein the apparatus is to indicate that a certain AI/ML model is usable or not usable on a certain network entity and/or fallback to a default procedure if a determined inference time for the certain AI/ML model is equal to or less than a predefined or (pre-)configured processing time of one or more operations for the use case for which the certain AI/ML model is used (the NWDAF/MDAS 304 may decide, based at least in part on the benchmarking report, whether to provide a new machine learning model. By way of example, the NWDAF/MDAS 304 may determine that the machine learning model accuracy is better than the service requirements but the inference time duration is higher. Accordingly, in the above example, the NWDAF/MDAS 304 may provide a less complex model to reduce the inference time duration, paragraph 52).

Consider claim 13, and as applied to claim 1 above, Gundogan discloses wherein the apparatus is to indicate the inference time of a certain AI/ML model or AI/ML functionality to the network and/or network entity and/or a gNB (At 3015, the UE 102 then sends the report directly to the NWDAF/MDAS 304 or via the UPF, Fig. 3 and paragraph 51).
Consider claim 15, Gundogan discloses a user device, UE, of a wireless communication network, the wireless communication network using one or more Artificial Intelligence / Machine Learning, AI/ML, models for one or more use cases, wherein the UE is to use one or more of the AI/ML models, and wherein the UE is to signal to the wireless communication network an inference time the UE requires for executing the one or more of the AI/ML models (Subsequent to receiving the machine learning model and/or benchmarking data, at 3013, the UE 102 runs the benchmarking data on the machine learning model (e.g., executes the machine learning model) and generates machine learning model performance data and/or a benchmarking report…subsequent to generating all the outputs for the sample dataset, the UE 102 prepares a report of machine learning model performance data. The benchmarking report may include details with regard to model accuracy, model inference time, energy consumption of the machine learning model and/or the like, paragraph 51). 
Consider claim 17, and as applied to claim 15 above, Gundogan discloses wherein the UE is to signal the inference time - in response to a transfer of the one or more of the AI/ML models from a network entity of the wireless communication network to the UE, or - in response to an activation of the one or more of the AI/ML models and/or AI/ML functionality from a network entity of the wireless communication network to the UE, or - in response to a request from a network entity of the wireless communication network, e.g., in case the UE is preconfigured with the one or more AI/ML models or after the one or more AI/ML model is transferred to the UE, or - when accessing the wireless communication network, in case the UE is preconfigured with the one or more AI/ML models, e.g., together with a signaling of the UE capabilities (Subsequent to receiving the machine learning model and/or benchmarking data, at 3013, the UE 102 runs the benchmarking data on the machine learning model (e.g., executes the machine learning model) and generates machine learning model performance data and/or a benchmarking report… subsequent to generating all the outputs for the sample dataset, the UE 102 prepares a report of machine learning model performance data. The benchmarking report may include details with regard to model accuracy, model inference time, energy consumption of the machine learning model and/or the like, Fig. 3 and paragraph 51).

Consider claim 18, and as applied to claim 17 above, Gundogan discloses wherein the network entity of the wireless communication network transferring the AI/ML model or requesting the inference time comprises one or more of the following: - a further UE, or a Relay UE, or a Remote UE, - a Radio Access Network, RAN, entity, like a gNB or Road Side Unit, RSU, - a Core Network, CN, entity, like an Access and Mobility Function, AMF, or a Location Management Function, LMF (see Fig. 3).
Consider claim 19, and as applied to claim 15 above, Gundogan discloses wherein the UE is to - determine the inference time, e.g., using an inference time model using at least one or more properties of the AI/ML model and one or more properties of the UE, or - receive the inference time from the wireless communication network (Subsequent to receiving the UE capability data, at 3009, the NWDAF/MDAS 304 selects and/or tunes a machine learning model and benchmarking data associated therewith based at least in part on the UE capability data. For example, a size of the benchmarking data should align with the UE hardware capabilities (e.g. memory and CPU/GPU), paragraph 48).

Consider claim 24, Gundogan discloses a user device, UE, of a wireless communication network, the wireless communication network using one or more Artificial Intelligence / Machine Learning, AI/ML, models for one or more use cases, wherein the UE is to execute one or more of the AI/ML models to be used for performing one or more certain operations, wherein the UE is to signal to the wireless communication network a complexity or capacity the UE is able to execute such that the certain operation is performed using a certain AI/ML model within a predefined processing time associated with the certain operation, and wherein, responsive to the signaling, the UE is to receive from the wireless communication network one or more of the AI/ML models the UE is able to execute for performing the certain operation in accordance with the predefined processing time (Subsequent to receiving the machine learning model and/or benchmarking data, at 3013, the UE 102 runs the benchmarking data on the machine learning model (e.g., executes the machine learning model) and generates machine learning model performance data and/or a benchmarking report… subsequent to generating all the outputs for the sample dataset, the UE 102 prepares a report of machine learning model performance data. The benchmarking report may include details with regard to model accuracy, model inference time, energy consumption of the machine learning model and/or the like. The benchmarking report may mark/indicate a particular subset benchmarking data/the sample dataset (e.g., image data) that did not perform well at the UE 102, paragraph 51; the NWDAF/MDAS 304 may decide, based at least in part on the benchmarking report, whether to provide a new machine learning model. By way of example, the NWDAF/MDAS 304 may determine that the machine learning model accuracy is better than the service requirements but the inference time duration is higher. Accordingly, in the above example, the NWDAF/MDAS 304 may provide a less complex model to reduce the inference time duration, paragraph 52).

Consider claim 26, and as applied to claim 15 above, Gundogan discloses wherein the UE is to receive from the wireless communication network a fall-back AI/ML model or information indicating to proceed according to a fall-back procedure to be used if the predefined processing time cannot be met by a currently used or requested to be used AI/ML model, or wherein the UE is (pre-)configured to use a fall-back procedure in case the processing time cannot be met by a currently used or requested to be used AI/ML model (the NWDAF/MDAS 304 may decide, based at least in part on the benchmarking report, whether to provide a new machine learning model. By way of example, the NWDAF/MDAS 304 may determine that the machine learning model accuracy is better than the service requirements but the inference time duration is higher. Accordingly, in the above example, the NWDAF/MDAS 304 may provide a less complex model to reduce the inference time duration, paragraph 52).
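The network-side decision quoted from Gundogan's paragraph 52 (keep the current model if it meets the service requirement, otherwise provide a less complex model to reduce inference time) amounts to a simple threshold check. A minimal sketch, with hypothetical model names and field names:

```python
def select_model(report, max_inference_time_s, current_model, simpler_model):
    """Mirror the NWDAF/MDAS decision in Gundogan paragraph 52: if the
    benchmarked inference time exceeds the service requirement, fall back
    to a less complex model; otherwise keep the current one."""
    if report["inference_time_s"] > max_inference_time_s:
        return simpler_model   # reduce the inference time duration
    return current_model

chosen = select_model({"inference_time_s": 0.012}, max_inference_time_s=0.010,
                      current_model="model-A", simpler_model="model-A-lite")
print(chosen)  # model-A-lite
```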
Consider claim 27, and as applied to claim 24 above, Gundogan discloses wherein the UE is to receive from the wireless communication network a fall-back AI/ML model or information indicating to proceed according to a fall-back procedure to be used if the predefined processing time cannot be met by a currently used or requested to be used AI/ML model, or wherein the UE is (pre-)configured to use a fall-back procedure in case the processing time cannot be met by a currently used or requested to be used AI/ML model (the NWDAF/MDAS 304 may decide, based at least in part on the benchmarking report, whether to provide a new machine learning model. By way of example, the NWDAF/MDAS 304 may determine that the machine learning model accuracy is better than the service requirements but the inference time duration is higher. Accordingly, in the above example, the NWDAF/MDAS 304 may provide a less complex model to reduce the inference time duration, paragraph 52).

Consider claim 28, Gundogan discloses a user device, UE, of a wireless communication network, the wireless communication network using one or more Artificial Intelligence / Machine Learning, AI/ML, models for one or more use cases, wherein the UE is configured or preconfigured with one or more AI/ML models for performing one or more certain operations, and wherein the UE is to train the AI/ML model using a training set (the NWDAF/MDAS 304 sends the machine learning model and/or selected benchmarking data to the UE 102, paragraph 50; Subsequent to receiving the machine learning model and/or benchmarking data, at 3013, the UE 102 runs the benchmarking data on the machine learning model (e.g., executes the machine learning model) and generates machine learning model performance data and/or a benchmarking report, paragraph 51… the benchmarking data comprises a sample dataset (e.g., images) that the UE 102 utilizes as input to the machine learning model and generates an output associated therewith… the machine learning model performance data is associated with a target function…, the target function may include a machine learning model training speed (e.g., a number of samples per second that a platform can process during training); the UE 102 may provide additional training data in conjunction with the machine learning model performance data in order to facilitate retraining the machine learning model… the network node 106 and the UE 102 may iteratively work in tandem in order to train a machine learning model for use by the UE 102 by testing operational parameters of a retrained machine learning model to determine whether it satisfies one or more target parameters specified in the benchmarking data and/or target application, paragraph 70).

Consider claim 29, and as applied to claim 28 above, Gundogan discloses wherein the UE is to train the AI/ML model while being connected to the wireless communication network (the network node 106 and the UE 102 may iteratively work in tandem in order to train a machine learning model for use by the UE 102, paragraph 70).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2, 3, 10, 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gundogan in view of Lee et al. (US 20240040420 A1, hereinafter Lee).

Consider claim 2, and as applied to claim 1 above, Gundogan does not expressly disclose wherein the inference time comprises a time required for processing the AI/ML model completely or in part, the inference time being provided in terms of an absolute time or an offset value. In the same field of endeavor, Lee discloses wherein the inference time comprises a time required for processing the AI/ML model completely or in part, the inference time being provided in terms of an absolute time or an offset value (the information on the change in the inference time, transmitted by the UE, may include a difference value (or an offset) with respect to the computation time for generating the AI-based CSI (or a time for generating a non-AI-based CSI) required previous to the report, or may include an absolute value of the computation time for generating the AI-based CSI, paragraph 87).
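Lee's paragraph 87, quoted for claim 2, distinguishes two encodings of the reported inference time: an absolute value, or a difference (offset) relative to a previously reported value. A minimal sketch of the two alternatives; the function and field names are illustrative, not Lee's:

```python
def encode_inference_time(current_ms, previous_ms=None, as_offset=False):
    """Encode an inference-time report either as an absolute value in ms
    or as an offset against the previously reported value, paralleling
    the two alternatives Lee describes in paragraph 87."""
    if as_offset:
        if previous_ms is None:
            raise ValueError("offset encoding needs a previous report")
        return {"type": "offset", "value_ms": current_ms - previous_ms}
    return {"type": "absolute", "value_ms": current_ms}

# Lee's paragraph 99 example range: an inference-time change from 2 ms to 5 ms.
print(encode_inference_time(5, previous_ms=2, as_offset=True))  # {'type': 'offset', 'value_ms': 3}
print(encode_inference_time(5))                                 # {'type': 'absolute', 'value_ms': 5}
```

An offset encoding needs fewer bits when inference time changes slowly, which is presumably why the claim recites both forms.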
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Lee with the teachings of Gundogan for developing 6G and internalizing end-to-end AI support functions.

Consider claim 3, and as applied to claim 2 above, Lee discloses wherein the inference time is provided in terms of one or more of the following: - s, ms, µs, ns; a multiple of these time units, number of slots, subframes, number of OFDM symbols, a number of cycles, - an offset value indicating at least one of the group of an offset time with reference to a reference time, e.g., provided by a navigation system, e.g., GPS, reference time; an offset with respect to a frame start; or an offset with respect to a frame structure such as a Physical Downlink Control Channel, PDCCH, or a synchronization signal, e.g., primary synchronization sequence, PSS, or secondary synchronization sequence, SSS, or a sidelink synchronization sequence sent via sidelink broadcast channel, PSBCH (the inference time change is 2 ms to 5 ms, paragraph 99). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Lee with the teachings of Gundogan for developing 6G and internalizing end-to-end AI support functions.

Consider claim 10, and as applied to claim 1 above, Gundogan does not expressly disclose wherein a particular AI/ML model to be used in a network entity is inferred from an identification of a certain feature or functionality supported by the network entity, e.g., a n-bit CSI feedback infers to use a particular AI/ML model implementing a precoding engine, or a n-bit SINR-feedback infers a certain AI/ML model implementing a handover function. In the same field of endeavor, Lee discloses wherein a particular AI/ML model to be used in a network entity is inferred from an identification of a certain feature or functionality supported by the network entity, e.g., a n-bit CSI feedback infers to use a particular AI/ML model implementing a precoding engine, or a n-bit SINR-feedback infers a certain AI/ML model implementing a handover function (transmitting, to a base station, capability information indicating whether the UE supports generation of AI-based CSI, paragraph 11). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Lee with the teachings of Gundogan for developing 6G and internalizing end-to-end AI support functions.

Consider claim 14, and as applied to claim 1 above, Gundogan does not expressly disclose wherein the use cases comprise one or more of the following: - a Channel State Information, CSI, prediction, - a CSI compression, - a Hybrid Automatic Repeat Request, HARQ, prediction, - positioning of user devices, - beam management, - beam prediction, - beam adaption, - mobility enhancements, - SINR prediction, - SL resource allocation, - SL sensing, - Handover, HO, or conditional, CHO, - Discovery. In the same field of endeavor, Lee discloses wherein the use cases comprise one or more of the following: - a Channel State Information, CSI, prediction, - a CSI compression, - a Hybrid Automatic Repeat Request, HARQ, prediction, - positioning of user devices, - beam management, - beam prediction, - beam adaption, - mobility enhancements, - SINR prediction, - SL resource allocation, - SL sensing, - Handover, HO, or conditional, CHO, - Discovery (the UE supports generation of Artificial Intelligence (AI)-based channel state information (CSI), paragraph 10). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Lee with the teachings of Gundogan for developing 6G and internalizing end-to-end AI support functions.

Consider claim 16, and as applied to claim 15, Gundogan does not expressly disclose wherein the UE is to signal the inference time to at least one of a gNB, a UE and a relay UE. In the same field of endeavor, Lee discloses wherein the UE is to signal the inference time to at least one of a gNB, a UE and a relay UE (the BS may receive information on inference time from the UE, Fig. 7 and paragraph 126). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Lee with the teachings of Gundogan for developing 6G and internalizing end-to-end AI support functions.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Gundogan in view of Lee, and further in view of Zhang et al. (US 20250037426 A1, hereinafter Zhang).

Consider claim 4, and as applied to claim 2 above, the combination of Gundogan and Lee does not expressly disclose wherein the inference time comprises a time required for processing the AI/ML model in part, wherein the part is a part of the AI/ML model to be processed; wherein the AI/ML model comprises a not to be processed part. In the same field of endeavor, Zhang discloses wherein the inference time comprises a time required for processing the AI/ML model in part, wherein the part is a part of the AI/ML model to be processed; wherein the AI/ML model comprises a not to be processed part (In some cases, action classification head 310 may have been used while training action recognition model 300, but may be discarded thereafter, and thus might not form part of action recognition model 300 at inference time, paragraph 53).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Zhang with the teachings of Gundogan and Lee to improve machine learning models and/or the training processes to allow the models to carry out the processing of data faster and/or utilize fewer computing resources for the processing.

Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Gundogan in view of Vemulapalli et al. (US 20230214656 A1, hereinafter Vemulapalli).

Consider claim 5, and as applied to claim 1 above, Gundogan does not expressly disclose wherein the inference time for an AI/ML model is determined using an inference time model, the inference time model using, for calculating the inference time, at least one or more first properties of the AI/ML model and/or one or more second properties of the network entity that is to use at least a part of the AI/ML model. In the same field of endeavor, Vemulapalli discloses wherein the inference time for an AI/ML model is determined using an inference time model, the inference time model using, for calculating the inference time, at least one or more first properties of the AI/ML model and/or one or more second properties of the network entity that is to use at least a part of the AI/ML model (the present disclosure provides a significantly more efficient inference-time model… the present disclosure provides techniques which generate a pruning mask which is used to prune the base neural network into a smaller combined-subtask-specific network that performs only the subset of tasks included in the specified combined subtask, paragraph 25). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Vemulapalli with the teachings of Gundogan to conserve computing resources such as processor usage, memory usage, network bandwidth, etc.

Consider claim 6, and as applied to claim 5 above, Vemulapalli discloses wherein each of the AI/ML models comprise a certain neural network, and the network entity comprises a certain hardware for implementing the certain neural network, and the one or more first properties of the AI/ML model comprises one or more properties of the neural network, and the one or more second properties of the network entity comprises one or more properties of the hardware (the user computing device 102 can store or include one or more machine-learned models 120. For example, the machine-learned models 120 can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models. Neural networks can include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks or other forms of neural networks, Fig. 6A and paragraph 85; The smaller combined-subtask-specific network may therefore be suitable for use on resource-constrained devices (such as portable computing devices including smartphones, tablets, wearables etc.) on which it is not possible or appropriate to utilize the full base neural network, paragraph 25). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Vemulapalli with the teachings of Gundogan to conserve computing resources such as processor usage, memory usage, network bandwidth, etc.
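Claims 5 and 6 recite an "inference time model" that calculates an inference time from first properties of the AI/ML model (e.g., its operation count) and second properties of the hardware (e.g., its processor speed). A crude first-order estimator of that kind; the ops-divided-by-throughput relationship is an editor's assumption, not taken from the application:

```python
def estimate_inference_time_s(layer_flops, device_flops_per_s, overhead_s=0.0):
    """First-order inference-time estimate: total floating point
    operations across all layers divided by the hardware's sustained
    FLOPS, plus a fixed per-invocation overhead."""
    total_flops = sum(layer_flops)
    return total_flops / device_flops_per_s + overhead_s

# Hypothetical 3-layer network (FLOPs per layer) on a 1 GFLOPS device.
t = estimate_inference_time_s([2e6, 8e6, 1e6], device_flops_per_s=1e9)
print(t)  # 0.011
```

Real inference times also depend on memory bandwidth, parallelism, and layer types (all among the properties claim 7 enumerates), so such a compute-bound estimate is only a lower bound.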
Consider claim 7, and as applied to claim 6 above, Vemulapalli discloses wherein the properties of the neural network comprise one or more of the following: - a number of layers of the neural network, - a depth of the neural network, e.g., a number of layers that have to be executed sequentially, - a number of certain operations, e.g. floating point operations, multiplications, additions, integer operations, Boolean operations, exponential functions, - a width of the layers of the neural network, e.g., an input size, IS, and/or an output size, OS, - a type of the layers of the neural network, e.g., a convolutional layer, activation layer, batch-norm, or a fully-connected layer, and the properties of the hardware comprise one or more of the following: - a number of hardware accelerator units, e.g., a number of Graphics Processing Units, GPUs, or a number of Tensor Processing Units, TPUs, or a number of Tensor cores, - a processor speed, e.g., a number of Floating Point Operations Per Second, FLOPS, a number of additions per second, multiplications per second, integer operations per second, - a number of processor cores, - a type of processing cores, - a combination of processing cores, e.g., x number of GPU cores and y number of tensor cores, - a memory size, - a memory speed, - a type of memory, - a memory architecture (convolution layers, paragraph 38; batch normalization layers, paragraph 52; hardware memory limitations, paragraph 51).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Vemulapalli with the teachings of Gundogan to conserve computing resources such as processor usage, memory usage, network bandwidth, etc.

Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Gundogan in view of Dong et al. (US 20260046218, hereinafter Dong).
Consider claim 8, and as applied to claim 1 above, Gundogan does not expressly disclose wherein the AI/ML models used in the wireless communication network are uniquely numbered and identifiable, and the apparatus is to determine the inference time for supported AI/ML model identifications, IDs, using one or more of the following: - processing times for supported AI/ML model IDs, - a number of or a group of supported AI/ML models to be processed in parallel or sequentially. In the same field of endeavor, Dong discloses wherein the AI/ML models used in the wireless communication network are uniquely numbered and identifiable, and the apparatus is to determine the inference time for supported AI/ML model identifications, IDs, using one or more of the following: - processing times for supported AI/ML model IDs, - a number of or a group of supported AI/ML models to be processed in parallel or sequentially (For above message type, the following information may be contained: 1. The AI-based function information such as the AI-based function identifier, to indicate what the AI-based function is for request, which in this case, may be the AI-based beam management is requested. 2. The AI model information such as: a) the AI model identifier b) the processing time for AI model inference, paragraph 282). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Dong with the teachings of Gundogan to improve machine learning models and/or the training processes to allow the models to provide a mechanism for provisioning a lifecycle of various AI models and their applications in assisting in adaptively determining network configurations.
Consider claim 9, and as applied to claim 1 above, Gundogan does not expressly disclose wherein the AI/ML models used in the wireless communication network are uniquely numbered and identifiable, wherein the apparatus is to determine the inference time for at least a specific supported AI/ML model that may be operated as an individual AI/ML in the use case model; and/or wherein the apparatus is to determine the inference time for at least a group of supported AI/ML models that may be operated simultaneously for the use case. In the same field of endeavor, Dong discloses wherein the AI/ML models used in the wireless communication network are uniquely numbered and identifiable, wherein the apparatus is to determine the inference time for at least a specific supported AI/ML model that may be operated as an individual AI/ML in the use case model; and/or wherein the apparatus is to determine the inference time for at least a group of supported AI/ML models that may be operated simultaneously for the use case (For above message type, the following information may be contained: 1. The AI-based function information such as the AI-based function identifier, to indicate what the AI-based function is for request, which in this case, may be the AI-based beam management is requested. 2. The AI model information such as: a) the AI model identifier b) the processing time for AI model inference, paragraph 282). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Dong with the teachings of Gundogan to improve machine learning models and/or the training processes to allow the models to provide a mechanism for provisioning a lifecycle of various AI models and their applications in assisting in adaptively determining network configurations. Claim 20 is rejected under 35 U.S.C. 
103 as being unpatentable over Gundogan in view of Laddu et al (US 20260032427 A1, hereinafter Laddu). Consider claim 20, and as applied to claim 15 above, Gundogan does not expressly disclose wherein the UE is to signal a number of instances of a certain AI/ML model and/or a number of AI/ML models the UE is able to handle in parallel. In the same field of endeavor, Laddu discloses wherein the UE is to signal a number of instances of a certain AI/ML model and/or a number of AI/ML models the UE is able to handle in parallel (the UE further reports the number of models supported in parallel, which model IDs can be parallel supported, and any associated considerations/restrictions for applying a parallel operation of the ML models, paragraph 71). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Laddu with the teachings of Gundogan in order for the network to use UE capability information to configure ML model parameters, decide/support model switching, or consider activating more than one model at a given time.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Gundogan in view of Filoche et al (US 20230275812 A1, hereinafter Filoche). Consider claim 21, and as applied to claim 15 above, Gundogan does not expressly disclose wherein the UE is to select the inference time for a certain AI/ML model to be signaled from a set of configured or pre-configured inference times which the UE is able to achieve when executing the certain AI/ML model.
In the same field of endeavor, Filoche discloses wherein the UE is to select the inference time for a certain AI/ML model to be signaled from a set of configured or pre-configured inference times which the UE is able to achieve when executing the certain AI/ML model (The server sends information regarding model split (number of chunks, size and ID of each chunk, expected inference time of each chunk on the target device, or reference inference time), paragraph 188). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Filoche with the teachings of Gundogan to adapt downloading of AI/ML models to UE memory status.

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Gundogan in view of Bai et al (US 20220330012 A1, hereinafter Bai). Consider claim 25, and as applied to claim 24 above, Gundogan does not expressly disclose wherein the complexity or capacity relates to at least one of the following:
- a number of layers of a neural network of the AI/ML model,
- a depth of the neural network of the AI/ML model, e.g., a number of layers that have to be executed sequentially,
- a number of certain operations, e.g., floating point operations, multiplications, additions, integer operations, Boolean operations, exponential functions,
- a width of the layers of the neural network of the AI/ML model, e.g., an input size, IS, and/or an output size, OS,
- a type of the layers of the neural network of the AI/ML model, e.g., a convolutional layer, activation layer, batch-norm, or a fully-connected layer, and
- a number of hardware accelerator units of the UE, e.g., a number of Graphics Processing Units, GPUs, or a number of Tensor Processing Units, TPUs, or a number of Tensor cores,
- a processor speed of the UE, e.g., a number of Floating Point Operations Per Second, FLOPS, a number of additions per second, multiplications per second, integer operations per second,
- a number of processor cores,
- a type of processing cores,
- a combination of processing cores, e.g., x number of GPU cores and y number of tensor cores,
- a memory size of the UE,
- a memory speed of the UE,
- a type of memory of the UE,
- a memory architecture of the UE.
In the same field of endeavor, Bai discloses wherein the complexity or capacity relates to at least one of the following:
- a number of layers of a neural network of the AI/ML model,
- a depth of the neural network of the AI/ML model, e.g., a number of layers that have to be executed sequentially,
- a number of certain operations, e.g., floating point operations, multiplications, additions, integer operations, Boolean operations, exponential functions,
- a width of the layers of the neural network of the AI/ML model, e.g., an input size, IS, and/or an output size, OS,
- a type of the layers of the neural network of the AI/ML model, e.g., a convolutional layer, activation layer, batch-norm, or a fully-connected layer, and
- a number of hardware accelerator units of the UE, e.g., a number of Graphics Processing Units, GPUs, or a number of Tensor Processing Units, TPUs, or a number of Tensor cores,
- a processor speed of the UE, e.g., a number of Floating Point Operations Per Second, FLOPS, a number of additions per second, multiplications per second, integer operations per second,
- a number of processor cores,
- a type of processing cores,
- a combination of processing cores, e.g., x number of GPU cores and y number of tensor cores,
- a memory size of the UE,
- a memory speed of the UE,
- a type of memory of the UE,
- a memory architecture of the UE
(These parameters and/or ranges for the UE capability may include at least one of: a hardware acceleration method (e.g., at a GPU or CPU), a maximum model size (e.g., a number of layers of a neural network (NN) or a number of hidden units per layer), a buffer size or memory size for a ML procedure, an operation frequency at the UE (e.g., a number of operations per second at the UE), paragraph 72). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Bai with the teachings of Gundogan to improve reporting of machine learning (ML) capability in wireless communication systems.

Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Gundogan in view of Kumar et al (US 20220377844, hereinafter Kumar).
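Claim 25's pairing of model complexity against UE hardware capacity suggests a feasibility check of the following shape. All field names, thresholds, and numbers below are invented for illustration and are not taken from Bai or from the application.

```python
# Illustrative sketch (not Bai's method): checking a model's complexity against
# a UE's reported ML capability.
from dataclasses import dataclass

@dataclass
class UeMlCapability:
    num_accelerators: int   # e.g., GPU/TPU/tensor-core count
    peak_flops: float       # processor speed, in FLOPS
    memory_bytes: int       # memory size available for inference

def model_fits(model_flops, model_memory_bytes, cap, deadline_s):
    """Crude check: the model must fit in memory and meet the latency deadline."""
    if model_memory_bytes > cap.memory_bytes:
        return False
    return model_flops / cap.peak_flops <= deadline_s

# Invented capability: one 0.5 TFLOPS accelerator with 256 MiB of model memory
cap = UeMlCapability(num_accelerators=1, peak_flops=5e11, memory_bytes=256 * 2**20)
fits = model_fits(1e9, 64 * 2**20, cap, deadline_s=0.005)
```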
Consider claim 30, and as applied to claim 28 above, Gundogan does not expressly disclose wherein the UE is to change its connectivity mode,
- to a training mode or evaluation mode, e.g., an RRC_TRAINING or RRC_EVALUATION mode, or
- a different RRC mode such as, e.g., RRC_INACTIVE or RRC_IDLE mode, while training the AI/ML model, or
- another connectivity mode, e.g., DRX mode, PAGING mode.
In the same field of endeavor, Kumar discloses wherein the UE is to change its connectivity mode,
- to a training mode or evaluation mode, e.g., an RRC_TRAINING or RRC_EVALUATION mode, or
- a different RRC mode such as, e.g., RRC_INACTIVE or RRC_IDLE mode, while training the AI/ML model, or
- another connectivity mode, e.g., DRX mode, PAGING mode
(A UE capability may be indicative of whether the UE 1206 may perform model training or inference in the RRC idle/inactive states. In configurations, separate UE capabilities may be indicated for model training or inference in the RRC idle/inactive states and the RRC connected state. The separate UE capabilities for the RRC idle/inactive states may be a subset or a reduced set of UE capabilities for model training or inference in the RRC connected state…The UE 1206 may perform, at 1212, an ML model query procedure with the RAN-based ML controller 1204 to determine a location of the model. For example, the model may be stored at the model repository 1208 or the CU-CP 1202. The UE 1206 may either download, at 1214, the model from the model repository 1208 over the u-plane or download, at 1216, the model from the CU-CP 1202 over the c-plane based on the determined location of the model. The UE 1206 may store, at 1218, the model and the model training configuration for training procedures in the idle/inactive state, see Fig. 12 and paragraphs 107-109).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of Kumar with the teachings of Gundogan to improve machine learning (ML) model training procedures.

Allowable Subject Matter

Claims 22-23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERMAN VIANA DI PRISCO whose telephone number is (571)270-1781. The examiner can normally be reached Monday through Friday 8:30-5:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RAFAEL PEREZ-GUTIERREZ, can be reached at (571) 272-7915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GERMAN VIANA DI PRISCO/
Primary Examiner, Art Unit 2642

Prosecution Timeline

Oct 03, 2025
Application Filed
Feb 25, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604348
COMMUNICATION APPARATUS AND COMMUNICATION METHOD FOR MULTI-LINK PEER TO PEER COMMUNICATION
2y 5m to grant Granted Apr 14, 2026
Patent 12598664
PACKET CAPTURE FOR MULTI-LINK DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12588089
METHODS FOR ENABLING MULTI-LINK WLANS
2y 5m to grant Granted Mar 24, 2026
Patent 12587980
FFT WINDOW ADJUSTMENT BASED ON PRS PEAK PROCESSING
2y 5m to grant Granted Mar 24, 2026
Patent 12556207
RADIO FREQUENCY FRONT END MODULE WITH INTEGRATED RESONATOR AND ANTENNA
2y 5m to grant Granted Feb 17, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
90%
With Interview (+24.0%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 664 resolved cases by this examiner. Grant probability derived from career allow rate.
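The projection figures above are mutually consistent under a simple additive model: the career allow rate plus the interview lift gives the with-interview probability. The additive combination is an assumption about how the tool derives its numbers, not a documented formula.

```python
# Sanity-checking the dashboard figures (additive lift is an assumption)
career_allow_rate = 441 / 664                          # granted / resolved
interview_lift = 0.24                                  # reported +24.0% interview lift
with_interview = career_allow_rate + interview_lift
```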
