DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 15 is objected to because of the following informalities: the limitation “decoded channel state information associated a communication link” should be amended to recite: “decoded channel state information associated with a communication link”. Accordingly, appropriate correction is required.
Claim Rejections - 35 U.S.C. § 102
The following is a quotation of the appropriate subsection(s) of 35 U.S.C. § 102 that forms the basis for the rejections under this section made in the Office Action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim 28 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2023/0075276 (hereinafter, “ZHU ‘276”).
Regarding claim 28, ZHU ‘276 discloses:
A method of wireless communication by an apparatus (other network nodes 315), comprising:
providing, to a network entity (¶ 0087: [T]he network (e.g., CU-XP 310 or CU-CP 305); ¶ 0020: a central unit-machine learning plane (CU-XP) entity included in the base station), an indication of cross-node machine learning information used for a cross-node machine learning session between the apparatus and a user equipment (UE); (¶ 0081: To support machine learning at the UE 115-a as described herein, the base station 105-a may also include a CU-XP 215. The CU-XP 215 may host the machine learning control (MLC) protocol as shown in FIG. 2B; see ¶¶ 0086-0088, e.g., UE 115-b sends a machine learning (ML) request message to the CU-CP 305, at step 335 . . . UE 115-b may include the machine learning request in a UE assistance information message; [T]he network (e.g., CU-XP 310 or CU-CP 305) selects a neural network function [from those] indicated in the ML request message, and an ML model . . . indicated in the ML request message, and configures UE 115-b with the ML model as well as a corresponding set of parameters, at 340)
obtaining machine learning information associated with the UE; and (¶ 0089: At 345, the network may activate the neural network model. To activate the neural network model at the UE 115-b, the UE 115-b may transmit a model activation request message to the other network nodes 315 via the CU-CP 305 requesting activation of machine learning and the other network nodes may send a model activation response message to the UE 115-b via a MAC-CE or RRC signaling activating the machine learning at the UE 115-b. To activate the neural network model at the network, the CU-CP 305 may send a model activation message to the CU-XP 310 and the CU-XP 310 may send the model activation message to the other network nodes 315 activating machine learning at the other network nodes 315)
controlling the cross-node machine learning session based at least in part on the machine learning information. (¶ 0089, as quoted above)
Claim 29 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2023/0100253 (hereinafter, “ZHU ‘253”).
Regarding claim 29, ZHU ‘253 discloses:
A method of wireless communication by an apparatus (other network nodes 315), comprising:
obtaining, from a network entity (¶ 0087: [T]he network (e.g., CU-XP 310 or CU-CP 305); ¶ 0020: a central unit-machine learning plane (CU-XP) entity included in the base station), an indication of cross-node machine learning information used for a cross-node machine learning session between the network entity and a user equipment (UE); (¶ 0106: After setting up the context for the UE 120 associated with the at least one NNF and selecting the at least one machine learning model (e.g., the first machine learning model) corresponding to the at least one NNF, the CU-XP 716 may additionally determine a second machine learning model for the base station 110 to perform at least a portion of the machine learning-based wireless communications management procedure. In some cases, the CU-XP 716 may determine the other machine learning model for the base station 110 based on machine learning capability information associated with a network entity 802 executing the second machine learning model. The network entity 802 may be a DU (such as the DU 708 of FIG. 8), CU-UP, (such as the CU-UP 714 of FIG. 7) or a radio access network (RAN) intelligent controller (RIC). The CU-XP 716 may send a machine learning model setup request message to the network entity 802, requesting that the network entity 802 set up the second machine learning model for performing at least a portion of the machine learning-based wireless communications management procedure. Thereafter, once the second machine learning model has been set up, the network entity 802 sends a machine learning model setup response message to the CU-XP 716, indicating that the setup of the second machine learning model is complete)
providing, to the UE, a configuration for the cross-node machine learning session based at least in part on the cross-node machine learning information; (¶¶ 0095-0096: At time 1, the UE 120 transmits, to the base station 110, UE capability information indicating at least one radio capability of the UE and at least one machine learning capability of the UE. In some cases, the UE 120 may transmit the UE capability information during a radio resource control (RRC) setup procedure in an RRC connection setup message. In some cases, the UE capability information may be received by the CU-CP 712 of the base station 110, which may share the information with the CU-XP 716 as a container; In some cases, the radio capability of the UE 120 may indicate a capability of the UE 120 to perform one or more wireless communications management procedures, which may be machine learning-based. For example, the radio capability of the UE 120 may indicate at least one of a capability to perform a (machine learning-based) cell reselection procedure, a capability to perform a (machine learning-based) idle or inactive mode measurement procedure, a capability to perform a (machine-learning based) radio resource management (RRM) measurement procedure, a capability to perform a (machine learning-based) radio link monitoring (RLM) procedure, a capability to perform a (machine learning-based) channel state information (CSI) measurement procedure, a capability to perform a (machine learning-based) precoding matrix indicator (PMI), rank indicator (RI), and channel quality indicator (CQI) feedback procedure, a capability to perform a (machine learning-based) radio link failure (RLF) and beam failure recovery (BFR) procedure, and/or a capability to perform a (machine learning-based) RRM relaxation procedure)
obtaining machine learning information associated with the UE; (¶ 0087: Artificial intelligence solutions, such as machine learning implemented with neural network models may improve wireless communications. For example, machine learning may be employed to improve channel estimates, cell selection, or other wireless functions. The neural network models may run on the UE, a network entity, or on both devices together to execute one or more neural network functions (NNFs). For example, compression and decompression of channel state feedback may be implemented with neural network models running on both the UE and the network entity, which may be, for example a base station. The neural network models may also be referred to as machine learning models; ¶ 0113: [A] centralized unit control plane (CU-CP) and/or centralized unit machine learning plane (CU-XP) may decide to configure a network model for inference and/or training. The configuration may be initiated either by the network or in response to a UE request. The configured model may run in a network entity, such as a distributed unit (DU), RAN intelligent controller (RIC), centralized unit user plane (CU-UP), CU-CP, CU-XP, or any other network entity. If the model and parameter set are not locally cached in the running host such as DU/RIC/CU-UP, etc., the model and parameter set will be downloaded. When the model and parameter set are ready, the CU-CP and/or CU-XP activates the model. A UE model may be configured together with the network model, for example, to compress and decompress channel state feedback (CSF) transmitted across the wireless interface)
providing, to the network entity, the machine learning information; (¶¶ 0087, 0113, as quoted above)
obtaining, from the network entity, output data generated from the machine learning information; and (¶¶ 0087, 0113, as quoted above)
communicating with the UE based at least in part on the output data. (¶¶ 0087, 0113, as quoted above)
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in the Office Action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1 (1966), for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 5, 7, 9, 12, and 13 are rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘276 in view of US 2024/0422650 (hereinafter, “HASHMI”).
Regarding claim 1, ZHU ‘276 discloses:
An apparatus (other network nodes 315) configured for wireless communications, comprising:
. . .
provide, to a network entity (¶ 0087: [T]he network (e.g., CU-XP 310 or CU-CP 305); ¶ 0020: a central unit-machine learning plane (CU-XP) entity included in the base station), an indication of cross-node machine learning information used for a cross-node machine learning session between the apparatus and a user equipment (UE); (¶ 0081: To support machine learning at the UE 115-a as described herein, the base station 105-a may also include a CU-XP 215. The CU-XP 215 may host the machine learning control (MLC) protocol as shown in FIG. 2B; see ¶¶ 0086-0088, e.g., UE 115-b sends a machine learning (ML) request message to the CU-CP 305, at step 335 . . . UE 115-b may include the machine learning request in a UE assistance information message; [T]he network (e.g., CU-XP 310 or CU-CP 305) selects a neural network function [from those] indicated in the ML request message, and an ML model . . . indicated in the ML request message, and configures UE 115-b with the ML model as well as a corresponding set of parameters, at 340)
obtain machine learning information associated with the UE; and (¶ 0089: At 345, the network may activate the neural network model. To activate the neural network model at the UE 115-b, the UE 115-b may transmit a model activation request message to the other network nodes 315 via the CU-CP 305 requesting activation of machine learning and the other network nodes may send a model activation response message to the UE 115-b via a MAC-CE or RRC signaling activating the machine learning at the UE 115-b. To activate the neural network model at the network, the CU-CP 305 may send a model activation message to the CU-XP 310 and the CU-XP 310 may send the model activation message to the other network nodes 315 activating machine learning at the other network nodes 315)
control the cross-node machine learning session based at least in part on the machine learning information. (¶ 0089, as quoted above)
ZHU ‘276 does not explicitly disclose:
one or more memories; and one or more processors coupled to the one or more memories, the one or more processors being configured to cause the apparatus to:
In the same field of endeavor, however, HASHMI teaches:
one or more memories; and one or more processors coupled to the one or more memories, the one or more processors being configured to cause the apparatus to: (¶ 0010: The system can also include a near-real-time-radio access network intelligent controller (near-RT-RIC) comprising a memory and a processor)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ZHU ‘276’s network nodes to provide memory and processing elements as taught by HASHMI to provide federated learning capability in performing ML-based wireless communication management procedures. See HASHMI, at ¶ 0010.
Regarding claim 2, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 1. ZHU ‘276 further discloses:
wherein the cross-node machine learning information comprises one or more parameters supported by the apparatus in association with the cross-node machine learning session. (¶ 0088: [O]ther network nodes 315 (e.g., distributed unit, CU-UP, or RIC) may be configured with the selected machine learning model and corresponding set of parameters)
Regarding claim 5, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 1. ZHU ‘276 further discloses:
wherein the one or more processors are configured to cause the apparatus to:
obtain capability information associated with the UE; and (¶ 0098: [T]he network (e.g., a CU-CP 505 or a CU-XP 510) may select a neural network function, a neural network model, and a corresponding set of parameters (e.g., based on a capability of the UE 115-d or based on a request message from the UE 115-d) and indicate the neural network function, the neural network model, and the corresponding set of parameters to the UE 115-d such that the UE 115-d may perform machine learning. For example, the network may transmit message (e.g., an RRC reconfiguration message) including a neural network function ID, a model ID, and a corresponding parameter set ID. The UE 115-d may then perform the following procedure to obtain the indicated neural network model and the corresponding set of parameters)
in response to obtaining the capability information, provide, to the network entity, an indication of a configuration associated with the cross-node machine learning session for the UE. (¶ 0098, as quoted above)
Regarding claim 7, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 5. ZHU ‘276 further discloses:
wherein the one or more processors are configured to cause the apparatus to select a machine learning function or model for the UE to use for the cross-node machine learning session based at least in part on the capability information, (¶ 0098: [T]he network (e.g., a CU-CP 505 or a CU-XP 510) may select a neural network function, a neural network model, and a corresponding set of parameters (e.g., based on a capability of the UE 115-d or based on a request message from the UE 115-d) and indicate the neural network function, the neural network model, and the corresponding set of parameters to the UE 115-d such that the UE 115-d may perform machine learning. For example, the network may transmit message (e.g., an RRC reconfiguration message) including a neural network function ID, a model ID, and a corresponding parameter set ID. The UE 115-d may then perform the following procedure to obtain the indicated neural network model and the corresponding set of parameters)
wherein the indication of the configuration comprises an indication of the selected machine learning function or model. (¶ 0098, as quoted above)
Regarding claim 9, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 1. ZHU ‘276 further discloses:
wherein:
the one or more processors are configured to cause the apparatus to obtain, from the network entity, an indication of the cross-node machine learning session between the apparatus and the UE, to control the cross-node machine learning session, (¶ 0089: At 345, the network may activate the neural network model. To activate the neural network model at the UE 115-b, the UE 115-b may transmit a model activation request message to the other network nodes 315 via the CU-CP 305 requesting activation of machine learning and the other network nodes may send a model activation response message to the UE 115-b via a MAC-CE or RRC signaling activating the machine learning at the UE 115-b. To activate the neural network model at the network, the CU-CP 305 may send a model activation message to the CU-XP 310 and the CU-XP 310 may send the model activation message to the other network nodes 315 activating machine learning at the other network nodes 315)
wherein the one or more processors are configured to cause the apparatus to control the cross-node machine learning session based at least in part on the indication of the cross-node machine learning session between the UE and the apparatus. (¶ 0089, as quoted above)
Regarding claim 12, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 1. ZHU ‘276 further discloses:
wherein:
the apparatus comprises a radio access network intelligent controller (RIC) configured to communicate with the network entity via an E2 interface; and (¶ 0088: [O]ther network nodes 315 (e.g., distributed unit, CU-UP, or RIC) may be configured with the selected machine learning model and corresponding set of parameters)
the network entity comprises a central unit (CU). (¶ 0087: [T]he network (e.g., CU-XP 310 or CU-CP 305); ¶ 0081: [B]ase station 105 may include different network entities [such as] a CU-UP 205, a CU-CP 210, a DU 220, and a radio unit (RU) 225)
Regarding claim 13, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 1. ZHU ‘276 further discloses:
wherein to control the cross-node machine learning session, the one or more processors are configured to cause the apparatus to:
determine a model structure based at least in part on the machine learning information; and (¶ 0087: Upon receiving the machine learning request from the UE 115-b, the network (e.g., CU-XP 310 or CU-CP 305) may select a neural network function (e.g., from the one or more neural network functions indicated in the machine learning request message received at 335) and a machine learning model (e.g., select a machine learning model corresponding to the model ID indicated in the machine learning request message received at 335) and configure the UE 115-b with the machine learning model as well as a corresponding set of parameters at 340)
provide, to the network entity, an indication of the determined model structure to be used by the UE. (¶ 0087, as quoted above)
Claims 3, 4, and 6 are rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘276 in view of HASHMI, as applied above, and further in view of O-RAN Working Group 3; Near-RT RIC Architecture, Technical Specification R003-v04.00 (hereinafter, “O-RAN WG3”) (copy appended to Applicant’s IDS of 5 Mar. 2025).
Regarding claim 3, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 1. ZHU ‘276 does not explicitly disclose:
wherein the one or more processors are configured to cause the apparatus to:
obtain a registration request associated with an application, the registration request comprising an indication of one or more parameters supported by the application in association with the cross-node machine learning session; and
in response to the registration request, provide a registration response indicating the application is registered.
In the same field of endeavor, however, O-RAN WG3 teaches:
obtain a registration request associated with an application, the registration request comprising an indication of one or more parameters supported by the application in association with the cross-node machine learning session; and (§ 9.4.1: Step 1 (M): xAPP sends xAPP registration request to the Management Function Component in the Near-RT-RIC platform, passing relevant information needed to manage the xAPP)
in response to the registration request, provide a registration response indicating the application is registered. (§ 9.4.1: Step 3 (M): The Management Function send[s] the registration response to the xAPP)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the combined ZHU ‘276/HASHMI ML procedure to provide application-specific registration as taught by O-RAN WG3, so that applications are registered with, and managed by, the Management Function component in the Near-RT-RIC platform. See O-RAN WG3, at § 9.4.1.
Regarding claim 4, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 1. ZHU ‘276 further discloses:
wherein to provide the indication of the cross-node machine learning information, the one or more processors are configured to cause the apparatus to provide the indication of the cross-node machine learning information via a radio access network (RAN) intelligent controller (RIC) . . . . (¶ 0088: [O]ther network nodes 315 (e.g., . . . RIC) may be configured with the selected machine learning model and corresponding set of parameters. To configure the other network nodes 315, the CU-XP 310 may send a model setup request message to the other network nodes 315, where the model setup request message may include a model ID of the selected neural network model and corresponding parameter set ID. The other network nodes 315 may send the model ID and the parameter set ID to the MDAC via a model querying request message and the MDAC may transmit a model querying response to the other network nodes including an address (e.g., a web address or a URL) corresponding to the model ID and an address corresponding to the parameter ID)
ZHU ‘276 does not explicitly disclose:
a radio access network (RAN) intelligent controller (RIC) subscription request.
In the same field of endeavor, however, O-RAN WG3 teaches:
a radio access network (RAN) intelligent controller (RIC) subscription request. (§ 9.3.2.1: Step 2 (M): xAPP sends E2 related API: E2 Subscription request with message contents . . . for a specific E2 Node)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the combined ZHU ‘276/HASHMI ML procedure to provide an E2 subscription request as taught by O-RAN WG3 to enable subscription-based information exchange with a specific E2 Node. See O-RAN WG3, at § 9.3.2.1.
Regarding claim 6, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 5. ZHU ‘276 does not explicitly disclose:
wherein to provide the indication of the configuration for the UE, the one or more processors are configured to cause the apparatus to provide the indication of the configuration via a RIC control request.
In the same field of endeavor, however, O-RAN WG3 teaches:
wherein to provide the indication of the configuration for the UE, the one or more processors are configured to cause the apparatus to provide the indication of the configuration via a RIC control request. (¶ 9.3.2.4: Step a2 (M): xAPP sends E2 related API: E2 Control request with message contents . . . for a E2 Node, to E2 Termination)
Claim 8 is rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘276 in view of HASHMI, as applied above, and further in view of US 2022/0012645 (hereinafter, “YING”).
Regarding claim 8, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 1. ZHU ‘276 does not explicitly disclose:
wherein:
the one or more processors are configured to cause the apparatus to obtain a radio access network (RAN) intelligent controller (RIC) query message requesting to initiate the cross-node machine learning session between the UE and the apparatus,
wherein to provide the indication of the cross-node machine learning information, the one or more processors are configured to cause the apparatus to provide the indication of the cross-node machine learning information via a RIC query response in response to the RIC query message.
In the same field of endeavor, however, YING teaches:
the one or more processors are configured to cause the apparatus to obtain a radio access network (RAN) intelligent controller (RIC) query message requesting to initiate the cross-node machine learning session between the UE and the apparatus, (¶¶ 0065-0066: The method 800 begins at operation 802 with the A1-ML consumer 306 of the non-RT RIC 212 sending a “get . . . /mlCaps” query with data 806. The data 806 is the query for ML capabilities (Caps). [0066] The ML capabilities query queries the A1-ML producer 312 of the Near-RT RIC 214 for the capabilities of the A1-ML services of the Near-RT RIC 214. The Non-RT RIC 212 can query for all supported ML capabilities in the Near-RT RICs, or it can query a specific ML capability (e.g., support of FL). The A1-ML consumer 306 uses HTTP GET request, in some embodiments, to solicit a get response from A1-ML producer 312)
wherein to provide the indication of the cross-node machine learning information, the one or more processors are configured to cause the apparatus to provide the indication of the cross-node machine learning information via a RIC query response in response to the RIC query message. (¶¶ 0065-0066: The method 800 begins at operation 802 with the A1-ML consumer 306 of the non-RT RIC 212 sending a “get . . . /mlCaps” query with data 806. The data 806 is the query for ML capabilities (Caps). [0066] The ML capabilities query queries the A1-ML producer 312 of the Near-RT RIC 214 for the capabilities of the A1-ML services of the Near-RT RIC 214. The Non-RT RIC 212 can query for all supported ML capabilities in the Near-RT RICs, or it can query a specific ML capability (e.g., support of FL). The A1-ML consumer 306 uses HTTP GET request, in some embodiments, to solicit a get response from A1-ML producer 312)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ZHU ‘276’s federated ML procedure to provide an ML capabilities query identifying supported ML capabilities in the Near-RT RICs as taught by YING, such that the A1-ML consumer 306 uses an HTTP GET request to solicit a get response from the A1-ML producer 312. See YING, at ¶ 0066.
Claim 10 is rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘276 in view of HASHMI, as applied above, and further in view of US 2025/0287249 (hereinafter, “FILIN”).
Regarding claim 10, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 9. ZHU ‘276 does not explicitly disclose:
wherein the indication of the cross-node machine learning session between the UE and the apparatus comprises a UE identifier associated with the UE and one or more machine learning models used at the UE for the cross-node machine learning session.
In the same field of endeavor, however, FILIN teaches:
wherein the indication of the cross-node machine learning session between the UE and the apparatus comprises a UE identifier associated with the UE and one or more machine learning models used at the UE for the cross-node machine learning session. (¶¶ 0315-0316: The identifier of the UE in the BS 2101 may comprise an AI/ML model UE identifier. [0316] The AI/ML model UE identifier and AI/ML model BS identifier may be identifiers that may be configured, assigned, and/or allocated to any element in an AI/ML system that sends and/or receives training data, feedback, and/or other AI/ML modeling related information)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ZHU ‘276’s federated ML procedure to provide a UE identifier as taught by FILIN, namely an AI/ML model UE identifier, so that such identifiers may be configured, assigned, and/or allocated to any element in an AI/ML system that sends and/or receives training data, feedback, and/or other AI/ML modeling related information. See FILIN, at ¶ 0316.
Claim 11 is rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘276 in view of HASHMI, as applied above, and further in view of US 2024/0129759 (hereinafter, “REN”).
Regarding claim 11, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 1. ZHU ‘276 does not explicitly disclose:
wherein the one or more processors are configured to cause the apparatus to:
provide, to the network entity, an indication to report status information associated with the UE; obtain, from the network entity, the status information associated with the UE; and in response to obtaining the status information, provide, to the network entity, an indication of a configuration associated with the cross-node machine learning session for the UE.
In the same field of endeavor, however, REN teaches:
provide, to the network entity, an indication to report status information associated with the UE; obtain, from the network entity, the status information associated with the UE; and in response to obtaining the status information, provide, to the network entity, an indication of a configuration associated with the cross-node machine learning session for the UE. (¶ 0021: [T]ransmitting, to a UE, ML model information defining an ML model for the UE, transmitting, to the UE, a configuration for the UE to report a status of the ML model, and receiving, from the UE, a report message indicating the status of the ML model based on the configuration)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ZHU ‘276’s federated ML procedure to provide status information reporting as taught by REN to provide a report message indicating the status of the ML model based on the configuration, so as to effectively determine if an ML model is performing relatively poorly (e.g., below a performance threshold), report status information related to the ML model, and determine whether to fall back from operating using the ML model to operating in a different (e.g., default) mode. See REN, at ¶ 0005.
Claims 14 and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘276 in view of HASHMI, and further in view of ZHU ‘253.
Regarding claim 14, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 1. ZHU ‘276 does not explicitly disclose:
wherein the one or more processors are configured to cause the apparatus to:
perform a cross-node machine learning inference that is based at least in part on the machine learning information to generate output data; and
provide the output data to the network entity.
In the same field of endeavor, however, ZHU ‘253 teaches:
perform a cross-node machine learning inference that is based at least in part on the machine learning information to generate output data; and (¶ 0037: RAN side model activation may be achieved by the base station informing the inference and/or training nodes to start running the model, once the model and parameter set are ready; ¶ 0117: Radio access network (RAN) side model activation may be achieved by the base station 110 informing the inference and/or training nodes of the network entity 802 to start running the model, once the model and parameter set are ready. More specifically, at time 1, the CU-CP 712 transmits a model activation message to the CU-XP 716. In response, at time 2, the CU-XP 716 transmits a model activation message to the network entity 802 that performs training and/or inference; ¶ 0113: [A] centralized unit control plane (CU-CP) and/or centralized unit machine learning plane (CU-XP) may decide to configure a network model for inference and/or training)
provide the output data to the network entity. (¶ 0110: UE 120 and/or network entity 802 may perform the machine learning-based wireless communications management procedure using the at least one machine learning model based on the activation signal[, which] may include inputting one or more input variables to the at least one machine learning model and obtaining an output from the at least one machine learning model based on the one or more input variables)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ZHU ‘276’s federated ML procedure to provide an ML inference as taught by ZHU ‘253 to provide output data such that a neural network model for training or inference is stored for use at UE 120 or network entities, such as a centralized unit (CU) 706, a distributed unit (DU) 708, or radio access network (RAN) intelligent controller (RIC). See ZHU ‘253, at ¶ 0093.
Regarding claim 15, the combination of ZHU ‘276 and HASHMI, as applied above, renders obvious the apparatus of claim 14. ZHU ‘276 does not explicitly disclose:
wherein:
the machine learning information comprises encoded channel state information generated at the UE; and
the output data comprises decoded channel state information associated [with] a communication link between the UE and the network entity.
In the same field of endeavor, however, ZHU ‘253 teaches:
the machine learning information comprises encoded channel state information generated at the UE; and (¶ 0054: At the base station 110, a transmit processor 220 may receive data from a data source 212 for one or more UEs, select one or more modulation and coding schemes (MCS) for each UE based at least in part on channel quality indicators (CQIs) received from the UE, process (e.g., encode and modulate) the data for each UE based at least in part on the MCS(s) selected for the UE, and provide data symbols for all UEs; ¶ 0090: [M]achine learning-based wireless communications management procedures may include cell reselection procedures, idle or inactive mode measurement procedures, radio resource management (RRM) measurement procedures, channel state feedback, compression)
the output data comprises decoded channel state information associated [with] a communication link between the UE and the network entity. (¶ 0035: A UE model may be configured together with the network model, for example, to compress and decompress channel state feedback; ¶ 0087: [C]ompression and decompression of channel state feedback may be implemented with neural network models running on both the UE and the network entity, which may be, for example a base station; ¶ 0113: A UE model may be configured together with the network model, for example, to compress and decompress channel state feedback (CSF) transmitted across the wireless interface; ¶ 0096: [A] capability to perform a (machine learning-based) channel state information (CSI) measurement procedure)
Claims 16, 17, 19, 20, 22, and 25-27 are rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘253 in view of HASHMI.
Regarding claim 16, ZHU ‘253 discloses:
An apparatus (other network nodes 315) configured for wireless communications, comprising:
. . .
obtain, from a network entity (¶ 0087: [T]he network (e.g., CU-XP 310 or CU-CP 305); ¶ 0020: a central unit-machine learning plane (CU-XP) entity included in the base station), an indication of cross-node machine learning information used for a cross-node machine learning session between the network entity and a user equipment (UE); (¶ 0106: After setting up the context for the UE 120 associated with the at least one NNF and selecting the at least one machine learning model (e.g., the first machine learning model) corresponding to the at least one NNF, the CU-XP 716 may additionally determine a second machine learning model for the base station 110 to perform at least a portion of the machine learning-based wireless communications management procedure. In some cases, the CU-XP 716 may determine the other machine learning model for the base station 110 based on machine learning capability information associated with a network entity 802 executing the second machine learning model. The network entity 802 may be a DU (such as the DU 708 of FIG. 8), CU-UP, (such as the CU-UP 714 of FIG. 7) or a radio access network (RAN) intelligent controller (RIC). The CU-XP 716 may send a machine learning model setup request message to the network entity 802, requesting that the network entity 802 set up the second machine learning model for performing at least a portion of the machine learning-based wireless communications management procedure. Thereafter, once the second machine learning model has been set up, the network entity 802 sends a machine learning model setup response message to the CU-XP 716, indicating that the setup of the second machine learning model is complete)
provide, to the UE, a configuration for the cross-node machine learning session based at least in part on the cross-node machine learning information; (¶¶ 0095-0096: At time 1, the UE 120 transmits, to the base station 110, UE capability information indicating at least one radio capability of the UE and at least one machine learning capability of the UE. In some cases, the UE 120 may transmit the UE capability information during a radio resource control (RRC) setup procedure in an RRC connection setup message. In some cases, the UE capability information may be received by the CU-CP 712 of the base station 110, which may share the information with the CU-XP 716 as a container; In some cases, the radio capability of the UE 120 may indicate a capability of the UE 120 to perform one or more wireless communications management procedures, which may be machine learning-based. For example, the radio capability of the UE 120 may indicate at least one of a capability to perform a (machine learning-based) cell reselection procedure, a capability to perform a (machine learning-based) idle or inactive mode measurement procedure, a capability to perform a (machine-learning based) radio resource management (RRM) measurement procedure, a capability to perform a (machine learning-based) radio link monitoring (RLM) procedure, a capability to perform a (machine learning-based) channel state information (CSI) measurement procedure, a capability to perform a (machine learning-based) precoding matrix indicator (PMI), rank indicator (RI), and channel quality indicator (CQI) feedback procedure, a capability to perform a (machine learning-based) radio link failure (RLF) and beam failure recovery (BFR) procedure, and/or a capability to perform a (machine learning-based) RRM relaxation procedure)
obtain machine learning information associated with the UE; (¶ 0087: Artificial intelligence solutions, such as machine learning implemented with neural network models may improve wireless communications. For example, machine learning may be employed to improve channel estimates, cell selection, or other wireless functions. The neural network models may run on the UE, a network entity, or on both devices together to execute one or more neural network functions (NNFs). For example, compression and decompression of channel state feedback may be implemented with neural network models running on both the UE and the network entity, which may be, for example a base station. The neural network models may also be referred to as machine learning models; ¶ 0113: [A] centralized unit control plane (CU-CP) and/or centralized unit machine learning plane (CU-XP) may decide to configure a network model for inference and/or training. The configuration may be initiated either by the network or in response to a UE request. The configured model may run in a network entity, such as a distributed unit (DU), RAN intelligent controller (RIC), centralized unit user plane (CU-UP), CU-CP, CU-XP, or any other network entity. If the model and parameter set are not locally cached in the running host such as DU/RIC/CU-UP, etc., the model and parameter set will be downloaded. When the model and parameter set are ready, the CU-CP and/or CU-XP activates the model. A UE model may be configured together with the network model, for example, to compress and decompress channel state feedback (CSF) transmitted across the wireless interface)
provide, to the network entity, the machine learning information; (¶ 0087: Artificial intelligence solutions, such as machine learning implemented with neural network models may improve wireless communications. For example, machine learning may be employed to improve channel estimates, cell selection, or other wireless functions. The neural network models may run on the UE, a network entity, or on both devices together to execute one or more neural network functions (NNFs). For example, compression and decompression of channel state feedback may be implemented with neural network models running on both the UE and the network entity, which may be, for example a base station. The neural network models may also be referred to as machine learning models; ¶ 0113: [A] centralized unit control plane (CU-CP) and/or centralized unit machine learning plane (CU-XP) may decide to configure a network model for inference and/or training. The configuration may be initiated either by the network or in response to a UE request. The configured model may run in a network entity, such as a distributed unit (DU), RAN intelligent controller (RIC), centralized unit user plane (CU-UP), CU-CP, CU-XP, or any other network entity. If the model and parameter set are not locally cached in the running host such as DU/RIC/CU-UP, etc., the model and parameter set will be downloaded. When the model and parameter set are ready, the CU-CP and/or CU-XP activates the model. A UE model may be configured together with the network model, for example, to compress and decompress channel state feedback (CSF) transmitted across the wireless interface)
obtain, from the network entity, output data generated from the machine learning information; and (¶ 0087: Artificial intelligence solutions, such as machine learning implemented with neural network models may improve wireless communications. For example, machine learning may be employed to improve channel estimates, cell selection, or other wireless functions. The neural network models may run on the UE, a network entity, or on both devices together to execute one or more neural network functions (NNFs). For example, compression and decompression of channel state feedback may be implemented with neural network models running on both the UE and the network entity, which may be, for example a base station. The neural network models may also be referred to as machine learning models; ¶ 0113: [A] centralized unit control plane (CU-CP) and/or centralized unit machine learning plane (CU-XP) may decide to configure a network model for inference and/or training. The configuration may be initiated either by the network or in response to a UE request. The configured model may run in a network entity, such as a distributed unit (DU), RAN intelligent controller (RIC), centralized unit user plane (CU-UP), CU-CP, CU-XP, or any other network entity. If the model and parameter set are not locally cached in the running host such as DU/RIC/CU-UP, etc., the model and parameter set will be downloaded. When the model and parameter set are ready, the CU-CP and/or CU-XP activates the model. A UE model may be configured together with the network model, for example, to compress and decompress channel state feedback (CSF) transmitted across the wireless interface)
communicate with the UE based at least in part on the output data. (¶ 0087: Artificial intelligence solutions, such as machine learning implemented with neural network models may improve wireless communications. For example, machine learning may be employed to improve channel estimates, cell selection, or other wireless functions. The neural network models may run on the UE, a network entity, or on both devices together to execute one or more neural network functions (NNFs). For example, compression and decompression of channel state feedback may be implemented with neural network models running on both the UE and the network entity, which may be, for example a base station. The neural network models may also be referred to as machine learning models; ¶ 0113: [A] centralized unit control plane (CU-CP) and/or centralized unit machine learning plane (CU-XP) may decide to configure a network model for inference and/or training. The configuration may be initiated either by the network or in response to a UE request. The configured model may run in a network entity, such as a distributed unit (DU), RAN intelligent controller (RIC), centralized unit user plane (CU-UP), CU-CP, CU-XP, or any other network entity. If the model and parameter set are not locally cached in the running host such as DU/RIC/CU-UP, etc., the model and parameter set will be downloaded. When the model and parameter set are ready, the CU-CP and/or CU-XP activates the model. A UE model may be configured together with the network model, for example, to compress and decompress channel state feedback (CSF) transmitted across the wireless interface)
ZHU ‘253 does not explicitly disclose:
one or more memories; and one or more processors coupled to the one or more memories, the one or more processors being configured to cause the apparatus to:
In the same field of endeavor, however, HASHMI teaches:
one or more memories; and one or more processors coupled to the one or more memories, the one or more processors being configured to cause the apparatus to: (¶ 0010: The system can also include a near-real-time-radio access network intelligent controller (near-RT-RIC) comprising a memory and a processor)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ZHU ‘253’s network nodes to provide memory and processing elements as taught by HASHMI to provide federated learning capability in performing ML-based wireless communication management procedures. See HASHMI, at ¶ 0010.
Regarding claim 17, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 16. ZHU ‘253 further discloses:
wherein the cross-node machine learning information comprises one or more parameters supported by the network entity in association with the cross-node machine learning session. (¶ 0117: Radio access network (RAN) side model activation may be achieved by the base station 110 informing the inference and/or training nodes of the network entity 802 to start running the model, once the model and parameter set are ready. More specifically, at time 1, the CU-CP 712 transmits a model activation message to the CU-XP 716. In response, at time 2, the CU-XP 716 transmits a model activation message to the network entity 802 that performs training and/or inference. In the case of distributed unit (DU) model activation, F1 signaling may be employed. For other network node model activation, E2 signaling may occur. The CU-XP 716 may send model activations to multiple other network nodes in parallel. Although not shown, UE-side model activation may be achieved by media access control-control element (MAC-CE), or radio resource control (RRC) signaling)
Regarding claim 19, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 16. ZHU ‘253 further discloses:
wherein the one or more processors are configured to cause the apparatus to:
provide, to the network entity, capability information associated with the UE; and (¶ 0095: At time 1, the UE 120 transmits, to the base station 110, UE capability information indicating at least one radio capability of the UE and at least one machine learning capability of the UE. In some cases, the UE 120 may transmit the UE capability information during a radio resource control (RRC) setup procedure in an RRC connection setup message. In some cases, the UE capability information may be received by the CU-CP 712 of the base station 110, which may share the information with the CU-XP 716)
in response to providing the capability information, obtain, from the network entity, an indication of the configuration associated with the cross-node machine learning session for the UE. (¶ 0098: At time 2, the CU-CP 712 determines whether to use machine learning functionality to perform one or more wireless communications management procedures. For example, in some cases, the CU-CP 712 may select a machine learning-based wireless communications management procedure to be used at the UE 120; ¶ 0102: [B]ase station 110 (e.g., via the CU-CP 712) transmits, to the UE 120, machine learning configuration information based on the UE capability information received at time [1])
Regarding claim 20, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 19. ZHU ‘253 further discloses:
wherein to obtain the indication of the configuration for the UE, the one or more processors are configured to cause the apparatus to obtain the indication of the configuration via a RIC control request. (¶ 0106: [N]etwork entity 802 may be . . . a radio access network (RAN) intelligent controller (RIC). The CU-XP 716 may send a machine learning model setup request message to the network entity 802, requesting that the network entity 802 set up the second machine learning model for performing at least a portion of the machine learning-based wireless communications management procedure)
Regarding claim 22, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 16. ZHU ‘253 further discloses:
wherein the one or more processors are configured to cause the apparatus to:
select the configuration for the cross-node machine learning information based at least in part on the indication of the cross-node machine learning information. (¶ 0100: CU-XP 716 may then select at least one machine learning model for use in the at least one NNF to perform at least the portion of the machine learning-based wireless communications management procedure. In some cases, the CU-XP 716 may select the at least one machine learning model based, at least in part, on the at least one machine learning capability of the UE. Additionally, in some cases, the CU-XP 716 may select the at least one machine learning model based on at least one of a cell ID, gNB ID, or UE context information. In some cases, the UE context information may indicate such information as a UE type, a data radio bearer (DRB) configuration, and/or an antenna switching (AS) configuration)
Regarding claim 25, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 16. ZHU ‘253 further discloses:
wherein:
the apparatus comprises a central unit (CU) configured to communicate with the network entity via an E2 interface; and (¶ 0117: For other network node model activation, E2 signaling may occur; ¶ 0093: CU-MR 702a, 702b stores a neural network model for training or inference for use at the UE 120 or network entities, such as a centralized unit (CU) 706, a distributed unit (DU) 708, or radio access network (RAN) intelligent controller (RIC))
the network entity comprises a radio access network intelligent controller (RIC). (¶ 0106: [N]etwork entity 802 may be a DU (such as the DU 708 of FIG. 8), CU-UP, (such as the CU-UP 714 of FIG. 7) or a radio access network (RAN) intelligent controller (RIC); ¶ 0130: The network entity may be one or more units of the base station, including a distributed unit (DU), a centralized unit control plane (CU-CP), a centralized unit user plane (CU-UP), or a centralized unit machine learning plane (CU-XP). In other aspects, the network entity is another network device including a radio access network intelligent controller (RIC))
Regarding claim 26, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 16. ZHU ‘253 further discloses:
wherein the output data comprises decoded channel state information associated [with] a communication link between the UE and the apparatus. (¶ 0087: [C]ompression and decompression of channel state feedback may be implemented with neural network models running on both the UE and the network entity, which may be, for example a base station. The neural network models may also be referred to as machine learning models; ¶ 0096: [T]he radio capability of the UE 120 may indicate . . . a capability to perform a (machine learning-based) channel state information (CSI) measurement procedure)
Regarding claim 27, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 26. ZHU ‘253 further discloses:
wherein the one or more processors are configured to cause the apparatus to control the communication link between the UE and the apparatus based at least in part on the decoded channel state information. (¶ 0113: A UE model may be configured together with the network model, for example, to compress and decompress channel state feedback (CSF) transmitted across the wireless interface)
Claim 18 is rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘253 in view of HASHMI, and further in view of O-RAN WG3.
Regarding claim 18, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 16. ZHU ‘253 further discloses:
wherein to obtain the indication of the cross-node machine learning information, the one or more processors are configured to cause the apparatus to obtain the indication of the cross-node machine learning information via a radio access network (RAN) intelligent controller (RIC) . . . . (¶ 0106: [N]etwork entity 802 may be a DU (such as the DU 708 of FIG. 8), CU-UP, (such as the CU-UP 714 of FIG. 7) or a radio access network (RAN) intelligent controller (RIC). The CU-XP 716 may send a machine learning model setup request message to the network entity 802, requesting that the network entity 802 set up the second machine learning model for performing at least a portion of the machine learning-based wireless communications management procedure)
ZHU ‘253 does not explicitly disclose:
a radio access network (RAN) intelligent controller (RIC) subscription request.
In the same field of endeavor, however, O-RAN WG3 teaches:
a radio access network (RAN) intelligent controller (RIC) subscription request. (¶ 9.3.2.1: Step 2 (M): xAPP sends E2 related API: E2 Subscription request with message contents . . . for a specific E2 Node)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ZHU ‘253’s federated ML procedure to provide an E2 subscription request as taught by O-RAN WG3 to provide registration with a management function component in the Near-RT-RIC platform. See O-RAN WG3, at ¶ 0010.
Claim 21 is rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘253 in view of HASHMI, as applied above, and further in view of YING.
Regarding claim 21, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 16. ZHU ‘253 does not explicitly disclose:
wherein:
the one or more processors are configured to cause the apparatus to provide, to the network entity, a radio access network (RAN) intelligent controller (RIC) query message requesting to initiate the cross-node machine learning session between the UE and the network entity,
wherein to obtain the indication of the cross-node machine learning information, the one or more processors are configured to cause the apparatus to obtain the indication of the cross-node machine learning information via a RIC query response in response to the RIC query message.
In the same field of endeavor, however, YING teaches:
the one or more processors are configured to cause the apparatus to provide, to the network entity, a radio access network (RAN) intelligent controller (RIC) query message requesting to initiate the cross-node machine learning session between the UE and the network entity, (¶¶ 0065-0066: The method 800 begins at operation 802 with the A1-ML consumer 306 of the non-RT RIC 212 sending a “get . . . /mlCaps” query with data 806. The data 806 is the query for ML capabilities (Caps). [0066] The ML capabilities query queries the A1-ML producer 312 of the Near-RT RIC 214 for the capabilities of the A1-ML services of the Near-RT RIC 214. The Non-RT RIC 212 can query for all supported ML capabilities in the Near-RT RICs, or it can query a specific ML capability (e.g., support of FL). The A1-ML consumer 306 uses HTTP GET request, in some embodiments, to solicit a get response from A1-ML producer 312)
wherein to obtain the indication of the cross-node machine learning information, the one or more processors are configured to cause the apparatus to obtain the indication of the cross-node machine learning information via a RIC query response in response to the RIC query message. (Id. at ¶¶ 0065-0066)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ZHU ‘253’s federated ML procedure to include a query identifying supported ML capabilities in the Near-RT RICs, as taught by YING, such that the A1-ML consumer 306 uses an HTTP GET request to solicit a get response from the A1-ML producer 312. See YING, at ¶ 0066.
Claim 23 is rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘253 in view of HASHMI, as applied above, and further in view of FILIN.
Regarding claim 23, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 22. ZHU ‘253 does not explicitly disclose:
wherein the indication of the cross-node machine learning session between the UE and the network entity comprises a UE identifier associated with the UE and one or more machine learning functions or models used at the UE for the cross-node machine learning session.
In the same field of endeavor, however, FILIN teaches:
wherein the indication of the cross-node machine learning session between the UE and the network entity comprises a UE identifier associated with the UE and one or more machine learning functions or models used at the UE for the cross-node machine learning session. (¶¶ 0315-0316: The identifier of the UE in the BS 2101 may comprise an AI/ML model UE identifier. [0316] The AI/ML model UE identifier and AI/ML model BS identifier may be identifiers that may be configured, assigned, and/or allocated to any element in an AI/ML system that sends and/or receives training data, feedback, and/or other AI/ML modeling related information)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ZHU ‘253’s federated ML procedure to provide a UE identifier as taught by FILIN, i.e., an AI/ML model UE identifier, so that such identifiers may be configured, assigned, and/or allocated to any element in an AI/ML system that sends and/or receives training data, feedback, and/or other AI/ML modeling related information. See FILIN, at ¶ 0316.
Claim 24 is rejected under 35 U.S.C. § 103 as being unpatentable over ZHU ‘253 in view of HASHMI, as applied above, and further in view of REN.
Regarding claim 24, the combination of ZHU ‘253 and HASHMI, as applied above, renders obvious the apparatus of claim 16. ZHU ‘253 does not explicitly disclose:
wherein the one or more processors are configured to cause the apparatus to:
obtain, from the network entity, an indication to report status information associated with the UE; provide, to the network entity, the status information associated with the UE; and in response to providing the status information, obtain, from the network entity, an indication of the configuration associated with the cross-node machine learning session for the UE.
In the same field of endeavor, however, REN teaches:
obtain, from the network entity, an indication to report status information associated with the UE; provide, to the network entity, the status information associated with the UE; and in response to providing the status information, obtain, from the network entity, an indication of the configuration associated with the cross-node machine learning session for the UE. (¶ 0021: [T]ransmitting, to a UE, ML model information defining an ML model for the UE, transmitting, to the UE, a configuration for the UE to report a status of the ML model, and receiving, from the UE, a report message indicating the status of the ML model based on the configuration)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ZHU ‘253’s federated ML procedure to provide status information reporting as taught by REN to provide a report message indicating the status of the ML model based on the configuration, so as to effectively determine if an ML model is performing relatively poorly (e.g., below a performance threshold), report status information related to the ML model, and determine whether to fall back from operating using the ML model to operating in a different (e.g., default) mode. See REN, at ¶ 0005.
Conclusion
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Garth D Richmond whose telephone number is (703)756-4559. The Examiner can normally be reached M-F 8 a.m. - 5 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Kathy Wang-Hurst, can be reached at 571-270-5371. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GARTH D RICHMOND/Examiner, Art Unit 2644
/KATHY W WANG-HURST/Supervisory Patent Examiner, Art Unit 2644