DETAILED ACTION
This Office action is in response to the application filed on 07/26/2024.
Claims 1-30 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 07/26/2024, 11/11/2025, and 12/04/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5-9, 12, 16-17, 19, 22-24, and 29-30 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Shen et al. (hereinafter Shen, US 2022/0342713 A1).
Regarding Claim 1, Shen discloses An apparatus for wireless communications at a user equipment (UE) (Shen: Fig. 6, 8, 9 Terminal, para.0208 “The computer device may be a terminal, and a diagram of its internal structure may be as shown in FIG. 15…The communication interface of the computer device is used to communicate with external terminals in a wired or wireless manner” the terminal is set up for wireless communication), comprising:
a processor (Shen: Fig. 15, para.0208 processor); and memory coupled to the processor (Shen: Fig. 15, para.0208 memory coupled to processor via system bus), the processor configured to:
transmit, to a core network entity (Shen: Fig. 6, 8, 9 Network device), an indication of a first set of one or more machine learning models supported at the UE (Shen: para.0079 “The AI/ML capability information indicates the resource information used by the terminal to process a certain AI/ML service. For example, the AI/ML capability information may directly include … may further include a performance index requirement on wireless transmission of a network side by an AI/ML operation of a certain AI/ML service of the terminal, etc. … the AI/ML capability information may further include a serial number of a currently stored AI/ML model” the terminal's capabilities related to machine learning model compatibility, in view of Fig. 8 for obtaining an ML model, and the actual machine learning models on the terminal are sent to the network device, first step of Fig. 6 and 7);
receive, from the core network entity, an indication of a second set of one or more machine learning models supported at the core network entity (Shen: para.0052 “According to the information reporting methods, apparatuses, devices and storage medium provided in implementations of the present disclosure, a terminal reports AI/ML capability information to a network device. Since the AI/ML capability information indicates the resource information used by the terminal to process the AI/ML service, the network device can flexibly switch an AI/ML model run by the terminal, distribute a suitable AI/ML model to the terminal, adjust the AI/ML training parameters and so on, according to the AI/ML capability information reported by the terminal.” In response to capability information obtained from the terminal, an AI/ML model may be distributed to the terminal by the network device. Seen in the second step of Fig. 6 and 7. Para.0138 “As shown in FIG. 6, It's assumed that a terminal computing power needed for running AI/ML Model 1 is great, a communication transmission rate needed is low and a transmission delay requirement is low (i.e., a very low delay is not needed), therefore, when the terminal reports at least one of a high available computing power, a low achievable transmission rate and a low delay requirement, the network device can allocate to the terminal the AI/ML model 1, and the network device adopts a network-side AI/ML model adapted to the AI/ML model 1.” The terminal receives from the network device an AI/ML model supported at the network device, in this case model 1. Model 1 is supported at the network device via a corresponding network-side model. The model itself is an indication of a second set of one or more machine learning models.);
receive, from the core network entity, control signaling indicating a configuration for a machine learning model, the first set of one or more machine learning models comprising the machine learning model (Shen: para.0087 “In act S302, the terminal receives AI/ML task configuration information sent by the network device; wherein, the AI/ML task configuration information is used to indicate an AI/ML task configuration allocated by the network device to the terminal according to the AI/ML capability information.” the terminal receives from the network device AI/ML configuration information. See para.0088 for examples of configuration information that may be received for the machine learning model indicated by the UE. The second step of Fig. 6 and 7 also performs this step in addition to sending the model itself, as seen in para.0088. See also para.0138-0140. Para.0027-0036 “the AI/ML task configuration information includes at least one piece of the following information:… an identity of an AI/ML task to be performed by the terminal…an AI/ML model needed by the terminal for processing the AI/ML service;…a training parameter needed by the terminal” one or more of these items of information are provided to the terminal in the configuration information.); and
perform analytics based at least in part on the machine learning model (Shen: para.0140 “In this implementation, according to the AI/ML capability information reported by the terminal, it is to determine which AI/ML acts of the AI/ML task are performed by the terminal and which AI/ML acts are performed by the network device, as shown in FIG. 7.” In Fig. 6 and 7, after the configuration is obtained by the terminal device, the ML model is run, and the acts in the configuration information received are performed by the terminal device using the ML model; the third step of Fig. 6 and 7 shows the actual running of the AI/ML model, i.e., performing analytics.).
Regarding Claim 2, Shen discloses claim 1 as set forth above.
Shen further discloses wherein the processor is further configured to: transmit, to the core network entity, a request for the machine learning model (Shen: para.0148 “The terminal reports the stored AI/ML model list, the available storage space, the available computing power and the communication performance index requirement and the like for this AI/ML task to the network device. The AI/ML models that the network device can use to distribute to the terminal for this AI/ML task include model 1, model 2, model 3, model 4, model 5, wherein, model 1, model 4 and model 5 conform to the computing power and the communication performance index requirement reported by the terminal” the terminal reports its current models and its capabilities to the network device in order to obtain a new compatible model, which constitutes a request for the machine learning model.);
and wherein, to receive the control signaling, the processor is further configured to: receive the control signaling in response to transmitting the request (Shen: para.0149 “According to the method for information reporting provided in an implementation of the present disclosure, based on the AI/ML capability information reported by the terminal, and the network device can flexibly distribute the needed AI/ML model which can be stored and used to the terminal, according to the requirements for the AI/ML model, available computing power and storage space, etc. by the terminal, thereby ensuring that the terminal has the most suitable AI/ML model in case of the limited AI/ML capability.” In response the network device distributes a compatible model 5 to the terminal device.).
Regarding Claim 3, Shen discloses claim 2 as set forth above.
Shen further discloses wherein, to transmit the request, the processor is configured to: transmit a service request message (Shen: para.0148 “The terminal reports the stored AI/ML model list, the available storage space, the available computing power and the communication performance index requirement and the like for this AI/ML task to the network device. The AI/ML models that the network device can use to distribute to the terminal for this AI/ML task include model 1, model 2, model 3, model 4, model 5, wherein, model 1, model 4 and model 5 conform to the computing power and the communication performance index requirement reported by the terminal” the terminal reports its current models and its capabilities to the network device in order to obtain a new compatible model. The act of obtaining a machine learning model is a service provided by the network device to the terminal; therefore, this is a service request.); and
wherein, to receive the control signaling, the processor is further configured to: receive the control signaling via a service response message (Shen: para.0149 “According to the method for information reporting provided in an implementation of the present disclosure, based on the AI/ML capability information reported by the terminal, and the network device can flexibly distribute the needed AI/ML model which can be stored and used to the terminal, according to the requirements for the AI/ML model, available computing power and storage space, etc. by the terminal, thereby ensuring that the terminal has the most suitable AI/ML model in case of the limited AI/ML capability.” In response, the network device distributes a compatible model 5 to the terminal device. This step distributes a machine learning model to the terminal in response to the service request for a model, and therefore constitutes a service response message.).
Regarding Claim 5, Shen discloses claim 2 as set forth above.
Shen further discloses the request comprising an identifier for the machine learning model (Shen: para.0148 “The terminal reports the stored AI/ML model list, the available storage space, the available computing power and the communication performance index requirement and the like for this AI/ML task to the network device. The AI/ML models that the network device can use to distribute to the terminal for this AI/ML task include model 1, model 2, model 3, model 4, model 5, wherein, model 1, model 4 and model 5 conform to the computing power and the communication performance index requirement reported by the terminal” the request provides a list of the currently stored AI/ML models, and therefore provides identifiers for each of these models such that the network device is able to identify which models are currently at the terminal device.).
Regarding Claim 6, Shen discloses claim 1 as set forth above.
Shen further discloses wherein the processor is further configured to: transmit, to the core network entity, a completion message based at least in part on the control signaling indicating the configuration for the machine learning model (Shen: para.0157 “In an implementation, the terminal sends a training result of the AI/ML training task to the network device. After completing the training according to the AI/ML task configuration information sent by the network device, the terminal can also send the training result to the network device, so that the network device combines the training results reported by individual terminals to obtain a trained AI/ML model, or further distributes the training related AI/ML task configuration information according to the results.” In response to the completion of tasks according to the configuration information from the network device, the terminal device sends the results to the network device.).
Regarding Claim 7, Shen discloses claim 1 as set forth above.
Shen further discloses wherein, to receive the control signaling, the processor is configured to: receive the control signaling indicating the configuration for the machine learning model (Shen: para.0091 “In this implementation, after receiving the AI/ML capability information reported by the terminal, the network device can send the AI/ML task configuration information to the terminal” the configuration information is received by the terminal),
the control signaling indicating a machine learning model file address, a machine learning model training request (Shen: para.0088 “The AI/ML task configuration information may include the amount of AI/ML tasks allocated by the network device to the terminal, an AI/ML model distributed by the network device to the terminal, an AI/ML training parameter arranged by the network device for the terminal, etc.” instructions to train.), a machine learning model inference request, a machine learning model identifier (Shen: para.0135 “Correspondingly, the AI/ML task configuration information includes the identity of the AI/ML model needed by the terminal to process the AI/ML service” an identity of the ML model), a machine learning model location, a machine learning model version, a duration of time for performing the analytics, an activation event for reporting the analytics, or any combination thereof.
Regarding Claim 8, Shen discloses claim 1 as set forth above.
Shen further discloses wherein, to receive the control signaling, the processor is configured to: receive a UE configuration update command indicating the configuration for the machine learning model (Shen: para.0088 “The AI/ML task configuration information may include the amount of AI/ML tasks allocated by the network device to the terminal, an AI/ML model distributed by the network device to the terminal, an AI/ML training parameter arranged by the network device for the terminal, etc. … The AI/ML training parameter arranged by the network device for the terminal may include an AI/ML model to be trained by the terminal, a training period, the amount of data trained in each round, etc., and this is not limited in implementations of the present disclosure.” The configuration information sent by the network device to the terminal is itself a configuration update command, as it provides a configuration to be implemented by the terminal device. Alternatively, the configuration information includes instructions to train, which updates the configuration of the model and is also an update command. Lastly, Fig. 6 and 7 show multiple iterations of receiving configurations from the network device; a second configuration command from the network device can also be considered an update. See also Para.0116-0124).
Regarding Claim 9, Shen discloses claim 1 as set forth above.
Shen further discloses wherein, to transmit, the processor is configured to: transmit a registration request indicating the first set of one or more machine learning models supported at the UE (Shen: para.0148 “The terminal reports the stored AI/ML model list, the available storage space, the available computing power and the communication performance index requirement and the like for this AI/ML task to the network device. The AI/ML models that the network device can use to distribute to the terminal for this AI/ML task include model 1, model 2, model 3, model 4, model 5, wherein, model 1, model 4 and model 5 conform to the computing power and the communication performance index requirement reported by the terminal” the terminal reports its current models and its capabilities to the network device to obtain a new compatible model. By informing the network device of the UE's current models, this constitutes a registration request, as it registers this information with the network device.);
and wherein, to receive the indication of the second set of one or more machine learning models, the processor is configured to: receive a registration response message indicating the second set of one or more machine learning models (Shen: para.0149 “According to the method for information reporting provided in an implementation of the present disclosure, based on the AI/ML capability information reported by the terminal, and the network device can flexibly distribute the needed AI/ML model which can be stored and used to the terminal, according to the requirements for the AI/ML model, available computing power and storage space, etc. by the terminal, thereby ensuring that the terminal has the most suitable AI/ML model in case of the limited AI/ML capability.” In response, the network device distributes a compatible model 5 to the terminal device. This step distributes a machine learning model to the terminal in response to the registration request.).
Regarding Claim 12, Shen discloses claim 1 as set forth above.
Shen further discloses wherein the processor is further configured to: transmit a session establishment message or a modification request message indicating the first set of one or more machine learning models supported at the UE (Shen: para.0079 “The AI/ML capability information indicates the resource information used by the terminal to process a certain AI/ML service. For example, the AI/ML capability information may directly include … may further include a performance index requirement on wireless transmission of a network side by an AI/ML operation of a certain AI/ML service of the terminal, etc. … the AI/ML capability information may further include a serial number of a currently stored AI/ML model” the terminal's capabilities are sent to the network device, first step of Fig. 6 and 7, to obtain configuration changes to be made. Fig. 8-9); and
wherein, to receive the indication of the second set of one or more machine learning models, the processor is configured to: receive the indication of the second set of one or more machine learning models via a session establishment response message or a modification response message (Shen: para.0088 “The AI/ML task configuration information may include the amount of AI/ML tasks allocated by the network device to the terminal, an AI/ML model distributed by the network device to the terminal, an AI/ML training parameter arranged by the network device for the terminal, etc. … The AI/ML training parameter arranged by the network device for the terminal may include an AI/ML model to be trained by the terminal, a training period, the amount of data trained in each round, etc., and this is not limited in implementations of the present disclosure.” The configuration information sent by the network device to the terminal is itself a modification response message, as it provides machine learning models to be implemented. See also Para.0116-0124, Fig. 8-9).
Regarding Claim 16, Shen discloses An apparatus for wireless communications at a first core network entity (Shen: Fig. 6 and 7 network device, para.0209, para.0208 “The communication interface of the computer device is used to communicate with external terminals in a wired or wireless manner, and the wireless manner can be realized by WIFI, operator network, NFC (Near Field Communication) or other technologies.” The terminal device is set to communicate via wireless methods; therefore, the network device is similarly for wireless communications, para.0134 “performance index requirement for wireless transmission between the terminal and the network device” The network device has a wireless transmission requirement.), comprising:
a processor; and memory coupled to the processor (Shen: Fig. 16, para.0209 memory and processor), the processor configured to:
obtain an indication of a first set of one or more machine learning models supported at a user equipment (UE) (Shen: para.0079 “The AI/ML capability information indicates the resource information used by the terminal to process a certain AI/ML service. For example, the AI/ML capability information may directly include … may further include a performance index requirement on wireless transmission of a network side by an AI/ML operation of a certain AI/ML service of the terminal, etc. … the AI/ML capability information may further include a serial number of a currently stored AI/ML model” the terminal's capabilities are obtained by the network device, first step of Fig. 6 and 7. The terminal is the UE, and the network device is the first core network entity);
output an indication of a second set of one or more machine learning models supported at the first core network entity (Shen: para.0052 “According to the information reporting methods, apparatuses, devices and storage medium provided in implementations of the present disclosure, a terminal reports AI/ML capability information to a network device. Since the AI/ML capability information indicates the resource information used by the terminal to process the AI/ML service, the network device can flexibly switch an AI/ML model run by the terminal, distribute a suitable AI/ML model to the terminal, adjust the AI/ML training parameters and so on, according to the AI/ML capability information reported by the terminal.” In response to capability information obtained from the terminal, an AI/ML model may be distributed to the terminal by the network device. Seen in the second step of Fig. 6 and 7. Para.0138 “As shown in FIG. 6, It's assumed that a terminal computing power needed for running AI/ML Model 1 is great, a communication transmission rate needed is low and a transmission delay requirement is low (i.e., a very low delay is not needed), therefore, when the terminal reports at least one of a high available computing power, a low achievable transmission rate and a low delay requirement, the network device can allocate to the terminal the AI/ML model 1, and the network device adopts a network-side AI/ML model adapted to the AI/ML model 1.” The network device sends to a terminal device an AI/ML model supported at the network device, in this case model 1. Model 1 is supported at the network device via a corresponding network-side model. The model itself is an indication of a second set of one or more machine learning models.)
or a second core network entity, or both; and
output control signaling indicating a configuration for a machine learning model at the UE, the first set of one or more machine learning models comprising the machine learning model (Shen: para.0087 “In act S302, the terminal receives AI/ML task configuration information sent by the network device; wherein, the AI/ML task configuration information is used to indicate an AI/ML task configuration allocated by the network device to the terminal according to the AI/ML capability information.” para.0088 “The AI/ML task configuration information may include the amount of AI/ML tasks allocated by the network device to the terminal, an AI/ML model distributed by the network device to the terminal, an AI/ML training parameter arranged by the network device for the terminal, etc.” the terminal receives from the network device AI/ML configuration information. See para.0088 for examples of configuration information that may be received for the machine learning model indicated by the UE. Any of the examples in para.0088 that are sent to the terminal device are control signaling from the core network entity to the terminal device that indicates a configuration, i.e., the tasks to be performed and/or training parameters, for the machine learning models already at the terminal. The second step of Fig. 6 and 7 also performs this step in addition to sending the model itself, as seen in para.0088. See also para.0138-0140.).
Regarding Claim 17, Shen discloses claim 16 as set forth above.
Shen further discloses wherein the processor is further configured to: obtain a service request message requesting the machine learning model (Shen: para.0148 “The terminal reports the stored AI/ML model list, the available storage space, the available computing power and the communication performance index requirement and the like for this AI/ML task to the network device. The AI/ML models that the network device can use to distribute to the terminal for this AI/ML task include model 1, model 2, model 3, model 4, model 5, wherein, model 1, model 4 and model 5 conform to the computing power and the communication performance index requirement reported by the terminal” the terminal reports its current models and its capabilities to the network device in order to obtain a new compatible model.); and
wherein, to output the control signaling, the processor is configured to: output the control signaling via a service response message in response to the service request message (Shen: para.0149 “According to the method for information reporting provided in an implementation of the present disclosure, based on the AI/ML capability information reported by the terminal, and the network device can flexibly distribute the needed AI/ML model which can be stored and used to the terminal, according to the requirements for the AI/ML model, available computing power and storage space, etc. by the terminal, thereby ensuring that the terminal has the most suitable AI/ML model in case of the limited AI/ML capability.” In response the network device distributes a compatible model 5 to the terminal device.).
Regarding Claim 19, Shen discloses claim 16 as set forth above.
Shen further discloses wherein the processor is further configured to: obtain a UE configuration update complete message in response to the control signaling indicating the configuration for the machine learning model (Shen: para.0157 “In an implementation, the terminal sends a training result of the AI/ML training task to the network device. After completing the training according to the AI/ML task configuration information sent by the network device, the terminal can also send the training result to the network device, so that the network device combines the training results reported by individual terminals to obtain a trained AI/ML model, or further distributes the training related AI/ML task configuration information according to the results.” In response to the completion of tasks according to the configuration information from the network device, the terminal device sends the results to the network device.); and
wherein, to output the control signaling, the processor is further configured to: output the control signaling via a UE configuration update command (Shen: para.0088 “The AI/ML task configuration information may include the amount of AI/ML tasks allocated by the network device to the terminal, an AI/ML model distributed by the network device to the terminal, an AI/ML training parameter arranged by the network device for the terminal, etc. … The AI/ML training parameter arranged by the network device for the terminal may include an AI/ML model to be trained by the terminal, a training period, the amount of data trained in each round, etc., and this is not limited in implementations of the present disclosure.” The configuration information sent by the network device to the terminal is itself a configuration update command, as it provides a configuration to be implemented by the terminal device. Alternatively, the configuration information includes instructions to train, which updates the configuration of the model and is also an update command. Lastly, Fig. 6 and 7 show multiple iterations of receiving configurations from the network device; a second configuration command from the network device can also be considered an update. See also Para.0116-0124).
Regarding Claim 22, Shen discloses claim 16 as set forth above.
Shen further discloses wherein, to output the control signaling, the processor is configured to: output the control signaling indicating the configuration for the machine learning model at the UE (Shen: para.0091 “In this implementation, after receiving the AI/ML capability information reported by the terminal, the network device can send the AI/ML task configuration information to the terminal” the configuration information is received by the terminal),
the control signaling indicating a machine learning model file address, a machine learning model training request (Shen: para.0088 “The AI/ML task configuration information may include the amount of AI/ML tasks allocated by the network device to the terminal, an AI/ML model distributed by the network device to the terminal, an AI/ML training parameter arranged by the network device for the terminal, etc. ” instructions to train.),
a machine learning model inference request, a machine learning model identifier (Shen: para.0135 “Correspondingly, the AI/ML task configuration information includes the identity of the AI/ML model needed by the terminal to process the AI/ML service” an identity of the ML model), a machine learning model location, a machine learning model version, a duration of time for performing analytics according to the machine learning model, an activation event for reporting the analytics, one or more parameters for performing the analytics, or any combination thereof.
Regarding Claim 23, Shen discloses claim 16 as set forth above.
Shen further discloses wherein, to obtain the indication, the processor is configured to: obtain a registration request indicating the first set of one or more machine learning models supported at the UE (Shen: para.0148 “The terminal reports the stored AI/ML model list, the available storage space, the available computing power and the communication performance index requirement and the like for this AI/ML task to the network device. The AI/ML models that the network device can use to distribute to the terminal for this AI/ML task include model 1, model 2, model 3, model 4, model 5, wherein, model 1, model 4 and model 5 conform to the computing power and the communication performance index requirement reported by the terminal” the terminal reports its current models and its capabilities to the network device to obtain a new compatible model. By informing the network device of the UE's current models, this constitutes a registration request, as it registers this information with the network device.),
wherein, to output the indication of the second set of one or more machine learning models, the processor is configured to: output the indication of the second set of one or more machine learning models via a registration response message (Shen: para.0149 “According to the method for information reporting provided in an implementation of the present disclosure, based on the AI/ML capability information reported by the terminal, and the network device can flexibly distribute the needed AI/ML model which can be stored and used to the terminal, according to the requirements for the AI/ML model, available computing power and storage space, etc. by the terminal, thereby ensuring that the terminal has the most suitable AI/ML model in case of the limited AI/ML capability.” In response, the network device distributes a compatible model 5 to the terminal device. This step distributes a machine learning model to the terminal in response to the registration request.).
Regarding Claim 24, Shen discloses claim 16 as set forth above.
Shen further discloses wherein, to obtain the indication, the processor is configured to: obtain a session establishment message or a modification request message indicating the first set of one or more machine learning models supported at the UE (Shen: para.0079 “The AI/ML capability information indicates the resource information used by the terminal to process a certain AI/ML service. For example, the AI/ML capability information may directly include … may further include a performance index requirement on wireless transmission of a network side by an AI/ML operation of a certain AI/ML service of the terminal, etc. … the AI/ML capability information may further include a serial number of a currently stored AI/ML model” the terminal's capabilities are sent to the network device, first step of Fig. 6 and 7, to obtain configuration changes to be made. Fig. 8-9); and
wherein, to output the indication of the second set of one or more machine learning models, the processor is configured to: output the indication of the second set of one or more machine learning models via a session establishment response message or a modification response message (Shen: para.0088 “The AI/ML task configuration information may include the amount of AI/ML tasks allocated by the network device to the terminal, an AI/ML model distributed by the network device to the terminal, an AI/ML training parameter arranged by the network device for the terminal, etc. … The AI/ML training parameter arranged by the network device for the terminal may include an AI/ML model to be trained by the terminal, a training period, the amount of data trained in each round, etc., and this is not limited in implementations of the present disclosure.” The configuration information sent by the network device to the terminal is itself a modification response message, as it provides machine learning models to be implemented. See also para.0116-0124, Fig. 8-9).
Regarding Claim 29, it recites the same steps as claim 1 but as A method for wireless communications at a user equipment (UE), comprising (Shen: abstract, claim 6). Therefore the rejection of claim 1 applies equally to claim 29.
Regarding Claim 30, it recites all of the same steps as claim 16 but as A method for wireless communications at a first core network entity, comprising: (Shen: abstract, claim 6). Therefore the rejection of claim 16 applies equally to claim 30.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 4, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (hereinafter Shen, US 2022/0342713 A1) in view of Xin et al. (hereinafter Xin, US 2023/0308930 A1).
Regarding Claim 4, Shen discloses claim 2 as set forth above.
However, while Shen discloses the ML services occurring in a 5G scenario that typically communicates with PDU messages, it does not explicitly disclose wherein, to transmit the request, the processor is configured to: transmit a protocol data unit session modification request message; and wherein, to receive the control signaling, the processor is further configured to: receive the control signaling via a protocol data unit session modification command message.
Xin discloses wherein, to transmit the request, the processor is configured to: transmit a protocol data unit session modification request message (Xin: para.0272-0273 “S301: In a process of establishing the service B initiated by a terminal or the AF, the PCF activates inference based on a service experience model obtained through vertical federated training….S301a: UE triggers the process of establishing the service B, namely, a PDU Session Modification process initiated by the UE.” The UE initiates the PDU session modification process, i.e., a type of request.); and
wherein, to receive the control signaling, the processor is further configured to: receive the control signaling via a protocol data unit session modification command message (Xin: para.0293-0294 “S301g: The RAN sends NAS signaling to the UE, to complete the service establishment process. In at least one embodiment, the RAN sends NAS signaling of a PDU Session Modification Command type to the UE, to complete the service establishment process.” And in response the UE receives a PDU session modification command.).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Xin in order to incorporate wherein, to transmit the request, the processor is configured to: transmit a protocol data unit session modification request message; and wherein, to receive the control signaling, the processor is further configured to: receive the control signaling via a protocol data unit session modification command message into the 5G network operations of Shen.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of effectively communicating and establishing service in a 5G network (Xin: para.0111, para.0148).
Regarding Claim 18, Shen discloses claim 16 as set forth above.
Shen further discloses wherein the processor is further configured to: obtain a modification request message requesting the machine learning model (Shen: para.0148 “The terminal reports the stored AI/ML model list, the available storage space, the available computing power and the communication performance index requirement and the like for this AI/ML task to the network device. The AI/ML models that the network device can use to distribute to the terminal for this AI/ML task include model 1, model 2, model 3, model 4, model 5, wherein, model 1, model 4 and model 5 conform to the computing power and the communication performance index requirement reported by the terminal” the terminal sends a request to the network device identifying its current models and its capabilities in order to obtain a new compatible model. The act of obtaining a machine learning model is a service provided by the network device to the terminal; the terminal's message requesting that model therefore constitutes a modification request message.); and
where, to output the control signaling, the processor is configured to: output the control signaling via modification command message in response to the modification request message (Shen: para.0149 “According to the method for information reporting provided in an implementation of the present disclosure, based on the AI/ML capability information reported by the terminal, and the network device can flexibly distribute the needed AI/ML model which can be stored and used to the terminal, according to the requirements for the AI/ML model, available computing power and storage space, etc. by the terminal, thereby ensuring that the terminal has the most suitable AI/ML model in case of the limited AI/ML capability.” In response, the network device distributes a compatible model 5 to the terminal device. This step outputs a machine learning model to the terminal in response to the modification request for a model, and therefore constitutes a responsive modification command message.).
However, Shen does not explicitly disclose wherein the processor is further configured to: obtain a protocol data unit session modification request message requesting the machine learning model; and where, to output the control signaling, the processor is configured to: output the control signaling via a protocol data unit session modification command message in response to the protocol data unit session modification request message.
Xin discloses wherein the processor is further configured to: obtain a protocol data unit session modification request message requesting the service (Xin: para.0272-0273 “S301: In a process of establishing the service B initiated by a terminal or the AF, the PCF activates inference based on a service experience model obtained through vertical federated training….S301a: UE triggers the process of establishing the service B, namely, a PDU Session Modification process initiated by the UE.” The UE initiates the PDU session modification process, i.e., a type of request.); and
where, to output the control signaling, the processor is configured to: output the control signaling via a protocol data unit session modification command message in response to the protocol data unit session modification request message (Xin: para.0293-0294 “S301g: The RAN sends NAS signaling to the UE, to complete the service establishment process. In at least one embodiment, the RAN sends NAS signaling of a PDU Session Modification Command type to the UE, to complete the service establishment process.” And in response the UE receives a PDU session modification command.).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Xin in order to incorporate wherein the processor is further configured to: obtain a protocol data unit session modification request message requesting the service; and where, to output the control signaling, the processor is configured to: output the control signaling via a protocol data unit session modification command message in response to the protocol data unit session modification request message into the 5G network operations of Shen that obtain a machine learning model.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of effectively communicating and establishing service in a 5G network (Xin: para.0111, para.0148).
Claim(s) 10-11, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (hereinafter Shen, US 2022/0342713 A1) in view of Kuge et al. (hereinafter Kuge, US 2023/0164726 A1).
Regarding Claim 10, Shen discloses claim 1 as set forth above.
Shen further discloses wherein, to receive the control signaling, the processor is configured to: receive the configuration for the machine learning model (Shen: para.0087 “In act S302, the terminal receives AI/ML task configuration information sent by the network device; wherein, the AI/ML task configuration information is used to indicate an AI/ML task configuration allocated by the network device to the terminal according to the AI/ML capability information.” the terminal receives from the network device AI/ML configuration information. see para.0088 for examples of configuration information that may be received for the machine learning model indicated by the UE. The second step of Fig. 6 and 7 also performs this step in addition to sending the model itself, as seen in para.0088. See also para.0138-140).
However, while Shen discloses the ML services occurring in a 5G scenario that typically communicates with PDU messages, it does not explicitly disclose wherein, to receive the control signaling, the processor is configured to: receive a protocol data unit session modification command indicating the configuration for the machine learning model.
Kuge discloses wherein, to receive the control signaling, the processor is configured to: receive a protocol data unit session modification command indicating the configuration update (Kuge: Fig. 8 para.0370-0371 “First, the AMF 140 transmits the Configuration update command message to the UE_A 10 via the 5G AN 120 (or the gNB) (S800), and thereby initiates the UE configuration update procedure.” Configuration updates to be made are sent by the AMF to the UE via a 5G network. Para.0095 “A session management (SM) message (also referred to as a Non-Access-Stratum (NAS) SM message) may be a NAS message used in a procedure for SM, or may be a control message transmitted and/or received between the UE A. 10 and the SW A 230 via the AMF A 240…. a PDU session modification command message, a PDU session modification complete message (PDU session modification complete)” communications between the UE and the AMF are performed using PDU session modification messages).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Kuge in order to incorporate wherein, to receive the control signaling, the processor is configured to: receive a protocol data unit session modification command indicating the configuration update, and apply this to the 5G process in Shen that configures a machine learning model.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of effectively communicating in a 5G network (Kuge: para.0007-0009).
Regarding Claim 11, Shen-Kuge discloses claim 10 as set forth above.
Shen further discloses wherein the processor is further configured to: transmit, to the core network entity, a complete message based at least in part on the modification command indicating the configuration for the machine learning model (Shen: para.0157 “In an implementation, the terminal sends a training result of the AI/ML training task to the network device. After completing the training according to the AI/ML task configuration information sent by the network device, the terminal can also send the training result to the network device, so that the network device combines the training results reported by individual terminals to obtain a trained AI/ML model, or further distributes the training related AI/ML task configuration information according to the results.” In response to the completion of tasks according to the configuration information from the network device, the terminal device sends the results to the network device.).
However, while Shen discloses the ML services occurring in a 5G scenario that typically communicates with PDU messages, it does not explicitly disclose wherein the processor is further configured to: transmit, to the core network entity, a protocol data unit session modification complete message based at least in part on the protocol data unit session modification command indicating the configuration for the machine learning model.
Kuge discloses wherein the processor is further configured to: transmit, to the core network entity, a protocol data unit session modification complete message based at least in part on the protocol data unit session modification command indicating the configuration (Kuge: Fig. 8 para.0370-0371 “First, the AMF 140 transmits the Configuration update command message to the UE_A 10 via the 5G AN 120 (or the gNB) (S800), and thereby initiates the UE configuration update procedure.” Para.0424-0425 “Furthermore, the UE may transmit a Configuration update complete message to the AMF 140 via the 5G AN (gNB) as a response message to the configuration update command message, based on the identification information included in the configuration update command message (S802). In a case that the UE 10 transmits the configuration update complete command message, the AMF 140 receives the configuration update complete message via the 5G AN (gNB) (S802).” Configuration updates to be made are sent by the AMF to the UE via a 5G network, and a complete message is sent in step 802 in response to 800. Note, para.0424 has a typo because the command message is S800 in Fig. 8. Para.0095 “A session management (SM) message (also referred to as a Non-Access-Stratum (NAS) SM message) may be a NAS message used in a procedure for SM, or may be a control message transmitted and/or received between the UE A. 10 and the SW A 230 via the AMF A 240…. a PDU session modification command message, a PDU session modification complete message (PDU session modification complete)” communications between the UE and the AMF are performed using PDU session modification messages).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Kuge in order to incorporate wherein the processor is further configured to: transmit, to the core network entity, a protocol data unit session modification complete message based at least in part on the protocol data unit session modification command indicating the configuration, and apply this to the 5G process in Shen that configures a machine learning model.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of effectively communicating in a 5G network (Kuge: para.0007-0009).
Regarding Claim 20, Shen discloses claim 15 as set forth above.
Shen further discloses wherein the processor is further configured to: obtain a modification complete message in response to the control signaling indicating the configuration for the machine learning model (Shen: para.0157 “In an implementation, the terminal sends a training result of the AI/ML training task to the network device. After completing the training according to the AI/ML task configuration information sent by the network device, the terminal can also send the training result to the network device, so that the network device combines the training results reported by individual terminals to obtain a trained AI/ML model, or further distributes the training related AI/ML task configuration information according to the results.” In response to the completion of tasks according to the configuration information from the network device, the terminal device sends the results to the network device.); and
wherein, to output the control signaling, the processor is configured to: output the control signaling via a modification command message (Shen: para.0088 “The AI/ML task configuration information may include the amount of AI/ML tasks allocated by the network device to the terminal, an AI/ML model distributed by the network device to the terminal, an AI/ML training parameter arranged by the network device for the terminal, etc. … The AI/ML training parameter arranged by the network device for the terminal may include an AI/ML model to be trained by the terminal, a training period, the amount of data trained in each round, etc., and this is not limited in implementations of the present disclosure.” The configuration information sent by the network device to the terminal is itself a modification command message, as it provides a configuration to be implemented by the terminal device; alternatively, the configuration information includes instructions to train, which update the configuration of the model and likewise constitute a command. Lastly, Fig. 6 and 7 show multiple iterations of receiving configurations from the network device; a second configuration command from the network device can also be considered a modification. See also para.0116-0124).
However, Shen does not explicitly disclose wherein the processor is further configured to: obtain a protocol data unit session modification complete message in response to the control signaling indicating the configuration for the machine learning model; and wherein, to output the control signaling, the processor is configured to: output the control signaling via a protocol data unit session modification command message.
Kuge discloses wherein the processor is further configured to: obtain a protocol data unit session modification complete message in response to the control signaling indicating the configuration (Kuge: Para.0424-0425 “Furthermore, the UE may transmit a Configuration update complete message to the AMF 140 via the 5G AN (gNB) as a response message to the configuration update command message, based on the identification information included in the configuration update command message (S802). In a case that the UE 10 transmits the configuration update complete command message, the AMF 140 receives the configuration update complete message via the 5G AN (gNB) (S802).” Para.0095 “A session management (SM) message (also referred to as a Non-Access-Stratum (NAS) SM message) may be a NAS message used in a procedure for SM, or may be a control message transmitted and/or received between the UE A. 10 and the SW A 230 via the AMF A 240…. a PDU session modification command message, a PDU session modification complete message (PDU session modification complete)”A PDU modification complete message may be received in response to message 800 in Fig. 8);
and wherein, to output the control signaling, the processor is configured to: output the control signaling via a protocol data unit session modification command message (Kuge: Fig. 8 para.0370-0371 “First, the AMF 140 transmits the Configuration update command message to the UE_A 10 via the 5G AN 120 (or the gNB) (S800), and thereby initiates the UE configuration update procedure.” Configuration updates to be made are sent by the AMF to the UE via a 5G network, and a complete message is sent in step 802 in response to 800. Note, para.0424 has a typo because the command message is S800 in Fig. 8. Para.0095 “A session management (SM) message (also referred to as a Non-Access-Stratum (NAS) SM message) may be a NAS message used in a procedure for SM, or may be a control message transmitted and/or received between the UE A. 10 and the SW A 230 via the AMF A 240…. a PDU session modification command message, a PDU session modification complete message (PDU session modification complete)” communications between the UE and the AMF are performed using PDU session modification messages).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Kuge in order to incorporate wherein the processor is further configured to: obtain a protocol data unit session modification complete message in response to the control signaling indicating the configuration; and wherein, to output the control signaling, the processor is configured to: output the control signaling via a protocol data unit session modification command message, and apply this to the 5G process in Shen that configures a machine learning model.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of effectively communicating in a 5G network (Kuge: para.0007-0009).
Claim(s) 13-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (hereinafter Shen, US 2022/0342713 A1) in view of Lee et al. (hereinafter Lee, US 2022/0108214 A1).
Regarding Claim 13, Shen discloses claim 1 as set forth above.
However, Shen does not explicitly disclose wherein, to receive the control signaling, the processor is configured to: receive one or more parameters for the machine learning model; and wherein, to perform the analytics, the processor is configured to: perform the analytics based at least in part on the one or more parameters.
Lee discloses wherein, to receive the control signaling, the processor is configured to: receive one or more parameters for the machine learning model (Lee: para.0174-0181 “The ML model provisioning service consumer (that is, the NWDAF device 501) illustrated in FIG. 5 may provide input parameters listed below. [0176] Analytics information to use an ML model: [0177] A list of Analytic IDs: Used to identify analytics for which an ML model is used. [0178] Analytics filter information: Used to identify targets such as a slice and a region (for example, S-NSSAI, a field of interest, and the like) to be analyzed through an ML model. [0179] An analytics report target: Indicates an object to be analyzed through an ML model, an entity such as a specific UE, a group of UE(s), or all UEs (that is, all UEs). [0180] An ML model target period: Indicates a time interval [start and end] for which an ML model for analytics is requested. The time interval is expressed as an actual start time and an actual end time (for example, through a UTC time).” Para.0353 “In operation 7, the NWDAF device 1 1601 may invoke, from the NWDAF device 2 1602, a request response service operation or subscription notification service operation including a model parameter for an untrained initial version of model or a trained model.” Any one of the plurality of parameters for ML analytics is obtained.); and
wherein, to perform the analytics, the processor is configured to: perform the analytics based at least in part on the one or more parameters (Lee: para.0354-0355 “In operation 8, when the NWDAF device 2 1602 is capable of training an ML model, the NWDAF device 2 1602 may locally train the model and model parameter. In operation 9, the NWADF device 2 1602 may locally evaluate the ML model after training the ML model.” Fig. 17 after obtaining the model parameter in step 7 of Fig. 17, it is used to execute the machine learning model, i.e. perform analytics using the parameter.).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Lee in order to incorporate wherein, to receive the control signaling, the processor is configured to: receive one or more parameters for the machine learning model; and wherein, to perform the analytics, the processor is configured to: perform the analytics based at least in part on the one or more parameters.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of properly executing the machine learning model by use of an obtained parameter from the model provider (Lee: para.0353, para.0174-0181).
Regarding Claim 14, Shen discloses claim 1 as set forth above.
However, Shen does not explicitly disclose wherein the processor is further configured to: receive the machine learning model from a core network based at least in part on an address indicated via the control signaling.
Lee discloses wherein the processor is further configured to: receive the machine learning model from a core network based at least in part on an address indicated via the control signaling (Lee: Fig. 12 para.0301 “In addition, a service operation of operation 2 may include at least one of (i) ML model information including at least one of an ML model file address, an ML model file, a model ID, and a model version, (ii) a validity period, (iii) a spatial validity, (iv) a description of a requested parameter for ML model update, and (v) a description of a budget for an update reporting time (for example, a top-k gradient, a threshold for sparsification of gradient, and the like).” The ML model is received by the device 1 1201 in fig. 12 via a provision notify step including the ML model file address.).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Lee in order to incorporate wherein the processor is further configured to: receive the machine learning model from a core network based at least in part on an address indicated via the control signaling.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of providing the ML model using less data by only sending the location (Lee: para.0301).
Claim(s) 15 and 25-28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (hereinafter Shen, US 2022/0342713 A1) in view of Jin et al. (hereinafter Jin, US 2023/0412513 A1).
Regarding Claim 15, Shen discloses claim 1 as set forth above.
However, while Shen discloses its operation in a 5G network, Shen does not explicitly disclose wherein the core network entity is an access and mobility management function (AMF) entity or a session management function (SMF) entity.
Jin discloses wherein the core network entity is an access and mobility management function (AMF) entity or a session management function (SMF) entity (Jin: para.0102 “Operation 601: The UE attaches and triggers an AI application/model, and the UE transmits a request for a distributed AI model through the AMF to the SMF. Accordingly, a PDU session may be specially established/updated for the distributed AI model based on communications between the UE and the SMF (through the AMF).” The AMF or SMF entity distributes the AI model to the UE).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Jin in order to incorporate wherein the core network entity is an access and mobility management function (AMF) entity or a session management function (SMF) entity.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improving speed and latency of the network (Jin: para.0290-0291).
Regarding Claim 25, Shen discloses claim 16 as set forth above.
However, while Shen discloses its operation in a 5G network, Shen does not explicitly disclose wherein the first core network entity is an access and mobility management function (AMF) entity.
Jin discloses wherein the first core network entity is an access and mobility management function (AMF) entity (Jin: para.0102 “Operation 601: The UE attaches and triggers an AI application/model, and the UE transmits a request for a distributed AI model through the AMF to the SMF. Accordingly, a PDU session may be specially established/updated for the distributed AI model based on communications between the UE and the SMF (through the AMF).” The AMF entity distributes the AI model to the UE).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Jin in order to incorporate wherein the first core network entity is an access and mobility management function (AMF) entity.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improving speed and latency of the network (Jin: para.0290-0291).
Regarding Claim 26, Shen-Jin discloses claim 25 as set forth above.
However, while Shen discloses its operation in a 5G network, Shen does not explicitly disclose wherein the processor is further configured to: output the indication of the first set of one or more machine learning models supported at the UE to a session management function (SMF) entity, wherein the second core network entity is the SMF entity; obtain, from the SMF entity, the indication of the second set of one or more machine learning models supported at the SMF entity; and obtain, from the SMF entity, the control signaling indicating the configuration for the machine learning model.
Jin discloses wherein the processor is further configured to: output the indication of the first set of one or more machine learning models supported at the UE to a session management function (SMF) entity, wherein the second core network entity is the SMF entity (Jin: Fig. 6A para.0103 “Operation 601: The UE attaches and triggers an AI application/model, and the UE transmits a request for a distributed AI model through the AMF to the SMF. Accordingly, a PDU session may be specially established/updated for the distributed AI model based on communications between the UE and the SMF (through the AMF).” The AMF sends to the SMF the indication of the AI model from the UE);
obtain, from the SMF entity, the indication of the second set of one or more machine learning models supported at the SMF entity (Jin: para.0106 “Operation 605: The SMF, according to deployment information from the PCF (received at operation 602), spreads portions of the distributed AI model and the corresponding weights to the AMF (for further distribution to the access network and UE as discussed below with respect to operations 606 and 607). As shown in FIG. 6A, the SMF may transmit a PDU session establishment/update response message to the AMF, and the PDU session establishment/update response message may include edge model portions of the AI model (i.e., [Edge Model portion, Edge model weight]) and local model portions of the AI model (i.e., [Local Model portion, Local model weight]). The deployment can thus be conducted using the PDU session specially established for the distributed AI service.” The models are obtained by the AMF from the SMF entity.); and
obtain, from the SMF entity, the control signaling indicating the configuration for the machine learning model (Jin: para.0106 “Operation 605: The SMF, according to deployment information from the PCF (received at operation 602), spreads portions of the distributed AI model and the corresponding weights to the AMF (for further distribution to the access network and UE as discussed below with respect to operations 606 and 607). As shown in FIG. 6A, the SMF may transmit a PDU session establishment/update response message to the AMF, and the PDU session establishment/update response message may include edge model portions of the AI model (i.e., [Edge Model portion, Edge model weight]) and local model portions of the AI model (i.e., [Local Model portion, Local model weight]). The deployment can thus be conducted using the PDU session specially established for the distributed AI service.” The model weights, the configuration for the models, are obtained from the SMF entity.).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Jin in order to incorporate wherein the processor is further configured to: output the indication of the first set of one or more machine learning models supported at the UE to a session management function (SMF) entity, wherein the second core network entity is the SMF entity; obtain, from the SMF entity, the indication of the second set of one or more machine learning models supported at the SMF entity; and obtain, from the SMF entity, the control signaling indicating the configuration for the machine learning model.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improving speed and latency of the network (Jin: para.0290-0291).
Regarding Claim 27, Shen discloses claim 16 as set forth above.
However, while Shen discloses its operation in a 5G network, Shen does not explicitly disclose wherein the first core network entity is a session management function (SMF) entity.
Jin discloses wherein the first core network entity is a session management function (SMF) entity (Jin: para.0102 “Operation 601: The UE attaches and triggers an AI application/model, and the UE transmits a request for a distributed AI model through the AMF to the SMF. Accordingly, a PDU session may be specially established/updated for the distributed AI model based on communications between the UE and the SMF (through the AMF).” The SMF entity distributes the AI model to the UE).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Jin in order to incorporate wherein the first core network entity is a session management function (SMF) entity.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improving speed and latency of the network (Jin: para.0290-0291).
Regarding Claim 28, Shen-Jin discloses claim 27 as set forth above.
However, while Shen discloses its operation in a 5G network, Shen does not explicitly disclose wherein the processor is further configured to: obtain the indication of the first set of one or more machine learning models supported at the UE from an access and mobility management function (AMF) entity, wherein the second core network entity is the AMF entity; output, to the AMF entity, the indication of the second set of one or more machine learning models supported at the SMF entity; and output, to the AMF entity, the control signaling indicating the configuration for the machine learning model at the UE.
Jin discloses wherein the processor is further configured to: obtain the indication of the first set of one or more machine learning models supported at the UE from an access and mobility management function (AMF) entity, wherein the second core network entity is the AMF entity (Jin: Fig. 6A para.0103 “Operation 601: The UE attaches and triggers an AI application/model, and the UE transmits a request for a distributed AI model through the AMF to the SMF. Accordingly, a PDU session may be specially established/updated for the distributed AI model based on communications between the UE and the SMF (through the AMF).” The AMF sends to the SMF the indication of the AI model from the UE);
output, to the AMF entity, the indication of the second set of one or more machine learning models supported at the SMF entity (Jin: para.0106 “Operation 605: The SMF, according to deployment information from the PCF (received at operation 602), spreads portions of the distributed AI model and the corresponding weights to the AMF (for further distribution to the access network and UE as discussed below with respect to operations 606 and 607). As shown in FIG. 6A, the SMF may transmit a PDU session establishment/update response message to the AMF, and the PDU session establishment/update response message may include edge model portions of the AI model (i.e., [Edge Model portion, Edge model weight]) and local model portions of the AI model (i.e., [Local Model portion, Local model weight]). The deployment can thus be conducted using the PDU session specially established for the distributed AI service.” The models are output by the SMF to the AMF); and
output, to the AMF entity, the control signaling indicating the configuration for the machine learning model at the UE (Jin: para.0106 “Operation 605: The SMF, according to deployment information from the PCF (received at operation 602), spreads portions of the distributed AI model and the corresponding weights to the AMF (for further distribution to the access network and UE as discussed below with respect to operations 606 and 607). As shown in FIG. 6A, the SMF may transmit a PDU session establishment/update response message to the AMF, and the PDU session establishment/update response message may include edge model portions of the AI model (i.e., [Edge Model portion, Edge model weight]) and local model portions of the AI model (i.e., [Local Model portion, Local model weight]). The deployment can thus be conducted using the PDU session specially established for the distributed AI service.” The model weights, the configuration for the models, are output by the SMF entity to the AMF).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Jin in order to incorporate wherein the processor is further configured to: obtain the indication of the first set of one or more machine learning models supported at the UE from an access and mobility management function (AMF) entity, wherein the second core network entity is the AMF entity; output, to the AMF entity, the indication of the second set of one or more machine learning models supported at the SMF entity; and output, to the AMF entity, the control signaling indicating the configuration for the machine learning model at the UE.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improving speed and latency of the network (Jin: para.0290-0291).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (hereinafter Shen, US 2022/0342713 A1) in view of Wang et al. (hereinafter Wang, US 2023/0004864 A1).
Regarding Claim 21, Shen discloses claim 16 as set forth above.
However, Shen does not explicitly disclose wherein the processor is further configured to: obtain, from another core network entity, a request for the UE to perform analytics based at least in part on the machine learning model; and, wherein to output the control signaling, the processor is configured to: output the control signaling in response to the request.
Wang discloses wherein the processor is further configured to: obtain, from another core network entity, a request for the UE to perform analytics based at least in part on the machine learning model (Wang: para.0179 “At 1420, the core network server 302 selects a neural network formation configuration. As one example, the core network server 302 compares a current operating environment to input characteristics stored within the neural network table and identifies stored input characteristics aligned with the current operating environment (e.g., one or more of channel conditions, UE capabilities, BS capabilities, metrics). The core network server then obtains the index value(s) of the aligned input characteristics which, in turn, provides the index value(s) of the neural network formation configuration and/or neural network formation configuration elements. The core network server 302 then communicates the selected neural network formation configuration to the base station at 1425, such as by communicating the index value(s) using core network interface 320. In some implementations, the core network server communicates a processing assignment with the neural network formation configuration.” Base station 121 receives request Fig. 14 1425 that comprises a configuration update for the neural networks used by the UEs.); and,
wherein to output the control signaling, the processor is configured to: output the control signaling in response to the request (Wang: para.0180 “At 1430, the base station 121 forwards the neural network formation configuration to the UE 110. As an example, the base station 121 transmits the index value(s) to the UE 110, such as through layer 2 messaging (e.g., an RLC message, MAC control element(s)), to direct the UE to form the deep neural network using the neural network formation configuration, such as that described at 1320 of FIG. 13.” Fig. 14 1430 the control signaling for the neural network is sent by the base station to the UE to be used for processing in step 1440.).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shen with Wang in order to incorporate wherein the processor is further configured to: obtain, from another core network entity, a request for the UE to perform analytics based at least in part on the machine learning model; and, wherein to output the control signaling, the processor is configured to: output the control signaling in response to the request.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of providing updated neural network configuration to improve network performance (Wang: para.0038).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ma et al. (US 2021/0160149 A1): see para. 0121, 0136 and Fig. 12, wherein AI/ML capability information is exchanged for control information from the BS.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EUI H KIM whose telephone number is (571) 272-8133. The examiner can normally be reached 7:30-5:00 Monday-Thursday, with alternating Fridays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kamal B Divecha, can be reached at 571-272-5863. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EUI H KIM/ Examiner, Art Unit 2453
/DHAIRYA A PATEL/ Primary Examiner, Art Unit 2453