Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 30, 2026, has been entered.
Remarks
This Office Action is in response to applicant’s amendment filed on January 30, 2026, under which claims 1-32 are pending and under consideration.
Response to Arguments
Applicant’s amendments have overcome the previous § 112(b) and § 101 rejections. Therefore, these particular grounds of rejection have been withdrawn.
Applicant’s amendments and arguments directed to the § 103 rejection have been fully considered, but do not distinguish over the previously applied references. The rejections have been updated to account for the amended claim language, and the previously applied references remain cited in the updated rejection of the claims. Since applicant’s arguments merely present the amended claim language without discussion of specific technical features in the references, applicant is directed to the updated rejections below.
Claim Objections
Claim 22 is objected to because of the following informalities:
Claim 22 should be amended to “wherein the ” in a manner that is consistent with the amendments made to claim 2 and other similar dependent claims. The Examiner believes that the current language of claim 22 is likely an oversight, since all other similar claims (e.g., claim 2) were amended to address issues raised in the previous action. Under the current language, the term “identifying” is both inconsistent with and redundant to the term “to identify” in the parent claim, and is thus objected to under the requirement of 37 CFR 1.71(a) for “full, clear, concise, and exact terms.” For purposes of examination, this claim is interpreted in the manner of the suggested amendment.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
1. Claims 1-6, 8-12, 14-19, and 21-32 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2023/0259789 A1) (“Wang”) in view of Namgoong et al. (US 2023/0261909 A1) (“Namgoong”) and Timo et al. (US 2022/0149904 A1) (“Timo”) (cited in the IDS filed on 07/02/2025).
As to claim 1, Wang teaches one or more processors, comprising circuitry to: [[0033]: “The base station 120 also includes processor(s) 260 and computer-readable storage media 262.” [0042]: “The core network server 302…includes processor(s) 304 and computer-readable storage media 306.” Any one or combination of the processors described above may correspond to the instant limitation.]
obtain information about one or more capabilities of a user equipment (UE) to train one or more neural networks [[0068]: “In aspects, the base station receives a UE capability information message (not illustrated) from each UE and selects the initial ML configuration based on a common UE capability between the UEs 111, 112, and 113. As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures, supported number of layers, available processing power, memory/storage capabilities, available power budget, fixed-point processing vs. floating-point processing, maximum kernel size capability, computation capability), and the base station 120 selects the initial ML configuration based on a common ML capability supported by the UEs 111, 112, and 113.” That is, the base station obtains the UE capabilities of each UE, which includes ML capabilities. In the present context, this is for training a neural network. See [0072]: “At 615, 616, and 617, the UEs 111, 112, and 113, respectively, receive the directions to form the DNN and then form the DNN using the initial ML configuration”; [0029]: “the base station 120 indicates, to the UE federated learning manager 220, to perform the training procedure”] to generate information about one or more fifth-generation (5G) signals; [The DNNs at the UE perform signal processing. 
See [0054]: “the DNNs 408 can perform any combination of extracting data embedded on the Rx signal, recovering binary data, correcting for data errors based on forward error correction applied at the transmitter block, extracting payload from frames and/or slots, and so forth.” The signal is a “5G” signal as described in [0021]: “The wireless links 130 may include one or more wireless links (e.g., radio links) or bearers implemented using…Fifth Generation New Radio (5G NR), and so forth.” [0025]: “The antennas 202 and the RF front end 204 can be tuned to, and/or be tunable to, one or more frequency bands defined by the 3GPP LTE and 5G NR communication standards and implemented by the LTE transceiver 206, and/or the 5G NR transceiver 208.”]
send one or more instructions to the UE based, at least in part, on the information about the one or more capabilities of the UE, to train […] [[0071]: “At 610, the base station 120 directs each UE in a set of UEs (e.g., UE 111, UE 112, UE 113) to form a (respective) DNN using the initial ML configuration. To illustrate, the base station 120 transmits an indication of an index value of a neural network table to the UEs 111, 112, and 113.” [0073]: “At 620, the base station 120 requests each UE…to report updated ML information generated using a training procedure and input data local to the respective UE. …To illustrate, the base station implicitly requests the UE to report the updated ML information (and/or to perform the training procedure) by indicating one or more update conditions that specify rules or instructions on when to report the updated ML information.” [0074]: “In aspects, the base station 120 directs each UE to perform an online training procedure, such as an online training procedure that trains the DNNs while processing the wireless network communications.” That is, the indication of the neural network table values, the request to report updated ML information, and the request to perform online training collectively constitute one or more instructions to train the neural network, since the training is performed at least in part by the UE. These communications are based on the capability information, since they are based on the initial ML configuration that was selected “based on a common ML capability supported by the UEs” ([0068]).]
Wang does not explicitly teach:
(1) the model being “an autoencoder to compress and decompress 5G signal information”;
(2) “wherein the one or more instructions comprise: at least one instruction to identify one or more resources to be used by the UE according to a channel state information (CSI) resource setting, at least one instruction to measure CSI, and at least one instruction to report the generated information about the one or more 5G signals using one or more formats.”
Namgoong teaches “an autoencoder to compress and decompress 5G signal information” [[0126]: “federated learning for a classifier and a set of associated autoencoders…” The autoencoders are neural networks (see [0127]: “the neural network parameters may include a set of neural network parameters for the classifier and the K autoencoders.”) and are trained by the clients (each of which is a UE, see [0031]) in a federated learning process, as stated in [0128]: “The process 900 may further include determining a loss for each of the client autoencoders (block 920). For example, the client may input the observed wireless communication training vector, x, to the encoder of the k-th autoencoder to determine a training latent vector, h. The client may input the training latent vector, h, to a decoder of the k-th autoencoder to determine a training output of the k-th autoencoder.” The autoencoder is used to “compress and decompress 5G signal information.” See [0091]: “autoencoders may be used for compressing CSF for feeding back CSI to a server…the observed wireless communication vector, x, may comprise a propagation channel that the client (e.g., a UE 120) estimates based at least in part on a received CSI-RS. The latent vector, h, may comprise compressed CSF to be fed back to a server (e.g., a base station 110).” In general, the autoencoder described here is a CSI autoencoder. See [0167] (“providing the CSI as input to the autoencoder”); [0083] (“For example, in aspects in which the wireless communication is a CSI-RS, the observed wireless communication vector, x, may include channel state information (CSI).”).
The act of “decompress” is disclosed in: [0070]: “The server may decode the compressed measurements using one or more decompression operations and reconstruction operations associated with one or more neural networks”; [0082]: “The server autoencoder 320 also may include a decoder 324 configured to receive the latent vector, h, as input and to provide the observed wireless communication vector, x, as output” (noting that “decoding” in this context has the meaning of decompression). Note that the autoencoder exists in both the client and the server since it is jointly trained in federated learning and the server also uses a copy of the autoencoder, as shown in FIG. 3, and the client also performs this decompression during training, as quoted above. The limitation of “5G signal information” is disclosed in [0041]: “aspects may be described herein using terminology commonly associated with a 5G or NR radio access technology”; [0042]: “The wireless network 100 may be or may include elements of a 5G (NR) network… A base station (BS) is …a 5G node B (NB).”] “wherein the one or more instructions comprise at least one instruction to identify one or more resources to be used by the UE according to a channel state information (CSI) resource setting” [[0083]: “As shown by reference number 330, for example, the server 304 may transmit, using the transceiver 328, a wireless communication to the client 302… In some aspects, the wireless communication may include a reference signal such as a channel state information reference signal (CSI-RS).” Note that since the CSI-RS is transmitted by the server (BS) for use at the client (UE), it is identified by both the server and the client, and the communication of it to the client (UE) also configures the client to use the reference signal. Furthermore, the CSI-RS itself is transmitted “according to” a CSI resource setting, which implicitly forms the basis for the reference signal.]
“at least one instruction to measure CSI” [[0091]: “In some aspects, the observed wireless communication vector, x, may comprise a propagation channel that the client (e.g., a UE 120) estimates based at least in part on a received CSI-RS.” This estimation is a CSI, as clarified in [0167]: “receiving a CSI-RS, determining CSI based at least in part on the CSI-RS, and providing the CSI as input to the autoencoder.” In the context of this operation, the wireless communication that includes the CSI-RS is regarded as an instruction to use the CSI-RS to measure CSI, since the client measures the CSI on the basis of this communication from the server.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Wang and Namgoong by modifying the system of Wang to include an autoencoder as taught in Namgoong as the neural network being trained at the UE and to utilize autoencoders at the UE and base station for wireless communication as taught in Namgoong, so as to arrive at the limitations of “an autoencoder to compress and decompress 5G signal information” and “wherein the one or more instructions comprise at least one instruction to identify one or more resources to be used by the UE according to a channel state information (CSI) resource setting, at least one instruction to measure CSI.” The motivation would have been to use a set of models, specifically autoencoders, for compressing and reconstructing information transmitted between client and server components, and to train them in a suitable manner using a federated learning process. Namgoong, [0033] (“the client and server may use a classifier and an associated set of autoencoders for compressing and reconstructing information. In some cases, a classifier and an associated set of autoencoders may be trained using federated learning.”); [0031] (“compress measurements in a way that limits compression loss. The client may transmit the compressed measurements to the server”).
The combination of references thus far does not explicitly teach the limitation of also indicating “one or more formats to report the generated information about the one or more 5G signals” [The Examiner notes that Namgoong teaches “channel state information reporting” ([0002]) in general, but does not explicitly teach that this is an element specified by information sent by the server or base station.]
Timo teaches “at least one instruction to report the generated information about the one or more 5G signals using one or more formats” [[0242]: “When a new UE joins a cell, the serving BS may configure the UE CSI reporting settings as part of the RRC (see Section 5.2 in 3GPP TS 38.214 V15.4.0).” [0254]: “Each Reporting Setting ReportConfig is associated with a single downlink BWP (higher layer parameter bandwidthPartId) and contains the reported parameter(s) for one CSI reporting band: CSI Type (I or II) if reported, codebook configuration including codebook subset restriction, time-domain behavior, frequency granularity for CQI and PMI, measurement restriction configurations, the strongest layer indicator (SLI), the reported L1-RSRP parameter(s), CRI, and SSBRI (SSB Resource Indicator). Each ReportConfig contains a ReportConfigID to identify the ReportConfig, a ReportConfigType to specify the time domain behavior of the report (either aperiodic, semi-persistent, or periodic), a ReportQuantity to indicate the CSI-related or L1-RSRP-related quantities to report, a ReportFreqConfiguration to indicate the reporting granularity in the frequency domain.” See also [0259]: “If configured for Type-I or Type-II CSI feedback, the ReportConfig contains a CodebookConfig that specifies configuration parameters for Type-1 and Type-II CSI feedback.” That is, a format, in the form of reporting content and granularity, is specified by the reporting settings, which are regarded as part of the instructions provided by the base station.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far with the teachings of Timo by modifying the one or more instructions to comprise “at least one instruction to report the generated information about the one or more 5G signals using one or more formats.” The motivation for doing so would have been to provide information to a UE so as to configure it to provide CSI reporting in a manner desired by the base station, as suggested by Timo ([0242]: “When a new UE joins a cell, the serving BS may configure the UE CSI reporting settings as part of the RRC.”).
As to claim 2, the combination of Wang, Namgoong, and Timo teaches the processor of claim 1, as set forth above.
Namgoong further teaches “wherein the one or more resources to be used by the UE includes one or more channel state information (CSI) resources to be used by the UE.” [[0083]: “As shown by reference number 330, for example, the server 304 may transmit, using the transceiver 328, a wireless communication to the client 302… In some aspects, the wireless communication may include a reference signal such as a channel state information reference signal (CSI-RS).” This reference signal is a CSI resource.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Namgoong cited above for the instant claim are part of, or are used to implement, the teachings or techniques discussed in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for the teachings of Namgoong in the rejection of the parent claim.
As to claim 3, the combination of Wang, Namgoong, and Timo teaches the processor of claim 1, as set forth above.
Timo further teaches “wherein the one or more formats to report the generated information about the one or more 5G signals includes a channel state information (CSI) report format.” [[0242]: “When a new UE joins a cell, the serving BS may configure the UE CSI reporting settings as part of the RRC (see Section 5.2 in 3GPP TS 38.214 V15.4.0).” [0254]: “Each Reporting Setting ReportConfig is associated with a single downlink BWP (higher layer parameter bandwidthPartId) and contains the reported parameter(s) for one CSI reporting band: CSI Type (I or II) if reported, codebook configuration including codebook subset restriction, time-domain behavior, frequency granularity for CQI and PMI, measurement restriction configurations, the strongest layer indicator (SLI), the reported L1-RSRP parameter(s), CRI, and SSBRI (SSB Resource Indicator). Each ReportConfig contains a ReportConfigID to identify the ReportConfig, a ReportConfigType to specify the time domain behavior of the report (either aperiodic, semi-persistent, or periodic), a ReportQuantity to indicate the CSI-related or L1-RSRP-related quantities to report, a ReportFreqConfiguration to indicate the reporting granularity in the frequency domain.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Wang, Namgoong, and Timo so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Timo cited for the instant claim are part of those cited in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for Timo in the rejection of the parent claim.
As to claim 4, the combination of Wang, Namgoong, and Timo teaches the processor of claim 1, wherein the circuitry is to cause the UE to train [Wang, [0029]: “The base station 120 indicates, to the UE federated learning manager 220, to perform a training procedure and/or to transmit updated ML information”] at least a portion of the autoencoder. [The “autoencoder” is taught by Namgoong and, in the rejection of the parent claim, is implemented as the neural network being trained at the UE in the combined teachings of the references. Therefore, the combination of the references and the associated rationale discussed in the rejection of the parent independent claim covers the instant limitation.]
As to claim 5, the combination of Wang, Namgoong, and Timo teaches the processor of claim 1, wherein the one or more capabilities of the UE to train one or more neural networks to generate information about one or more 5G signals include […] training capabilities of the UE […] [Wang, [0068]: “…As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures, supported number of layers, available processing power, memory/storage capabilities, available power budget, fixed-point processing vs. floating-point processing, maximum kernel size capability, computation capability), and the base station 120 selects the initial ML configuration based on a common ML capability supported by the UEs 111, 112, and 113.”].
Namgoong further teaches “one or more channel state information (CSI) autoencoder” training [As discussed in the rejection of the parent independent claim, Namgoong teaches training an autoencoder, specifically a CSI autoencoder. See [0167] (“providing the CSI as input to the autoencoder”); [0083] (“For example, in aspects in which the wireless communication is a CSI-RS, the observed wireless communication vector, x, may include channel state information (CSI).”). Since an autoencoder is merely a type of neural network, generic neural network training capabilities also apply to autoencoders.] “the circuitry is to cause the UE to train an encoder of a CSI autoencoder.” [As shown in FIG. 3, each autoencoder includes an “encoder.” See [0080]: “As shown, the client autoencoder 310 may include an encoder 314 configured to receive an observed wireless communication vector, x, as input and to provide a latent vector, h, as output.” The encoder parameters are also being trained in the training process. See [0102] (“qϕ(h|x,z) is parameterized by the encoder of the autoencoder… KL (qϕ(h|x, z)∥pθ(h|z)) is the regularization term for the autoencoder”), where the loss function in this paragraph is used in accordance with the loss function in [0104] that is “used as a loss function in the training” in order to optimize the neural network. See [0104] (“to find the neural network parameters θ and ϕ that maximizes the ELBO…”).]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Namgoong cited above for the instant claim are part of, or are used to implement, the teachings or techniques discussed in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for the teachings of Namgoong in the rejection of the parent claim.
As to claim 6, the combination of Wang, Namgoong, and Timo teaches the processor of claim 1, wherein the circuitry is to cause a wireless radio network base station to train [Wang, [0080]: “Accordingly, at 640, the base station 120 receives updated ML information from at least some of the UEs, where the updated ML information can indicate any combination of ML parameters, ML architectures, and/or ML gradients.” Wang, [0082]: “In determining the common ML configuration, the base station 120 applies federated learning techniques that aggregate the updated ML information received from multiple UEs…the base station 120 performs averaging that aggregates ML parameters, gradients, and so forth.” Wang, [0084]: “At 655, the base station 120 directs the subset of UEs to update the DNN formed at 615 and at 616 using the common ML configuration determined at 650.” Note that this process of aggregating information constitutes “training” in the absence of further limitations defining what specific training operations are performed. The limitation of “wireless radio” is disclosed in e.g., [0021]: “The base stations 120 communicate with the user equipment 110 using the wireless links 130, which may be implemented as any suitable type of wireless link… The wireless links 130 may include one or more wireless links (e.g., radio links)”] “at least a portion of the autoencoder” [As discussed in the rejection of the parent independent claim, Namgoong teaches an autoencoder, which, in the combined teachings of the references as set forth in the rejection of the parent claim, is implemented as the neural network being trained at the UE. Therefore, the combination of the references and the associated rationale discussed in the rejection of the parent independent claim covers the instant limitation.]
As to claim 8, the combination of Wang, Namgoong, and Timo teaches the processor of claim 1, wherein the one or more capabilities of the UE to train one or more neural networks to generate information about one or more 5G signals include […] training capabilities of the UE, [Wang, [0068]: “…As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures, supported number of layers, available processing power, memory/storage capabilities, available power budget, fixed-point processing vs. floating-point processing, maximum kernel size capability, computation capability), and the base station 120 selects the initial ML configuration based on a common ML capability supported by the UEs 111, 112, and 113.”] and circuitry to cause a CSI training configuration to be sent to the UE based, at least in part, on the […] training capabilities. [[0045]: “the base station 120 obtains the various criteria and/or link quality indications (e.g., any one or more of: RSSI, power information, SINR, RSRP, CQI, CSI, Doppler feedback, BLER, HARQ, timing measurements, error metrics, etc.) during the communications with the UE and forwards the criteria and/or link quality indications to the core network neural network manager 312.” [0049]: “The neural network table 318 stores multiple different NN formation configuration elements generated using the training module 316. In some implementations, the neural network table includes input characteristics for each NN formation configuration element and/or NN formation configuration, where the input characteristics describe properties about the training data used to generate the NN formation configuration.
For instance, the input characteristics can include a power information, SINR information, CQI, CSI, Doppler feedback, RSS, error metrics, etc.” Note that the neural network table is a training configuration (see, e.g., [0039]: “the neural network table includes input characteristics for each NN formation configuration element and/or NN formation configuration, where the input characteristics describe properties about the training data used to generate the NN formation configuration element and/or NN formation configuration.”). Thus, the neural network table, which includes CSI input characteristics, constitutes a CSI training configuration. Regarding the limitation of “to be sent,” the configuration/neural network table is sent to the UEs as disclosed in [0071]: “At 610, the base station 120 directs each UE in a set of UEs (e.g., UE 111, UE 112, UE 113) to form a (respective) DNN using the initial ML configuration. To illustrate, the base station 120 transmits an indication of an index value of a neural network table to the UEs 111, 112, and 113.” That is, the content of the initial ML configuration is transmitted.]
Namgoong further teaches “one or more channel state information (CSI) autoencoder” training [As discussed in the rejection of the parent independent claim, Namgoong teaches training an autoencoder, specifically a CSI autoencoder. See [0167] (“providing the CSI as input to the autoencoder”); [0083] (“For example, in aspects in which the wireless communication is a CSI-RS, the observed wireless communication vector, x, may include channel state information (CSI).”). Since an autoencoder is merely a type of neural network, generic neural network training capabilities also apply to autoencoders.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Namgoong cited above for the instant claim are part of, or are used to implement, the teachings or techniques discussed in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for the teachings of Namgoong in the rejection of the parent claim.
As to claim 9, this claim is directed to a system that performs operations that are the same or substantially the same as those of claim 1. Therefore, the rejection made to claim 1 is applied to claim 9.
Additionally, Wang teaches the following limitations of claim 9, which differ from those of claim 1: “a system, comprising: one or more processors” [[0033]: “The base station 120 also includes processor(s) 260 and computer-readable storage media 262.” [0042]: “The core network server 302…includes processor(s) 304 and computer-readable storage media 306.”] “one or more memories to store at least a portion of the one or more neural networks.” [[0033]: “The base station 120 also includes processor(s) 260 and computer-readable storage media 262.” [0037]: “The CRM 262 includes a training module 272 and a neural network table 274. In implementations, the base station 120 manages and deploys NN formation configurations to UE 110.” Additionally, [0080]: “Accordingly, at 640, the base station 120 receives updated ML information from at least some of the UEs, where the updated ML information can indicate any combination of ML parameters, ML architectures, and/or ML gradients.” That is, the model updates, which are a portion of the one or more neural networks, are also stored in the memory of the base station.]
As to claim 10, the combination of Wang, Namgoong, and Timo teaches the system of claim 9, as set forth above.
Namgoong further teaches “wherein the one or more resources to be used by the UE and one or more formats to report the generated information about the one or more 5G signals includes the one or more channel state information (CSI) resources to be used by the UE” [[0083]: “As shown by reference number 330, for example, the server 304 may transmit, using the transceiver 328, a wireless communication to the client 302… In some aspects, the wireless communication may include a reference signal such as a channel state information reference signal (CSI-RS).” This reference signal is a CSI resource.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far so as to also arrive at the above-identified further limitations of the instant dependent claim. Since the teachings of Namgoong cited for the instant claim are part of those cited in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for the teachings of Namgoong in the rejection of the parent claim.
Timo further teaches “a CSI report format” [[0242]: “When a new UE joins a cell, the serving BS may configure the UE CSI reporting settings as part of the RRC (see Section 5.2 in 3GPP TS 38.214 V15.4.0).” [0254]: “Each Reporting Setting ReportConfig is associated with a single downlink BWP (higher layer parameter bandwidthPartId) and contains the reported parameter(s) for one CSI reporting band: CSI Type (I or II) if reported, codebook configuration including codebook subset restriction, time-domain behavior, frequency granularity for CQI and PMI, measurement restriction configurations, the strongest layer indicator (SLI), the reported L1-RSRP parameter(s), CRI, and SSBRI (SSB Resource Indicator). Each ReportConfig contains a ReportConfigID to identify the ReportConfig, a ReportConfigType to specify the time domain behavior of the report (either aperiodic, semi-persistent, or periodic), a ReportQuantity to indicate the CSI-related or L1-RSRP-related quantities to report, a ReportFreqConfiguration to indicate the reporting granularity in the frequency domain.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Wang, Namgoong, and Timo so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Timo cited for the instant claim are part of those cited in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for Timo in the rejection of the parent claim.
As to claim 11, the combination of Wang, Namgoong, and Timo teaches the system of claim 9, wherein the one or more processors are to cause the autoencoder to be trained based, at least in part, on an event trigger. [Wang, [0029]: “To illustrate, the base station 120 indicates, to the UE federated learning manager 220, to perform a training procedure and/or to transmit updated ML information in response to identifying a trigger event (e.g., changing ML parameters, changing ML architectures, changing signal or link quality parameters, changing UE-location).” Wang, [0036]: “The BS federated learning manager 270 indicates, to the UE 110, one or more update conditions (e.g., a trigger event, a periodicity) that specify when to perform a training procedure and/or when to report updated ML information to the BS federated learning manager 270.” The “autoencoder” is taught by Namgoong and, in the rejection of the parent claim, is implemented as the neural network being trained at the UE in the modification of Wang by the teachings of the other references. Therefore, the combination of the references and the associated rationale discussed in the rejection of the parent independent claim covers the instant limitation.]
As to claim 12, the combination of Wang, Namgoong, and Timo teaches the system of claim 9, wherein the one or more processors are to cause the autoencoder to be trained by a user equipment (UE) device [As noted in the rejection of claim 1, the user equipment (UEs) and base station collectively train the model. See, e.g., Wang, [0078]: “At 630, at 631, and at 632, the UEs 111, 112, and 113 optionally perform a training procedure to generate the updated ML information…”; Wang, [0030]: “The UE training module 222 teaches and trains DNNs using known input data and/or by providing feedback to the ML algorithm.” The “autoencoder” is taught by Namgoong and, in the rejection of the parent claim, is implemented as the neural network being trained at the UE in the modification of Wang by the teachings of the other references. Therefore, the combination of the references and the associated rationale discussed in the rejection of the parent independent claim covers the instant limitation.] and a wireless radio network base station. [Wang, [0022]: “The base stations 120 are collectively a Radio Access Network 140 (e.g., RAN, Evolved Universal Terrestrial Radio Access Network, E-UTRAN, 5G NR RAN or NR RAN).” That is, the base stations and the UEs are part of a wireless radio network. Wang, [0080]: “Accordingly, at 640, the base station 120 receives updated ML information from at least some of the UEs, where the updated ML information can indicate any combination of ML parameters, ML architectures, and/or ML gradients.” Wang, [0082]: “In determining the common ML configuration, the base station 120 applies federated learning techniques that aggregate the updated ML information received from multiple UEs…the base station 120 performs averaging that aggregates ML parameters, gradients, and so forth.” Wang, [0084]: “At 655, the base station 120 directs the subset of UEs to update the DNN formed at 615 and at 616 using the common ML configuration determined at 650.”]
As to claim 14, the combination of Wang, Namgoong, and Timo teaches the system of claim 9, wherein the one or more processors are to cause the autoencoder to be trained by the UE. [As noted in the rejection of claim 1, the user equipment (UEs) and base station collectively train the model. See, e.g., [0078]: “At 630, at 631, and at 632, the UEs 111, 112, and 113 optionally perform a training procedure to generate the updated ML information…”; [0030]: “The UE training module 222 teaches and trains DNNs using known input data and/or by providing feedback to the ML algorithm.” The “autoencoder” is taught by Namgoong and, in the rejection of the parent claim, is implemented as the neural network being trained at the UE in the modification of Wang by the teachings of the other references. Therefore, the combination of the references and the associated rationale discussed in the rejection of the parent independent claim covers the instant limitation.]
As to claim 15, this claim is directed to a machine-readable medium for performing operations that are the same or substantially the same as those of claim 1. Therefore, the rejection made to claim 1 is applied to claim 15.
Additionally, Wang teaches a machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least: [[0104]: “Some operations of the example methods may be described in the general context of executable instructions stored on computer-readable storage memory that is local and/or remote to a computer processing system, and implementations can include software applications, programs, functions, and the like.” [0033]: “The base station 120 also includes processor(s) 260 and computer-readable storage media 262.” [0042]: “The core network server 302…includes processor(s) 304 and computer-readable storage media 306.” Any one or combination of the processors described above may correspond to the instant limitation.]
As to claim 16, the combination of Wang, Namgoong, and Timo teaches the non-transitory machine-readable medium of claim 15, as set forth above.
Timo further teaches “wherein the one or more formats to report the generated information about the one or more 5G signals includes a channel state information (CSI) codebook format.” [[0242]: “When a new UE joins a cell, the serving BS may configure the UE CSI reporting settings as part of the RRC (see Section 5.2 in 3GPP TS 38.214 V15.4.0).” [0254]: “Each Reporting Setting ReportConfig is associated with a single downlink BWP (higher layer parameter bandwidthPartId) and contains the reported parameter(s) for one CSI reporting band: CSI Type (I or II) if reported, codebook configuration including codebook subset restriction, time-domain behavior, frequency granularity for CQI and PMI, measurement restriction configurations, the strongest layer indicator (SLI), the reported L1-RSRP parameter(s), CRI, and SSBRI (SSB Resource Indicator).” [0259]: “If configured for Type-I or Type-II CSI feedback, the ReportConfig contains a CodebookConfig that specifies configuration parameters for Type-1 and Type-II CSI feedback.” See also [0057]-[0058]; [0172]; [0303] which contain further examples of codebook format.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Wang, Namgoong, and Timo so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Timo cited for the instant claim are part of those cited in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for Timo in the rejection of the parent claim.
As to claim 17, the combination of Wang, Namgoong, and Timo teaches the non-transitory machine-readable medium of claim 15, wherein the set of instructions, which if performed by the one or more processors, cause the one or more processors to at least cause one or more of the UE, a wireless radio network base station, and a wireless radio network operations, administration, and maintenance (OAM) node to train [As noted in the rejection of claim 1, the user equipment (UEs) and base station collectively train the model. See, e.g., Wang, [0078]: “At 630, at 631, and at 632, the UEs 111, 112, and 113 optionally perform a training procedure to generate the updated ML information…”; Wang, [0030]: “The UE training module 222 teaches and trains DNNs using known input data and/or by providing feedback to the ML algorithm.” The Examiner notes that the phrase “one or more of” denotes an alternate expression, which is met at least with respect to the alternative of “UE.”] the autoencoder. [The “autoencoder” is taught by Namgoong and, in the rejection of the parent claim, is implemented as the neural network being trained at the UE in the modification of Wang by the teachings of the other references. Therefore, the combination of the references and the associated rationale discussed in the rejection of the parent independent claim covers the instant limitation.]
As to claim 18, the combination of Wang, Namgoong, and Timo teaches the machine-readable medium of claim 15, wherein the set of instructions, which if performed by the one or more processors, cause the one or more processors to cause a training configuration to be sent to the UE [Wang, [0067]: “As illustrated, at 605, the base station 120 selects an initial ML configuration for a DNN that processes wireless network communications.” From FIG. 6, it is understood that the initial ML configuration is sent to the UEs. See also Wang, [0071]: “At 610, the base station 120 directs each UE in a set of UEs (e.g., UE 111, UE 112, UE 113) to form a (respective) DNN using the initial ML configuration. To illustrate, the base station 120 transmits an indication of an index value of a neural network table to the UEs 111, 112, and 113.” That is, the content of the initial ML configuration is transmitted via indices of the neural network table.] to perform one or more of split training and federated training with one or more of a wireless radio network base station and a wireless radio network operations, administration, and maintenance (OAM) node.
[Wang, [0029]: “The UE federated learning manager 220 identifies requests from the base station 120 that indicate one or more conditions that specify when to train a DNN and/or when to report the updated ML information … To illustrate, the base station 120 indicates, to the UE federated learning manager 220, to perform a training procedure and/or to transmit updated ML information.” Wang, [0082]: “In determining the common ML configuration, the base station 120 applies federated learning techniques that aggregate the updated ML information received from multiple UEs…the base station 120 performs averaging that aggregates ML parameters, gradients, and so forth.” The limitation of “wireless radio” is disclosed in e.g., Wang, [0021]: “The base stations 120 communicate with the user equipment 110 using the wireless links 130, which may be implemented as any suitable type of wireless link… The wireless links 130 may include one or more wireless links (e.g., radio links).” The phrase “one or more of” denotes an alternate expression, which is met here at least with respect to the alternative of “base station” such that the alternative of “OAM node” does not need to be met.]
As to claim 19, the combination of Wang, Namgoong, and Timo teaches the non-transitory machine-readable medium of claim 15, wherein the one or more capabilities of the UE to train one or more neural networks to generate information about one or more 5G signals include one or more of a computational capability of the UE to perform training and a memory storage of the UE to perform training. [Wang, [0068]: “…As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures, supported number of layers, available processing power, memory/storage capabilities, available power budget, fixed-point processing vs. floating-point processing, maximum kernel size capability, computation capability), and the base station 120 selects the initial ML configuration based on a common ML capability supported by the UEs 111, 112, and 113.”]
As to claim 21, this claim is directed to a method comprising operations that are the same or substantially the same as those of claim 1. Therefore, the rejection made to claim 1 is applied to claim 21.
As to claim 22, the combination of Wang, Namgoong, and Timo teaches the method of claim 21, as set forth above.
Namgoong further teaches “wherein identifying one or more resources to be used by the UE includes identifying one or more channel state information (CSI) resources to be used by the UE to perform CSI measurements to be used by the UE to train the autoencoder.” [[0083]: “As shown by reference number 330, for example, the server 304 may transmit, using the transceiver 328, a wireless communication to the client 302… In some aspects, the wireless communication may include a reference signal such as a channel state information reference signal (CSI-RS).” This reference signal is a CSI resource. It is also used to perform CSI measurements and for training, as disclosed in [0083]: “For example, in aspects in which the wireless communication is a CSI-RS, the observed wireless communication vector, x, may include channel state information (CSI)”; [0091]: “In some aspects, the observed wireless communication vector, x, may comprise a propagation channel that the client (e.g., a UE 120) estimates based at least in part on a received CSI-RS.” The observations, which are estimates of the channel, are CSI measurements and are based on the CSI-RS. See also [0031]: “The client may transmit the compressed measurements to the server (e.g., a TRP, another UE, a base station, and/or the like)”; [0032]: “The server may decode the compressed measurements using one or more decompression operations and reconstruction operations associated with one or more neural networks.” Furthermore, as shown in FIG. 3, the measurement vector x is used to train the autoencoder, as also described in [0105]: “During the training, each of the K autoencoders is trained using the same observed wireless communication vector, x.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Namgoong cited above for the instant claim are part of, or are used to implement, the teachings or techniques discussed in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for the teachings of Namgoong in the rejection of the parent claim.
As to claim 23, the combination of Wang, Namgoong, and Timo teaches the method of claim 21, wherein the one or more capabilities of the UE to train one or more neural networks to generate information about one or more 5G signals include one or more capabilities of the UE to train [Wang, [0068]: “…As one example, the UE capability information message includes ML capabilities (e.g., any one or more of: supported ML architectures, supported number of layers, available processing power, memory/storage capabilities, available power budget, fixed-point processing vs. floating-point processing, maximum kernel size capability, computation capability), and the base station 120 selects the initial ML configuration based on a common ML capability supported by the UEs 111, 112, and 113.”] “at least a portion of the autoencoder that includes the one or more neural networks” [The “autoencoder” is taught by Namgoong and, in the rejection of the parent claim, is implemented as the neural network being trained at the UE in the combined teachings of the references. Therefore, the combination of the references and the associated rationale discussed in the rejection of the parent independent claim covers the instant limitation.]
As to claim 24, the combination of Wang, Namgoong, and Timo teaches the method of claim 21, wherein the method further includes sending one or more training configurations to one or more devices based, at least in part, on the one or more capabilities of the UE to train one or more neural networks to generate information about one or more 5G signals. [Wang, [0067]: “As illustrated, at 605, the base station 120 selects an initial ML configuration for a DNN that processes wireless network communications.” From FIG. 6, it is understood that the initial ML configuration is sent to the UEs. See also Wang, [0071]: “At 610, the base station 120 directs each UE in a set of UEs (e.g., UE 111, UE 112, UE 113) to form a (respective) DNN using the initial ML configuration. To illustrate, the base station 120 transmits an indication of an index value of a neural network table to the UEs 111, 112, and 113.” That is, the content of the initial ML configuration is transmitted via indices of the neural network table. Furthermore, as discussed in the rejection of claim 1, the ML configuration is based on the capabilities. See Wang, [0068]: “In aspects, the base station receives a UE capability information message (not illustrated) from each UE and selects the initial ML configuration based on a common UE capability between the UEs 111, 112, and 113.”]
As to claim 25, the combination of Wang, Namgoong, and Timo teaches the method of claim 21, as set forth above.
Namgoong further teaches “the method further includes causing the UE to train an encoder of the autoencoder” [As noted in the rejection of claim 1, Namgoong teaches training an autoencoder. Furthermore, as shown in FIG. 3, each autoencoder includes an “encoder.” See [0080]: “As shown, the client autoencoder 310 may include an encoder 314 configured to receive an observed wireless communication vector, x, as input and to provide a latent vector, h, as output.” The encoder parameters are also trained in the training process. See [0102] (“qϕ(h|x,z) is parameterized by the encoder of the autoencoder… KL (qϕ(h|x, z)∥pθ(h|z)) is the regularization term for the autoencoder”), where the loss function in this paragraph is used in accordance with the loss function in [0104] that is “used as a loss function in the training” in order to optimize the neural network. See [0104] (“to find the neural network parameters θ and ϕ that maximizes the ELBO…”).] “and causing another device to train a decoder of the autoencoder” [The limitation of “another device” is met by the fact that there are other UEs (clients). See, e.g., [0033]: “Federated learning is a machine learning technique that enables multiple clients to collaboratively learn neural network models”; [0062]: “At base station 110, the uplink signals from UE 120 and other UEs may be received by antennas 234.” In general, UEs, including another UE, also take part in training their local autoencoders, which include a decoder portion as shown in FIG. 3. See [0102]: “B [note: “B” is a misprint of “θ”] represents the parameters for the decoders”; [0103]: “pθ(x|h, z) is parameterized by the decoder of the autoencoder.” As noted above, the client determines the updated set of parameters (see [0104]: “In some aspects, it may be desired to find the neural network parameters θ and ϕ that maximizes the ELBO…”), which includes the parameters θ for the decoder.]
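For clarity of the record regarding the encoder and decoder training discussed above, the ELBO objective quoted from Namgoong [0102]-[0104] takes the standard conditional variational-autoencoder form. The expression below is the Examiner's summary reconstructed from the quoted terms, not a verbatim quotation of the reference:

```latex
\mathrm{ELBO}(\theta,\phi)
  = \mathbb{E}_{q_\phi(h \mid x,\, z)}\!\big[\log p_\theta(x \mid h,\, z)\big]
  - \mathrm{KL}\!\big(q_\phi(h \mid x,\, z)\,\big\|\,p_\theta(h \mid z)\big)
```

Maximizing the ELBO jointly over ϕ and θ thus trains both the encoder parameters ϕ (which parameterize qϕ(h|x,z)) and the decoder parameters θ (which parameterize pθ(x|h,z)), consistent with the training of the encoder and decoder cited above.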
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Namgoong cited above for the instant claim are part of, or are used to implement, the teachings or techniques discussed in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for the teachings of Namgoong in the rejection of the parent claim.
As to claim 26, the combination of Wang, Namgoong, and Timo teaches the method of claim 21, as set forth above.
Namgoong further teaches “wherein the autoencoder is a first autoencoder, and the method further includes causing the UE to train the first autoencoder” [As noted in the rejection of claim 1, Namgoong teaches training an autoencoder. The autoencoder, or an instance of an autoencoder on one device, can be regarded as a first autoencoder.] “and another UE to train a second autoencoder” [The limitation of “another UE” is met by the fact that there are other UEs (clients). See, e.g., [0033]: “Federated learning is a machine learning technique that enables multiple clients to collaboratively learn neural network models”; [0062]: “At base station 110, the uplink signals from UE 120 and other UEs may be received by antennas 234.” In general, UEs, including another UE, also take part in training their local autoencoders, as shown in FIG. 3. Therefore, the instant limitation is taught.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Namgoong cited above for the instant claim are part of, or are used to implement, the teachings or techniques discussed in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for the teachings of Namgoong in the rejection of the parent claim.
As to claim 27, this claim is directed to a wireless radio network base station that performs operations that are the same or substantially the same as those of claim 1. Therefore, the rejection made to claim 1 is applied to claim 27.
Furthermore, Wang teaches a wireless radio network base station, [[0080]: “Accordingly, at 640, the base station 120 receives updated ML information from at least some of the UEs, where the updated ML information can indicate any combination of ML parameters, ML architectures, and/or ML gradients.” The limitation of “wireless radio” is disclosed in e.g., [0021]: “The base stations 120 communicate with the user equipment 110 using the wireless links 130, which may be implemented as any suitable type of wireless link… The wireless links 130 may include one or more wireless links (e.g., radio links).”].
Claim 27 differs from claim 1 in reciting “update one or more parameters” rather than “train.” However, the training taught in Wang, as discussed in the rejection of claim 1, also constitutes updating one or more parameters. See, e.g., Wang [0017]: “the multiple devices each report learned parameters (e.g., weights or coefficients) generated by the ML algorithm while processing their own particular input data, and the ML controller creates an updated ML configuration by averaging the weights or coefficients to create an updated ML configuration.” Note that the learning process in Wang repeats and thus has repeated update cycles. See Wang, [0094]: “the method 700 iteratively repeats as indicated at 735. For instance, the base station receives additional updated ML information from the subset of UEs (e.g., UE 111, UE 112) and/or other UEs omitted from the subset (e.g., UE 113).”
As to claim 28, the combination of Wang, Namgoong, and Timo teaches the wireless radio network base station of claim 27, as set forth above.
Namgoong further teaches “wherein the one or more resources to be used by the UE includes one or more channel state information (CSI) resources to be used by the UE to perform CSI measurements to be used by the UE to train the autoencoder.” [[0083]: “As shown by reference number 330, for example, the server 304 may transmit, using the transceiver 328, a wireless communication to the client 302… In some aspects, the wireless communication may include a reference signal such as a channel state information reference signal (CSI-RS).” This reference signal is a CSI resource. It is also used to perform CSI measurements and for training, as disclosed in [0083]: “For example, in aspects in which the wireless communication is a CSI-RS, the observed wireless communication vector, x, may include channel state information (CSI)”; [0091]: “In some aspects, the observed wireless communication vector, x, may comprise a propagation channel that the client (e.g., a UE 120) estimates based at least in part on a received CSI-RS.” The observations, which are estimates of the channel, are CSI measurements and are based on the CSI-RS. See also [0031]: “The client may transmit the compressed measurements to the server (e.g., a TRP, another UE, a base station, and/or the like)”; [0032]: “The server may decode the compressed measurements using one or more decompression operations and reconstruction operations associated with one or more neural networks.” Furthermore, as shown in FIG. 3, the measurement vector x is used to train the autoencoder, as also described in [0105]: “During the training, each of the K autoencoders is trained using the same observed wireless communication vector, x.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Namgoong cited above for the instant claim are part of, or are used to implement, the teachings or techniques discussed in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for the teachings of Namgoong in the rejection of the parent claim.
As to claim 29, the combination of Wang, Namgoong, and Timo teaches the wireless radio network base station of claim 27, wherein the circuitry is to cause a training configuration to be sent to the UE based, at least in part, on the one or more capabilities of the UE to update one or more parameters of one or more neural networks to generate information about one or more 5G signals. [Wang, [0067]: “As illustrated, at 605, the base station 120 selects an initial ML configuration for a DNN that processes wireless network communications.” From FIG. 6, it is understood that the initial ML configuration is sent to the UEs. See also Wang, [0071]: “At 610, the base station 120 directs each UE in a set of UEs (e.g., UE 111, UE 112, UE 113) to form a (respective) DNN using the initial ML configuration. To illustrate, the base station 120 transmits an indication of an index value of a neural network table to the UEs 111, 112, and 113.” That is, the content of the initial ML configuration is transmitted via indices of the neural network table.]
As to claim 30, the combination of Wang, Namgoong, and Timo teaches the wireless radio network base station of claim 27, as set forth above.
Namgoong further teaches “wherein the circuitry is to cause the UE to update parameters of an encoder and a decoder of the autoencoder.” [As shown in FIG. 3, each autoencoder includes an “encoder.” See [0080]: “As shown, the client autoencoder 310 may include an encoder 314 configured to receive an observed wireless communication vector, x, as input and to provide a latent vector, h, as output.” The encoder parameters are also trained in the training process. See [0102] (“qϕ(h|x,z) is parameterized by the encoder of the autoencoder… KL (qϕ(h|x, z)∥pθ(h|z)) is the regularization term for the autoencoder”), where the loss function in this paragraph is used in accordance with the loss function in [0104] that is “used as a loss function in the training” in order to optimize the neural network. See [0104] (“to find the neural network parameters θ and ϕ that maximizes the ELBO…”). Regarding “decoder,” the local autoencoders also include a decoder portion, as shown in FIG. 3. See [0102]: “B [note: “B” is a misprint of “θ”] represents the parameters for the decoders”; [0103]: “pθ(x|h, z) is parameterized by the decoder of the autoencoder.” As noted above, the client determines the updated set of parameters (see [0104]: “In some aspects, it may be desired to find the neural network parameters θ and ϕ that maximizes the ELBO…”), which includes the parameters θ for the decoder.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far so as to also arrive at the further limitations of the instant dependent claim. Since the teachings of Namgoong cited above for the instant claim are part of, or are used to implement, the teachings or techniques discussed in the rejection of the parent independent claim, the motivation for doing so is the same as the one given for the teachings of Namgoong in the rejection of the parent claim.
As to claim 31, the combination of Wang, Namgoong, and Timo teaches the wireless radio network base station of claim 27, wherein the circuitry is to aggregate locally trained autoencoder models from a plurality of UE devices that includes the UE. [Wang, [0036]: “The BS federated learning manager 270 also receives updated ML information from a set of UEs and aggregates the updated ML information to determine a common ML configuration usable by a subset of UEs to form DNNs that process wireless communications.” Wang, [0082]: “In determining the common ML configuration, the base station 120 applies federated learning techniques that aggregate the updated ML information received from multiple UEs (e.g., updated ML information transmitted at 635, 636, and/or 637) without potentially exposing private data used at the UE to generate the updated ML information.” The limitation of “autoencoder” is taught by Namgoong and, in the rejection of the parent claim, is implemented as the neural network being trained at the UE in the combined teachings of the references. Therefore, the combination of the references and the associated rationale discussed in the rejection of the parent independent claim covers the instant limitation.]
As to claim 32, the combination of Wang, Namgoong, and Timo teaches the wireless radio network base station of claim 27, wherein the circuitry is to cause one or more channel state information (CSI) configurations to be sent to the UE device based, at least in part, on the one or more capabilities of the UE to update one or more parameters of one or more neural networks to generate information about one or more 5G signals. [Wang, [0045]: “the base station 120 obtains the various criteria and/or link quality indications (e.g., any one or more of: RSSI, power information, SINR, RSRP, CQI, CSI, Doppler feedback, BLER, HARQ, timing measurements, error metrics, etc.) during the communications with the UE and forwards the criteria and/or link quality indications to the core network neural network manager 312.” Wang, [0068]: “The neural network table 318 stores multiple different NN formation configuration elements generated using the training module 316. In some implementations, the neural network table includes input characteristics for each NN formation configuration element and/or NN formation configuration, where the input characteristics describe properties about the training data used to generate the NN formation configuration. For instance, the input characteristics can include a power information, SINR information, CQI, CSI, Doppler feedback, RSS, error metrics, etc.” Note that the neural network table is a training configuration (see, e.g., Wang, [0039]: “the neural network table includes input characteristics for each NN formation configuration element and/or NN formation configuration, where the input characteristics describe properties about the training data used to generate the NN formation configuration element and/or NN formation configuration.”). Thus, the neural network table, which includes CSI input characteristics, constitutes a CSI training configuration. 
Regarding the limitation of “to be sent,” the configuration/neural network table is sent to the UEs as disclosed in [0071]: “At 610, the base station 120 directs each UE in a set of UEs (e.g., UE 111, UE 112, UE 113) to form a (respective) DNN using the initial ML configuration. To illustrate, the base station 120 transmits an indication of an index value of a neural network table to the UEs 111, 112, and 113.” That is, the content of the initial ML configuration is transmitted via indices of the neural network table.]
2. Claims 7 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Namgoong and Timo, and further in view of Kumar et al. (US 2022/0377844 A1) (“Kumar”).
As to claim 7, the combination of Wang, Namgoong, and Timo teaches the processor of claim 1, wherein the circuitry is to cause […] of a wireless radio network to train [[0022]: “The base stations 120 are collectively a Radio Access Network 140 (e.g., RAN, Evolved Universal Terrestrial Radio Access Network, E-UTRAN, 5G NR RAN or NR RAN).” That is, the base stations and the UEs are part of a wireless radio network. Furthermore, the base stations and network perform the training process. See [0080]: “Accordingly, at 640, the base station 120 receives updated ML information from at least some of the UEs, where the updated ML information can indicate any combination of ML parameters, ML architectures, and/or ML gradients.” [0082]: “In determining the common ML configuration, the base station 120 applies federated learning techniques that aggregate the updated ML information received from multiple UEs…the base station 120 performs averaging that aggregates ML parameters, gradients, and so forth.” [0084]: “At 655, the base station 120 directs the subset of UEs to update the DNN formed at 615 and at 616 using the common ML configuration determined at 650.”] at least a portion of the autoencoder. [The “autoencoder” is taught by Namgoong and, in the rejection of the parent claim, is implemented as the neural network being trained at the UE in the combined teachings of the references. Therefore, the combination of the references and the associated rationale discussed in the rejection of the parent independent claim covers the instant limitation.]
The combination of references thus far does not explicitly teach the element of “an operations, administration, and maintenance (OAM) node” as being caused to perform the training.
Kumar, which generally pertains to machine learning model training (see title) in a federated learning context (see [0083]), teaches the above limitations of “an operations, administration, and maintenance (OAM) node” [[0086]: “The training may be initiated via a network entity 506 correspond to any of an OAM, a base station (e.g., including a RAN-based ML controller), a NWDAF, etc., that transmits, at 510, a model provisioning request to a model repository 508.” See also [0082] (“The training request may be based on protocols of an operations, administration, and maintenance (OAM) entity, a base station (e.g., including a RAN-based ML controller), and/or a network data analytics function (NWDAF)”); [0088] (“An ML function manager (e.g., OAM, RAN-based ML controller, etc.) may initiate the training procedure…”). [0087]: “The network entity 506 may or may not forward, at 530 b, the model training results to the model repository 508 for data aggregation, at either 532 a or 532 b. That is, the model may then be aggregated, at 532 a, via the network entity 506 and/or aggregated, at 532 b, via the model repository 508.” As shown in FIG. 5, the network entity 506 can be an OAM or base station.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far with the teachings of Kumar by implementing “an operations, administration, and maintenance (OAM) node” as being caused to perform the training. Doing so would have been obvious as a combination of prior art elements according to known methods to yield predictable results, supported by the factors in MPEP § 2143(I)(A). Here, Wang as modified thus far teaches a processor which differs from the claimed invention by the lack of the OAM component. However, OAMs, as evidenced by Kumar, are known in the art as parts of a network for performing distributed learning and as suitable for performing functions in distributed learning that can also be performed by a base station. Therefore, one of ordinary skill in the art could have combined the elements as claimed by known methods; in combination, each element merely performs the same function as it does separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely the use of another component suitable for implementing federated learning.
As to claim 13, the combination of Wang, Namgoong, and Timo teaches the system of claim 9, wherein the one or more processors are to cause the autoencoder to be trained by a user equipment (UE) device [As noted in the rejection of claim 1, the user equipment (UEs) and base station collectively train the model. See, e.g., Wang, [0078]: “At 630, at 631, and at 632, the UEs 111, 112, and 113 optionally perform a training procedure to generate the updated ML information…”; Wang, [0030]: “The UE training module 222 teaches and trains DNNs using known input data and/or by providing feedback to the ML algorithm.”] […] of a wireless radio network. [Wang, [0022]: “The base stations 120 are collectively a Radio Access Network 140 (e.g., RAN, Evolved Universal Terrestrial Radio Access Network, E-UTRAN, 5G NR RAN or NR RAN).” The “autoencoder” is taught by Namgoong and, in the rejection of the parent claim, is implemented as the neural network being trained at the UE in the combined teachings of the references. Therefore, the combination of the references and the associated rationale discussed in the rejection of the parent independent claim covers the instant limitation.], but does not teach the element of “an operations, administration, and maintenance (OAM) node.”
Kumar, which generally pertains to machine learning model training (see title) in a federated learning context (see [0083]), teaches the above limitations of “an operations, administration, and maintenance (OAM) node” [[0086]: “The training may be initiated via a network entity 506 correspond to any of an OAM, a base station (e.g., including a RAN-based ML controller), a NWDAF, etc., that transmits, at 510, a model provisioning request to a model repository 508.” See also [0082] (“The training request may be based on protocols of an operations, administration, and maintenance (OAM) entity, a base station (e.g., including a RAN-based ML controller), and/or a network data analytics function (NWDAF)”); [0088] (“An ML function manager (e.g., OAM, RAN-based ML controller, etc.) may initiate the training procedure…”). [0087]: “The network entity 506 may or may not forward, at 530 b, the model training results to the model repository 508 for data aggregation, at either 532 a or 532 b. That is, the model may then be aggregated, at 532 a, via the network entity 506 and/or aggregated, at 532 b, via the model repository 508.” As shown in FIG. 5, the network entity 506 can be an OAM or base station.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far with the teachings of Kumar by implementing “an operations, administration, and maintenance (OAM) node” as being caused to perform the training. Doing so would have been obvious as a combination of prior art elements according to known methods to yield predictable results, supported by the factors in MPEP § 2143(I)(A). Here, Wang as modified thus far teaches a system which differs from the claimed invention by the lack of the OAM component. However, OAMs, as evidenced by Kumar, are known in the art as parts of a network for performing distributed learning and as suitable for performing functions in distributed learning that can also be performed by a base station. Therefore, one of ordinary skill in the art could have combined the elements as claimed by known methods; in combination, each element merely performs the same function as it does separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely the use of another component suitable for implementing federated learning.
3. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Namgoong and Timo, and further in view of Choi et al. (US 2022/0245527 A1) (“Choi”).
As to claim 20, the combination of Wang, Namgoong, and Timo teaches the machine-readable medium of claim 15, but does not explicitly teach the further limitations of the instant dependent claim.
Choi, which pertains to federated learning systems (see title), teaches “wherein the one or more capabilities are one or more of one or more types of inputs supported by a user equipment (UE) device to perform training and one or more quantization types supported by the UE device.” [[0077]: “In some examples, the UE 115 may transmit a capability message to the server (via the base station 105) indicating a set of quantization levels supported by the UE 115.” [0085]: “In some examples, each worker 215 may transmit a capability message 235 (e.g., capability messages 235-a, 235-b, and 235-c) to the server 205 that indicates the set of quantization levels supported by the worker 215. Here, the server 205, for each worker 215, may select a quantization level from the set of quantization levels supported by the worker 215.” Note that the instant wherein clause recites an alternative expression denoted by the phrase “one or more of…and…” Therefore, when the alternative of quantization types is met by the prior art, the entire alternative expression is satisfied and the other alternative of “types of inputs” does not need to be met.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Wang with the teachings of Choi by implementing the one or more capabilities to be one or more quantization types supported by the UE device. The motivation would be to enable the server or central controller to select a quantization level that is supported, in order to implement adaptive quantization level selection for data compression. See Choi, abstract (“To support adaptive quantization level selection in federated learning, a server may cause a base station to transmit an indication of a quantization level for a user equipment (UE) to use to compress gradient data output by a machine learning model.”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following documents depict the state of the art.
Yuan et al. (US 2024/0056150 A1) teaches receiving a command to perform CSI measurements in response to receiving instructions in the form of CSI reporting configuration (see [0083]).
Ryden et al. (US 2024/0049003 A1) teaches a communications system in which an autoencoder is used for data compression, and in which the base station sends a request for a wireless device, such as a User Equipment (UE), to perform measurements on a set of Channel State Information Reference Signal (CSI-RS) beams.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YAO DAVID HUANG whose telephone number is (571)270-1764. The examiner can normally be reached Monday - Friday 9:00 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Y.D.H./Examiner, Art Unit 2124
/MIRANDA M HUANG/Supervisory Patent Examiner, Art Unit 2124