DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-17 are pending. Claims 1-11 and 14-15 have been amended, and claims 16 and 17 have been added by preliminary amendment.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 4/20/2023 and 9/18/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5 and 11-16 are rejected under 35 U.S.C. 103 as being unpatentable over US Pat. Pub. 20150372728 to Md. Saufur Rahman et al. (hereinafter Rahman) in view of US Pat. Pub. 20230084164 to Bo Chen et al. (hereinafter Chen).
Rahman teaches communications between remote radio head (RRH) compression modules and baseband unit (BBU) compression modules using a vector quantization codebook. Rahman does not teach a neural network. Fig. 3 below presents an exemplary embodiment:
[Rahman Fig. 3: media_image1.png, greyscale]
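For orientation only, the vector-quantization codebook compression Rahman describes can be sketched as nearest-codeword index mapping. Everything below (function names, codebook contents, values) is an invented illustration, not Rahman's disclosure:

```python
# Hypothetical sketch of vector-quantization fronthaul compression in the
# style of Rahman: the BBU and RRH share a codebook, and each signal vector
# is replaced by the index of its nearest codeword (the compressed form).

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def compress(vector, codebook):
    """Return the index of the nearest codeword (sent over the fronthaul)."""
    return min(range(len(codebook)),
               key=lambda i: squared_distance(vector, codebook[i]))

def decompress(index, codebook):
    """Look the index back up in the shared codebook."""
    return codebook[index]

# Toy shared codebook of 2-D codewords.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
idx = compress((0.9, 0.1), codebook)    # -> 1
recovered = decompress(idx, codebook)   # -> (1.0, 0.0)
```

Only the small integer index crosses the fronthaul link; both ends must hold the same trained codebook, which is why the codebook sharing in Rahman paras. [0055] and [0058] matters.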
Chen teaches compression techniques using a neural network, including receiving multiple neural network training configurations for channel state feedback as taught in para. [0074]. Chen teaches that the neural network compression techniques shown in Fig. 7, below, apply to multiple components of a base station:
[Chen Fig. 7: media_image2.png, greyscale]
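The claimed selection of a compression model "amongst a set of neural networks based on the compression model information" can be illustrated schematically. The model registry, keys, and stand-in functions below are invented for illustration and do not come from either reference:

```python
# Hypothetical sketch: pick one preprocessing model out of a configured set,
# keyed by received "compression model information". The two entries are
# trivial stand-ins for a quantizer and a trained NN encoder.

MODELS = {
    "vq_codebook": lambda x: [round(v, 1) for v in x],  # coarse quantizer stand-in
    "nn_encoder":  lambda x: x[: len(x) // 2],          # dimensionality-reduction stand-in
}

def select_model(compression_model_info):
    """Select the preprocessing model named by the model information."""
    return MODELS[compression_model_info["model_id"]]

preprocess = select_model({"model_id": "nn_encoder"})
compressed = preprocess([0.11, 0.52, 0.93, 0.34])   # -> [0.11, 0.52]
```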
Rahman and Chen are each in the same field of endeavor, wireless communications, and each addresses compression techniques between network components.
Regarding claim 1, Rahman teaches A radio head apparatus for use in a base station of a radio access network, (Rahman Fig. 3 remote radio head RRH 311) the radio head apparatus comprising:
at least one processor; (Rahman Fig. 2 illustrates RX processing circuitry 220 which is coupled to central processor 225)
and
at least one non-transitory memory (Rahman memory 230 within RRH system 300 as taught in para. [0052]) storing instructions that, when executed with the at least one processor, cause the radio head apparatus to:
receive compression model information from a central processing apparatus of the base station. (Rahman para. [0052] teaches that DL compression module 303 compresses symbols that are input to fronthaul interface 305 and received at the fronthaul interface 307 on the side of the RRH 311. Rahman para. [0055] and Fig. 4 teaches a vector quantizer block that maps according to a vector quantization codebook stored in internal memory of both compression modules 303 and 313 for CPRI compression. Para. [0058] teaches that the quantization codebook may be implemented by different components in the network including the BBU and RRH. Examiner interprets the sharing of the quantization codebook as receiving compression model information.)
receive a data signal from a user equipment over a radio channel; (Rahman Fig. 1 illustrates mobile devices 115 and 116 coupled to eNBs 102 and 103, which may include the BBU-RRH system 300 according to para. [0052]. Rahman Fig. 2 and para. [0050] teach that RF transceivers 210a-n may be remote radio heads (RRH) capable of receiving data signals from user equipment and implementing signal compression and decompression.)
pre-process the data signal, including compress the data signal (Rahman para. [0050] teaches that processor 225 may be a DU or BBU and RF transceivers 210 may be remote radio heads, with OFDM signal compression and decompression being implemented on one or more links therebetween. Rahman para. [0084] teaches that the processing using the vector quantization codebooks may be performed by a RRH. As shown in Fig. 3, compression is performed in the RRH (313) and decompression is performed in the BBU (316), which is “pre-processing”:
[Rahman Fig. 3: media_image1.png, greyscale]
) and
transmit the pre-processed data signal to the central processing apparatus. (Rahman para. [0053] teaches that for an uplink processing sequence, “the output of the RRH 311 may be time-domain complex OFDM symbol waveforms 312, e.g., complex Inverse Fast Fourier Transform (IFFT) signals including a cyclic prefix (CP), with one OFDM symbol per antenna port. The OFDM symbols are compressed by the UL compression module 313, and the compressed signals 314 are input to the fronthaul interface 307 that formats the signals for transmission over the fronthaul link 306. The data is then transported via the fronthaul link 306 and received at the fronthaul interface 305 on the side of the BBU 301.”)
Rahman does NOT teach preprocessing using a neural network, the neural network being selected amongst a set of neural networks based on the compression model information.
However, in the analogous art of 3GPP 5G wireless communications, Chen teaches preprocessing using a neural network, the neural network being selected amongst a set of neural networks based on the compression model information. (Chen teaches in para. [0044] that “any component in Fig. 2 may perform or direct operations of processes 700, 800 of Figs. 7 and 8.” Fig. 7 includes “receiving multiple neural network training configurations for channel state feedback,” which includes, as taught in para. [0074], a machine-learning (ML)-based channel state information (CSI) compression and feedback trained decoder model sent to a base station.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Rahman with Chen to teach neural networks applied to wireless compression. Rahman and Chen are each in the field of wireless communications and each addresses noisy wireless channel issues. One of ordinary skill in the art would have been motivated to combine Chen with Rahman because it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies, as taught in Chen para. [0005], by replacing the Gaussian white noise channel estimation model for quantization taught in para. [0088] of Rahman with a more accurate neural network based on actual channel estimation as taught in Chen Fig. 7.
Regarding claim 2, Rahman teaches A method for compressing a data signal at a radio head apparatus in a base station of a radio access network, the method comprising:
receiving compression model information from a central processing apparatus of the base station (Rahman para. [0058] teaches that the quantization codebook may be implemented by different components in the network including the BBU and RRH. Examiner interprets the sharing of the quantization codebook as receiving compression model information)
receiving a data signal from a user equipment over a radio channel (Rahman Fig. 1 illustrates mobile devices 115 and 116 coupled to eNBs 102 and 103, which may include the BBU-RRH system 300 according to para. [0052]. Rahman Fig. 2 and para. [0050] teach that RF transceivers 210a-n may be remote radio heads (RRH) capable of receiving data signals from user equipment and implementing signal compression and decompression.)
pre-processing the data signal, including compressing the data signal (Rahman para. [0050] teaches that processor 225 may be a DU or BBU and RF transceivers 210 may be remote radio heads with OFDM signal compression and decompression being implemented on one or more links there between. Rahman para. [0084] teaches that the processing using the vector quantization codebooks may be performed by a RRH. )
and
transmitting the pre-processed data signal to the central processing apparatus. (Rahman para. [0053] teaches that for an uplink processing sequence, “the output of the RRH 311 may be time-domain complex OFDM symbol waveforms 312, e.g., complex Inverse Fast Fourier Transform (IFFT) signals including a cyclic prefix (CP), with one OFDM symbol per antenna port. The OFDM symbols are compressed by the UL compression module 313, and the compressed signals 314 are input to the fronthaul interface 307 that formats the signals for transmission over the fronthaul link 306. The data is then transported via the fronthaul link 306 and received at the fronthaul interface 305 on the side of the BBU 301.”)
Rahman does NOT teach preprocessing using a neural network, the neural network being selected amongst a set of neural networks based on the compression model information.
However, in the analogous art of 3GPP 5G wireless communications, Chen teaches preprocessing using a neural network, the neural network being selected amongst a set of neural networks based on the compression model information. (Chen Figs. 6 and 7 and paras. [0074]-[0085] teach that a base station can receive multiple neural network models from UEs. Chen para. [0085] teaches that different base station antenna structures benefit from different neural networks with different frameworks. Chen para. [0074] teaches that UE-trained encoder/decoder neural network pairs for channel state information compression and feedback are sent to a base station so the base station can recover the raw channel from the channel state feedback.
[media_image3.png, greyscale]
.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Rahman with Chen to teach neural networks applied to wireless compression. Rahman and Chen are each in the field of wireless communications and each addresses noisy wireless channel issues. One of ordinary skill in the art would have been motivated to combine Chen with Rahman because it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies, as taught in Chen para. [0005], by replacing the Gaussian white noise channel estimation model for quantization taught in para. [0088] of Rahman with a more accurate neural network based on actual channel estimation as taught in Chen Fig. 7.
Regarding claim 3, Rahman does NOT teach A radio head apparatus as claimed in claim 1 wherein the radio channel is defined with channel coefficients, and the neural network comprises an input layer receiving the data signal and the channel coefficients, compression layers to compress the data signal, and quantization layers to perform quantization of the compressed data signal.
However, Chen teaches wherein the radio channel is defined with channel coefficients, and the neural network comprises an input layer receiving the data signal and the channel coefficients, compression layers to compress the data signal, and quantization layers to perform quantization of the compressed data signal. (Chen teaches that a base station includes a radio head in para. [0003]. Chen teaches in para. [0077] that as a UE moves from location to location, the channel environment changes and the decoder coefficients may change, so updated decoder coefficients are fed back to the base station. Chen para. [0054] teaches successive layers for feed-forward networks and feedback connections for compression. Fig. 5 illustrates multiple layers used to compress the data signal and perform quantization, including a convolution layer, normalization layer, and max pooling layer.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Rahman with Chen to teach neural networks applied to wireless compression. Rahman and Chen are each in the field of wireless communications and each addresses noisy wireless channel issues. One of ordinary skill in the art would have been motivated to combine Chen with Rahman because it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies, as taught in Chen para. [0005], by replacing the Gaussian white noise channel estimation model for quantization taught in para. [0088] of Rahman with a more accurate neural network based on actual channel estimation as taught in Chen Fig. 7.
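The claim-3 structure discussed above (an input layer taking the data signal and channel coefficients, compression layers, and quantization layers) can be sketched as a pipeline. The tiny "layers" below are invented placeholders for trained neural-network layers; the weights, shapes, and quantization step are assumptions:

```python
# Hedged sketch of the claim-3 pipeline: input stage -> compression stage
# -> quantization stage. None of this is from Rahman or Chen.

def input_layer(signal, channel_coeffs):
    # Concatenate signal samples with channel coefficients as the NN input.
    return list(signal) + list(channel_coeffs)

def compression_layer(x):
    # Stand-in for learned compression: pairwise averaging halves the length.
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def quantization_layer(x, step=0.5):
    # Uniform quantization onto a discrete grid.
    return [round(v / step) * step for v in x]

def encode(signal, channel_coeffs):
    return quantization_layer(compression_layer(input_layer(signal, channel_coeffs)))

out = encode([0.2, 0.4, 0.6, 0.8], [1.0, 0.9])   # -> [0.5, 0.5, 1.0]
```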
Regarding claim 4, Rahman teaches A radio head apparatus as claimed in claim 1 wherein the instructions, when executed with the at least one processor, train ... (The crossed-out language is addressed below with respect to Chen.) (Rahman teaches compression modules that share a vector quantization codebook between a BBU and RRH as illustrated in Fig. 3 and taught in paras. [0058]-[0059] using training 801 illustrated in Fig. 6.
Rahman Fig. 10 and para. [0083] teach iterations and updates of weights:
[Rahman Fig. 10: media_image4.png, greyscale]
Rahman does NOT teach training the neural network ... performing updates of weights on iterations of the neural network.
However, in the analogous art of 3GPP 5G wireless communications, Chen teaches train the neural network ... with performing updates of weights on iterations of the neural network (Chen teaches in para. [0077] that the weights for the neural network are updated and sent to the base station to reflect the changing environment. Chen Fig. 7 and para. [0080] –[0084] teach that each encoder/decoder pair receive training configurations and processors train each neural network decoder/encoder pair. The base station then aggregates the models from multiple UEs and generates new models.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Rahman with Chen to teach neural networks applied to wireless compression. Rahman and Chen are each in the field of wireless communications and each addresses noisy wireless channel issues. One of ordinary skill in the art would have been motivated to combine Chen with Rahman because it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies, as taught in Chen para. [0005], by replacing the Gaussian white noise channel estimation model for quantization taught in para. [0088] of Rahman with a more accurate neural network based on actual channel estimation as taught in Chen Fig. 7.
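The claim-4 notion of "performing updates of weights on iterations of the neural network" is, in essence, iterative training. A minimal gradient-descent sketch follows; the one-weight model, data, learning rate, and loss are arbitrary choices for illustration, not anything disclosed by the references:

```python
# Minimal illustration of weight updates on iterations: plain gradient
# descent on a one-weight model y ~= w * x with squared-error loss.

def train(samples, epochs=50, lr=0.1):
    w = 0.0                              # single weight
    for _ in range(epochs):              # each iteration updates the weight
        for x, y in samples:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad               # weight update
    return w

w = train([(1.0, 2.0), (2.0, 4.0)])      # converges toward w = 2
```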
Regarding claim 5, Rahman teaches A method for compressing a data signal as claimed in claim 2 comprising training ... (The crossed-out language is addressed below with respect to Chen.) (Rahman teaches compression modules that share a vector quantization codebook between a BBU and RRH as illustrated in Fig. 3 and taught in paras. [0058]-[0059] using training 801 illustrated in Fig. 6.)
Rahman does NOT teach training the neural network ... with weights on iterations of the neural network.
However, in the analogous art of 3GPP 5G wireless communications, Chen teaches training the neural network ... with performing updates of weights on iterations of the neural network. (Chen teaches training a neural network by a base station, see e.g. receive processor 238 and controller/processor 240 in Fig. 2, and Chen para. [0077] teaches receiving decoder weights “fed back to the base station from the UE to reflect the changing environment”. Chen paras. [0083]-[0084] teach that receive processor 238 and controller/processor 240 aggregate models from multiple UEs and generate new models for the UEs, and the process is then repeated.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Rahman with Chen to teach neural networks applied to wireless compression. Rahman and Chen are each in the field of wireless communications and each addresses noisy wireless channel issues. One of ordinary skill in the art would have been motivated to combine Chen with Rahman because it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies, as taught in Chen para. [0005], by replacing the Gaussian white noise channel estimation model for quantization taught in para. [0088] of Rahman with a more accurate neural network based on actual channel estimation as taught in Chen Fig. 7.
Regarding claim 11, Rahman teaches A method for optimizing compression of data signals in a base station of a radio access network, wherein the base station comprises at least a radio head apparatus and a central processing apparatus, (Rahman Fig. 3 illustrates a BBU coupled to a RRH) the method comprising:
receiving at the central processing apparatus, through the radio head apparatus, at least a channel information signal transmitted with a user equipment over a radio channel; (Rahman teaches in para. [0046] that processor 225 controls reception of forward and reverse channel signals via transceivers 210a-n which are forwarded through fronthaul link 306 between RRH and BBU.)
obtaining with the central processing apparatus, based on the channel information signal, compression model information (The crossed-out language is addressed below with respect to Chen) (Rahman teaches in paras. [0058]-[0059] a compression model based on the channel information signal to be used for compression in both the RRH and BBU by teaching a vector quantization codebook that is trained using the network traces (mapped to the channel information signal) received from the UE.)
and
sending with the central processing apparatus to the radio head apparatus the compression model information. (Rahman teaches in paras. [0058]-[0059] that the vector quantization codebook is shared by the central processor 225 so that the DL and UL compression modules in both BBU and RRH access the codebook: “the compression module (303 or 313), the controller/processor 225, the processing circuitry 215 or 220 for the BBU of the RF transceivers 210) may use a set of training data 801 as an input”. The training data is compression model information sent from the central processing apparatus to create the vector quantization codebooks used in both BBU and RRH modules illustrated in Fig. 3.)
Rahman does NOT teach that the compression model indicates a neural network ... amongst a set of neural networks.
However, in the analogous art of 3GPP 5G wireless communications, Chen teaches a compression model indicating a neural network ... amongst a set of neural networks. (Chen Fig. 7 includes “receiving multiple neural network training configurations for channel state feedback,” which includes, as taught in para. [0074], a machine-learning (ML)-based channel state information (CSI) compression and feedback trained decoder model sent to a base station.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Rahman with Chen to teach neural networks applied to wireless compression. Rahman and Chen are each in the field of wireless communications and each addresses noisy wireless channel issues. One of ordinary skill in the art would have been motivated to combine Chen with Rahman because it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies as taught in Chen para. [0005].
Regarding claim 12, Rahman does NOT teach A method for optimizing compression of data signals as claimed in claim 11, comprising obtaining, based on the channel information signal, a number of layers to be multiplexed on the channel, wherein the compression model information depends on the number of layers to be multiplexed on the channel.
However, in the analogous art of 3GPP 5G wireless communications, Chen teaches obtaining, based on the channel information signal, a number of layers to be multiplexed on the channel, wherein the compression model information depends on the number of layers to be multiplexed on the channel (Chen Fig. 7 includes “receiving multiple neural network training configurations for channel state feedback,” which includes, as taught in para. [0074], a machine-learning (ML)-based channel state information (CSI) compression and feedback trained decoder model sent to a base station. Chen para. [0075] specifically addresses MIMO and teaches “The auto-encoder 600 includes an encoder 610 having a neural network (NN). The encoder 610 receives the channel realization and/or interference realization as an input and compresses the channel/interference realization. The channel realization can also be referred to as a channel estimate. The interference realization can also be referred to as an interference estimate. Interference depends upon the environment and can address uplink interference or inter-stream interference in MIMO scenarios.” MIMO layers multiplexed in the channel are therefore part of the channel information signal of Chen.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Rahman with Chen to teach neural networks applied to wireless compression. Rahman and Chen are each in the field of wireless communications and each addresses noisy wireless channel issues. One of ordinary skill in the art would have been motivated to combine Chen with Rahman because it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies, as taught in Chen para. [0005], by replacing the Gaussian white noise channel estimation model for quantization taught in para. [0088] of Rahman with a more accurate neural network based on actual channel estimation as taught in Chen Fig. 7.
Regarding claim 13, Rahman teaches A method for optimizing compression of data signals as claimed in claim 12, comprising obtaining wideband information from the channel information signal, wherein the compression model information depends on the wideband information. (Rahman teaches in para. [0135] that in the construction of precoder-dependent vector quantization codebooks the precoding may be wideband.)
Rahman does NOT teach that the wideband information is from the channel information signal.
However, Chen teaches that the wideband information is from the channel information signal. (Chen teaches in para. [0029] and Fig. 1 that each base station may be a 5G node within a 3GPP cell. Chen paras. [0085]-[0086] teach that the neural network frameworks for channel state feedback (compression model information) can be indicated via 3GPP DCI, RRC, or MAC-CE wideband signaling.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Chen with Rahman to teach wideband information from the channel information signal. Rahman and Chen are each in the field of wireless communications and compression. One of ordinary skill in the art would have been motivated to combine Rahman with Chen because it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies as taught in Chen para. [0005].
Regarding claim 14, Rahman teaches A method for optimizing compression of data signals as claimed in claim 11, comprising
receiving a pre-processed data signal from the radio head apparatus (Rahman illustrates pre-processing data in Fig. 3 by compressing data in the RRH and decompressing data in the BBU:
[Rahman Fig. 3: media_image1.png, greyscale]
and
Rahman does NOT teach decoding the pre-processed data signal using the neural network associated with the compression model information sent to the radio head apparatus, wherein the neural network comprises a receiving layer for receiving the pre-processed data signal and decoding layers for decoding the pre- processed data signal.
In the analogous art of 3GPP 5G wireless communications, Chen teaches decoding the pre-processed data signal using the neural network associated with the compression model information sent to the radio head apparatus, wherein the neural network comprises a receiving layer for receiving the pre-processed data signal and decoding layers for decoding the pre- processed data signal (Chen teaches the UE sharing neural network encoder/decoders with a base station which includes a radio head apparatus (see Chen para. [0003]). Chen para. [0055] teaches “In a fully connected neural network 402, a neuron in a first layer may communicate its output to every neuron in a second layer, so that each neuron in the second layer will receive input from every neuron in the first layer.” Examiner interprets the first layer as a receiving layer because Chen Fig. 5 illustrates multiple layers and teaches in para. [0070] as “The convolution layers 556 may include one or more convolutional filters, which may be applied to the input data to generate a feature map.”)
It would have been obvious to one of ordinary skill in the art to have combined Chen with Rahman to teach using a neural network in the compression modules of Rahman. Rahman and Chen each teach compression for wireless networks. One of ordinary skill in the art would have been motivated to combine Rahman with Chen because it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies as taught in Chen para. [0005].
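The claim-14 decoder structure (a receiving layer for the pre-processed signal followed by decoding layers) can be sketched as follows. The expansion rule is an invented placeholder for trained decoder layers, not Chen's actual network:

```python
# Hedged sketch of the claim-14 decoder: receive the compressed signal,
# then expand it through decoding layers. All logic here is illustrative.

def receiving_layer(compressed):
    # Accept the pre-processed (compressed) signal from the radio head side.
    return list(compressed)

def decoding_layer(x):
    # Stand-in for a learned upsampling layer: repeat each sample.
    return [v for v in x for _ in range(2)]

def decode(compressed):
    return decoding_layer(receiving_layer(compressed))

decoded = decode([0.5, 1.0])   # -> [0.5, 0.5, 1.0, 1.0]
```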
Regarding claim 15, Rahman teaches A method for optimizing compression of data signals as claimed in claim 14, comprising training a vector quantization codebook jointly with the radio head apparatus. (Rahman teaches compression modules that share a vector quantization codebook between a BBU and RRH as illustrated in Fig. 3 and taught in paras. [0058]-[0059] using training 801 illustrated in Fig. 6.)
Rahman does NOT teach training a neural network and performing updates of weights on iterations of the neural network.
Examiner interprets training a neural network as performing updates of weights on iterations of the neural network, and will only address the latter.
However, Chen teaches performing updates of weights on iterations of the neural network (Chen Fig. 7 includes “receiving multiple neural network training configurations for channel state feedback,” which includes, as taught in para. [0074], a machine-learning (ML)-based channel state information (CSI) compression and feedback trained decoder model sent to a base station. Fig. 7 illustrates training using weights. Chen para. [0084] teaches that the base station maintains an encoder/decoder NN for each UE. The UE downloads the model maintained at the base station. Additionally, the UE trains and updates the model based on the channel/interference realizations observed at the UE.)
It would have been obvious to one of ordinary skill in the art to substitute the vector quantization codebook of Rahman with the neural network of Chen, because they are both compression algorithms, and it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies as taught in Chen para. [0005].
Regarding claim 16, Rahman combined with Chen teaches A non-transitory program storage device readable with an apparatus, tangibly embodying a program of instructions executable with the apparatus for performing the method of claim 2. (Rahman teaches in para. [0011] that the functions taught can be implemented in computer programs embodied on a “non-transitory” computer readable medium such as a rewritable optical disc or an erasable memory device. Likewise, Chen teaches in para. [0012] “a non-transitory computer-readable medium with program code recorded thereon is disclosed”.)
It would have been obvious to one of ordinary skill in the art to combine Chen with Rahman to teach a non-transitory program storage device embodying the method. Chen and Rahman are each in the field of wireless communication compression. One of ordinary skill in the art would have been motivated to combine Chen and Rahman because it is desirable to apply neural network processing to wireless communications to achieve greater efficiencies as taught in Chen para. [0005].
Claims 6-10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Rahman.
Regarding claim 6, Chen teaches A central processing apparatus for use in a base station of a radio access network, the central processing apparatus comprising:
at least one processor; (Chen Fig. 2, controller/processor 240)
and
at least one non-transitory memory (Chen Fig. 2, memory 292) storing instructions that, when executed with the at least one processor, cause the central processing apparatus to:
receive, at least a channel information signal transmitted with a user equipment over a radio channel; (Chen teaches that a base station (BS) may be referred to as a radio head in para. [0003] and further illustrates a receive processor 238 coupled to a controller/processor 240 within base station 200, and a controller/processor 290 within network controller 130 which receives signals to/from base stations/radio heads. Chen para. [0044] teaches that any component in Fig. 2 may perform channel state information learning. Chen paras. [0084]-[0085] teach that the base station receives encoder/decoder networks from the UEs, maintains a neural network encoder/decoder for each UE, maintains a model, and generates new models for multiple UEs. The neural network frameworks are used at the UE to train for channel state feedback (CSF) according to how the base station intends to use the CSF.)
obtain, based on the channel information signal, compression model information indicating a neural network to be used for compression with the radio head apparatus amongst a set of neural networks; (Chen teaches in paras. [0084]-[0085] that models are updated based on channel/interference realizations observed at the UE and that “Different encoder/decoder neural network frameworks (e.g., architectures) may be used by the UE to perform neural network training for channel state feedback (CSF), according to how the base station intends to use the CSF.... different base station antenna structures, such as 1D/2D cross polarization or 1D/2D vertical/horizontal polarization, would benefit from different neural networks with different frameworks.”)
and
send the compression model information to the radio head apparatus. (Chen teaches in para. [0044] that “any component in Fig. 2 may perform or direct operations of processes 700, 800 of Figs. 7 and 8.” Fig. 7 includes “receiving multiple neural network training configurations for channel state feedback,” which includes, as taught in para. [0074], a machine-learning (ML)-based channel state information (CSI) compression and feedback trained decoder model sent to a base station.)
Chen does NOT specifically identify a radio head as receiving the channel information signal.
However, in the analogous art of 3GPP LTE wireless communications, Rahman teaches receiving, through a radio head apparatus of the base station, at least a channel information signal. (Rahman Fig. 3 illustrates a radio head coupled to a central processor, wherein the remote radio head (RRH) receives channel information and transmits it to the BBU over fronthaul link 306:
[Rahman Fig. 3, greyscale image reproduced in the original]
Rahman teaches in para. [0058] that the vector quantization codebook that is shared between the RRH and the BBU may be trained using training data, and implemented by compression modules 303/313 (both RRH and BBU) and “The training data can also be obtained from real/live network traces (e.g., signals received from UEs or over the backhaul network)”. Examiner interprets “channel information signal” as claimed to encompass “real/live network traces” received from UEs because network traces are channel signals capable of training the vector quantization model codebook.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Rahman with Chen to teach receiving channel signals at a radio head. Each of Chen and Rahman is in the field of wireless communications and 3GPP communications. One of ordinary skill in the art would have been motivated to combine Rahman with Chen to increase the data rate between radio heads and baseband units, as taught in Rahman para. [0004].
Regarding claim 7, Chen teaches A central processing apparatus as claimed in claim 6, wherein the instructions, when executed with the at least one processor, cause the central processing apparatus to obtain, based on the channel information signal, a number of layers to be multiplexed on the channel, wherein the compression model information depends on the number of layers to be multiplexed on the channel. (Chen para. [0084] teaches that the base station receives encoder/decoder neural networks from a UE. Chen para. [0076] teaches that the decoder 620 passes received information through a fully connected layer and a series of convolutional layers to recover the channel state. The convolutional layers are illustrated in Fig. 5, element 556; as shown in Fig. 5 and taught in para. [0069], multiple different types of layers, based on connectivity and weight sharing, form a "deep convolutional network." The layers shown in Fig. 5 are multiplexed into a single encoder/decoder for channel state feedback.)
Regarding claim 8, Chen teaches A central processing apparatus as claimed in claim 7, wherein the instructions, when executed with the at least one processor, cause the central processing apparatus to obtain wideband information from the channel information signal, wherein the compression model information depends on the wideband information. (Chen para. [0029] and Fig. 1 illustrate that each base station may be a 5G node within a 3GPP cell. Chen paras. [0085]-[0086] teach that the neural network frameworks for channel state feedback (compression model information) can be indicated via 3GPP DCI, RRC, or MAC-CE wideband signaling.)
Regarding claim 9, Chen teaches A central processing apparatus as claimed in claim 6 wherein the instructions, when executed with the at least one processor,
cause the central processing apparatus to receive a pre-processed data signal from the radio head apparatus and decode the pre-processed data signal using a neural network selected amongst the set of neural networks based on the compression model information sent to the radio head apparatus, (Chen teaches in para. [0044] that “any component in Fig. 2 may perform or direct operations of processes 700, 800 of Figs. 7 and 8.” Base station 110 includes a radio head per para. [0003]. Fig. 7 includes “receiving multiple neural network training configurations for channel state feedback” which includes as taught in para. [0074] a machine-learning (ML)-based channel state information (CSI) compression and feedback trained decoder model sent to a base station.)
wherein the neural network comprises a receiving layer for receiving the pre-processed data signal and decoding layers for decoding of the pre-processed data signal. (Chen teaches the UE sharing neural network encoder/decoders with a base station, which includes a radio head apparatus. Chen para. [0055] teaches "In a fully connected neural network 402, a neuron in a first layer may communicate its output to every neuron in a second layer, so that each neuron in the second layer will receive input from every neuron in the first layer.")
Regarding claim 10, Chen teaches A central processing apparatus as claimed in claim 9, wherein the instructions, when executed with the at least one processor, cause the central processing apparatus to train the neural network, with performing updates of weights on iterations of the neural network. (Chen teaches in para. [0044] that "any component in Fig. 2 may perform or direct operations of processes 700, 800 of Figs. 7 and 8." Fig. 7 includes "receiving multiple neural network training configurations for channel state feedback," which includes, as taught in para. [0074], a machine-learning (ML)-based channel state information (CSI) compression and feedback trained decoder model sent to a base station. Fig. 7 illustrates training using weights. Chen para. [0084] teaches that the base station maintains an encoder/decoder NN for each UE. The UE downloads the model maintained at the base station. Additionally, the UE trains and updates the model based on the channel/interference realizations observed at the UE.)
Chen does not teach training jointly with the radio head apparatus.
However, Rahman teaches training jointly with the radio head apparatus. Rahman Fig. 3 (above) illustrates compression/decompression modules that are trained jointly, as taught in Rahman paras. [0058]-[0059] and Fig. 6. Training 801 creates vector quantization codebook 803, and "the codebook can be stored, configured or hardcoded in an internal memory of the compression/decompression modules (303, 309, 313, 316) in both baseband unit and radio head."
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Rahman with Chen to teach training the neural network jointly with the radio head apparatus. Each of Chen and Rahman is in the field of wireless communications and 3GPP communications. One of ordinary skill in the art would have been motivated to combine Rahman with Chen to increase the data rate between radio heads and baseband units, as taught in Rahman para. [0004].
Regarding claim 17, Chen combined with Rahman teaches A non-transitory program storage device readable with an apparatus, tangibly embodying a program of instructions executable with the apparatus for performing the method of claim 11. (Chen teaches in para. [0012] that "a non-transitory computer-readable medium with program code recorded thereon is disclosed." Likewise, Rahman teaches in para. [0011] that the functions taught can be implemented in computer programs embodied on a "non-transitory" computer-readable medium, such as a rewritable optical disc or an erasable memory device.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen with Rahman to teach a non-transitory program storage device embodying instructions for performing the method of claim 11. Each of Chen and Rahman is in the field of wireless communications and 3GPP communications. One of ordinary skill in the art would have been motivated to combine Rahman with Chen to increase the data rate between radio heads and baseband units, as taught in Rahman para. [0004].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARGARET MARIE ANDERSON whose telephone number is (703)756-1068. The examiner can normally be reached M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, CHARLES JIANG can be reached at 571-270-7191. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARGARET MARIE ANDERSON/ Examiner, Art Unit 2412 /CHARLES C JIANG/Supervisory Patent Examiner, Art Unit 2412