DETAILED ACTION
This final Office action is in response to the amendment and remarks filed on 12/09/2025 for application 17/973,781.
Claims 1-2, 5-7, 9-10, 13-15, and 18 have been amended.
Accordingly, claims 1-20 remain pending in the application. Claims 1, 5, 9, and 13 are independent claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 12/09/2025 has been entered.
Applicant’s amendment addressing the specification objections and claim objections has been considered, and the objections set forth in the Office action mailed 10/02/2025 are accordingly withdrawn.
Claim Objections
Claims 1 and 9 are objected to because of the following informalities:
In claims 1 and 9, “adjusting, by the first device, based on the communication system data” and “selecting, by the first device, a fifth neural network and a sixth neural network from a second set based on the communication system data” should read “adjusting, by the first device, based on communication system data” and “selecting, by the first device, a fifth neural network and a sixth neural network from a second set based on communication system data”, respectively. The prior limitation “selecting, by the first device, the third neural network and the fourth neural network from a first set based on communication system data” is recited as a separate selection within an alternative expression, and therefore does not provide antecedent basis for “the communication system data”.
Appropriate corrections are required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 5-8, 13-16, 18, and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 5, it recites the limitation “receiving, by the second device, decoder model description information about a fourth neural network from a first device, the decoder model description information being provided as control-plane metadata separate from payload signals”. However, the originally filed specification does not appear to provide support for decoder model description information being provided as “control-plane metadata”, or appear to provide support for said information being provided as “separate from payload signals”.
Regarding claim 13, it has the same deficiencies as found in claim 5 above. Consequently, it is likewise rejected under 35 U.S.C. 112(a) for failing to comply with the written description requirement.
Regarding claims 6-8, 14-16, 18, and 20, they inherit the deficiencies of their parent claims. Consequently, they are likewise rejected under 35 U.S.C. 112(a) for failing to comply with the written description requirement.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-4, 9-12, 17, and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, it recites the limitations “selecting, by the first device, the third neural network and the fourth neural network from a first set based on communication system data, wherein the first set comprises a plurality of neural networks; or…selecting, by the first device, a fifth neural network and a sixth neural network from a second set based on the communication system data”. However, these limitations are recited as separate possible selections within an alternative expression – as such, there is insufficient antecedent basis for a “second set” in the selection in which it is recited, because a “first” set is not previously defined before recitation of the alternative expression. Consequently, one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
For purposes of examination, the limitation “selecting, by the first device, a fifth neural network and a sixth neural network from a second set based on the communication system data” is interpreted as “selecting, by the first device, a fifth neural network and a sixth neural network from a first set based on the communication system data”.
Regarding claim 9, it has the same deficiencies as those found in claim 1 above. Consequently, it is rejected for the same reasons and is likewise interpreted as detailed above.
Regarding claims 2-4, 10-12, 17, and 19, they inherit the deficiencies of their parent claims. Consequently, they are also rejected under 35 U.S.C. 112(b) as being indefinite for depending on an indefinite parent claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5-7, 9-10, 13-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. ("Convolutional Neural Network-Based Multiple-Rate Compressive Sensing for Massive MIMO CSI Feedback: Design, Simulation, and Analysis", published by IEEE on 28 January 2020), hereinafter Guo, in view of Pezeshki et al. (Pub. No. US 20210194548 A1, “Neural Network and Antenna Configuration Indication”, effectively filed 12/20/2019), hereinafter Pezeshki.
Regarding claim 1, Guo teaches A neural network adjustment method (“Massive multiple-input multiple-output (MIMO) is a promising technology to increase link capacity and energy efficiency. However, these benefits are based on available channel state information (CSI) at the base station (BS). Therefore, user equipment (UE) needs to keep on feeding CSI back to the BS, thereby consuming precious bandwidth resource. Large-scale antennas at the BS for massive MIMO seriously increase this overhead. In this paper, we propose a multiple-rate compressive sensing neural network framework to compress and quantize the CSI” [Guo Abstract]), relating to a first neural network and a second neural network, wherein the first neural network is applied to a first device side, the second neural network is applied to a second device side, (“In the FDD system, UE estimates the downlink channel and then feeds this information (CSI) to the BS. With the downlink CSI, the BS calculates precoding vector vn ∈ CNt×1 via singular value decomposition” [Guo page 3 Massive MIMO-OFDM System]; “In this section, we describe the proposed framework, which mainly includes neural network architecture…The CsiNet in [21], an encoder-decoder structure, has demonstrated promising performance in CSI compression and reconstruction…The proposed architecture of neural network, CsiNet+, as shown in Fig. 2, is based on the CsiNet with two main modifications: convolutional kernel size and refinement process” [Guo pages 3-4 CSI Compression Based on Convolutional Neural Networks]; see Fig. 2 including UE(Encoder) and BS(Decoder) – “Fig. 2. Overview of CsiNet+ architecture. The left module is an encoder at the UE, compressing the CSI matrix. 
Meanwhile, the right module is a decoder at the BS, reconstructing CSI matrix from the received compressive measurements” [Guo page 4]; In a frequency-division duplexing (FDD) system wherein a user equipment (UE) (i.e., first device side) feeds channel state information (CSI) to a base station (BS) (i.e., second device side), the convolutional neural network-based encoder-decoder structure of CsiNet+ applies an encoder (and associated neural network structure) to the UE, and a decoder (and associated neural network structure) to the BS) and the method comprises:
determining, by a first device, a third neural network and a fourth neural network, wherein the third neural network and the fourth neural network respectively correspond to the first neural network (“The CSI feedback in massive MIMO systems should be drastically compressed while the coherence time is short and vice versa. Therefore, the compression rate (CR) must be adjusted according to the environments” [Guo page 2 Introduction]; “Although CSI compression can reduce feedback overhead, accuracy of reconstructed CSI at the BS is sacrificed, which may adversely affect MIMO communication network performance. Hence, communication systems sometime need adjust the CR according to the environments, as mentioned in Section I. In contrast with the traditional iterative algorithms that can work with different CRs, the existing DL-based methods can only compress CSI matrix with a fixed CR and have to train and store a different neural network for a different CR, thereby occupying large storage space at the UE. In this part, we focus on a multiple-rate framework, which can compress the CSI matrix at different CRs to save the storage space at the UE” [Guo page 6 Multiple-Rate CSI Feedback]; “In general, highly compressed measurement vectors can be generated from the low ones, as in Fig. 5. For instance, we can first compress the CSI matrix by fourfold and then continue to compress the compressed CSI matrix by twofold to obtain eightfold compression. This method decreases the FC layer parameter number from 2048×256 to 512×256 for eightfold compression compared with compressing from the original CSI matrix. Meanwhile, the first two convolutional layers, which are used to extract features, are also shared by different compression encoders, thereby further decreasing the number of encoder parameters” [Guo page 6 Serial Multiple-Rate Compression Framework: SM-CsiNet+]; see Fig. 
5 including Encoder(UE), wherein different encoders for different compression rates (CR:4, CR:8, CR:16, CR:32) share initial convolutional layers (CR:4 Encoder) and then further comprise a number of consecutive FC layers (512*1, 256*1, 128*1, 64*1) corresponding to their compression rate – “Fig. 5. Serial multiple-rate compression framework” [Guo page 6]; The different compression encoders (i.e., plurality of neural networks including, e.g., a third neural network) correspond to the overall UE Encoder architecture (i.e., first neural network) that produces the compressed CSI matrices) and the second neural network, (“In the practical environments, the UE selects a suitable CR, and then the encoder compresses the CSI matrix to generate the corresponding measurement vectors. Once the BS receives these measurement vectors, it decompresses them using the corresponding decoder network” [Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+]; see Fig. 5 including Decoder(BS), including different decoders for different compression rates (CR:4 Decoder, CR:8 Decoder, CR:16 Decoder, CR:32 Decoder) [Guo page 6]; The different compression encoders (e.g., third neural network) also correspond to respective decoders (e.g., fourth neural network) based on compression rate, that correspond to the overall BS Decoder architecture (i.e., second neural network) that reconstructs CSI) wherein the first neural network and the second neural network are a pair of neural networks to perform first signal processing and second signal processing, (see Fig. 2 including UE(Encoder) and BS(Decoder) – “Fig. 2. Overview of CsiNet+ architecture. The left module is an encoder at the UE, compressing the CSI matrix. Meanwhile, the right module is a decoder at the BS, reconstructing CSI matrix from the received compressive measurements” [Guo page 4]; The UE encoder (i.e. 
first neural network) and BS decoder (i.e., second neural network) work in tandem (i.e., are paired) within the overall CsiNet+ architecture to transmit compressed CSI (i.e., perform first signal processing) and reconstruct received CSI (i.e., perform second signal processing)) and the third neural network and the fourth neural network are a pair of neural networks to perform the first signal processing and the second signal processing (see Fig. 5 including Encoder(UE) with different encoders for different compression rates (CR:4, CR:8, CR:16, CR:32) and Decoder(BS), with different decoders for different compression rates (CR:4 Decoder, CR:8 Decoder, CR:16 Decoder, CR:32 Decoder) [Guo page 6]; The different encoders (e.g., third neural network) also correspond to (i.e., are paired to) respective decoders (e.g., fourth neural network) based on their shared compression rate); and
sending, by the first device, information about the fourth neural network to a second device, (“In the FDD system, UE estimates the downlink channel and then feeds this information (CSI) to the BS” [Guo page 3 Massive MIMO-OFDM System]; “Once the channel matrix H in the angular-delay domain is estimated at the UE, compression, quantization, and entropy encoding1 will be used in turn to reduce CSI feedback overhead. The compressed CSI matrix can be expressed as follows:
s = Q(fcom(H, θ1)),
where fcom(·) and Q(·) denote the compression and quantization processes, respectively, and θ1 represents parameters of the compression module (encoder). Once the BS receives the compressed CSI matrix, dequantization and decompression will be used to recover the channel matrix in the angular-delay domain,
Ĥ = fdec(D(s), θ2), where D(·) and fdec(·) represent the dequantization and decompression functions, respectively, and θ2 denotes the parameters in the decompression module (decoder)” [Guo page 3 CSI Feedback Process]; “In the practical environments, the UE selects a suitable CR, and then the encoder compresses the CSI matrix to generate the corresponding measurement vectors. Once the BS receives these measurement vectors, it decompresses them using the corresponding decoder network” [Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+]; The UE (i.e., first device) sends the CSI matrix (i.e., information), which includes parameters of the respective compression encoder, to the BS (i.e., second device), wherein based on said information, the BS can recover the CSI matrix to obtain parameters of the corresponding decoder (e.g., fourth neural network)) wherein the first neural network or the third neural network is used by the first device to perform first signal processing ([Guo page 6 Multiple-Rate CSI Feedback] and [Guo page 6 Serial Multiple-Rate Compression Framework: SM-CsiNet+] and Fig. 5 including Encoder(UE) [Guo page 6] as detailed above; The UE encoder (i.e., first neural network) architecture, including its different compression encoders (e.g., third neural network), is used to compress CSI (i.e., perform first signal processing) for transmission to the BS), the second neural network or the fourth neural network is used by the second device to perform second signal processing, ([Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+] and Fig. 5 including Decoder(BS) [Guo page 6] as detailed above; The BS decoder (i.e., second neural network) architecture, including its decoders at differing compression rates (e.g., fourth neural network), is used to decompress measurement vectors (i.e., perform second signal processing)) and the second signal processing corresponds to the first signal processing ([Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+], as detailed above; The BS decoder decompresses measurement vectors (i.e., second signal processing) to reconstruct the CSI, wherein the measurement vectors are the output of (i.e., correspond to) the CSI compression (i.e., first signal processing) performed by the UE encoder).
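The examiner additionally notes, for illustrative purposes only, that the serial multiple-rate compression and quantization pipeline quoted from Guo above may be sketched as follows. This is a hypothetical simplification: the dimensions (2048 → 512 → 256 → 128) follow Fig. 5 of Guo, but the untrained random projections, the 4-bit quantizer, and all identifiers are illustrative stand-ins, not Guo's trained networks.

```python
# Illustrative sketch of serial multiple-rate CSI compression with
# quantized feedback; all layers and parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def fc(x, out_dim):
    """Stand-in for a trained fully connected layer: a random projection."""
    w = rng.standard_normal((out_dim, x.size)) / np.sqrt(x.size)
    return w @ x

def quantize(x, bits=4):
    """Uniform scalar quantizer Q(.) over the observed range of x."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    q = np.round((x - lo) / (hi - lo) * levels)
    return q, lo, hi, levels

def dequantize(q, lo, hi, levels):
    """Dequantizer D(.), inverting the uniform mapping."""
    return q / levels * (hi - lo) + lo

# "CSI matrix" flattened to a 2048-element vector (e.g., a 32x32 complex
# channel split into real and imaginary parts).
h = rng.standard_normal(2048)

# Serial encoder: each stage halves the dimension, so each later stage
# further compresses the previous compressed vector rather than the
# original CSI matrix.
s4 = fc(h, 512)    # CR 4
s8 = fc(s4, 256)   # CR 8, computed from s4 (serial compression)
s16 = fc(s8, 128)  # CR 16

# UE side: select a CR, quantize, and "feed back" the measurement vector.
q, lo, hi, levels = quantize(s8)

# BS side: dequantize; the matching CR-8 decoder would then reconstruct CSI.
s8_hat = dequantize(q, lo, hi, levels)
print(s8_hat.shape)
```

Because the CR-8 vector is computed from the CR-4 vector rather than from the original CSI matrix, the second stage needs only a 512×256 projection instead of 2048×256, consistent with the parameter savings quoted from Guo above.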
However, Guo does not expressly teach wherein determining, by the first device, the third neural network and the fourth neural network further comprises: updating, by the first device, the first neural network and the second neural network to respectively obtain the third neural network and the fourth neural network, wherein updating, by the first device, the first neural network and the second neural network to respectively obtain the third neural network and the fourth neural network further comprises: selecting, by the first device, the third neural network and the fourth neural network from a first set based on communication system data, wherein the first set comprises a plurality of neural networks; or adjusting, by the first device, based on the communication system data, network weights corresponding to the first neural network and the second neural network, to obtain the third neural network and the fourth neural network; or selecting, by the first device, a fifth neural network and a sixth neural network from a second set based on the communication system data, and then adjusting, based on the communication system data, network weights corresponding to the fifth neural network and the sixth neural network, to obtain the third neural network and the fourth neural network.
In the same field of endeavor, Pezeshki teaches a wireless communications system for transmitting channel state information utilizing paired encoder-decoder networks split across devices (“The present application describes mechanisms for indicating antenna configuration information and neural network information between devices. According to embodiments of the present disclosure two wireless communications devices may use a machine learning/neural network framework in order to encode for transmission across a channel and decode upon receipt. For example, the two wireless devices may jointly train an autoencoder. The autoencoder may be split between the transmitting and receiving sides—i.e., the encoder and decoder of the autoencoder may be implemented in different devices…For example, a first wireless communications device may establish communication with a second wireless communications device. In an example, the first wireless communications device may be a user equipment (UE) and the second wireless communications device may be a base station (BS, also referred to as evolved node Bs or next generation eNBs for example);” [Pezeshki ¶ 0030-0031]; “For example, a BS 105 may transmit cell specific reference signals (CRSs) and/or channel state information—reference signals (CSI-RSs) to enable a UE 115 to estimate a DL channel.” [Pezeshki ¶ 0046]) wherein determining, by the first device, the third neural network and the fourth neural network further comprises:
updating, by the first device, the first neural network and the second neural network to respectively obtain the third neural network and the fourth neural network, (“The configuration 200 is exemplary; different numbers of layers may be included in either encoder or decoder, as well as additional layers and/or functions, and/or algorithms to implement different types of neural networks, while implementing embodiments of the present disclosure” [Pezeshki ¶ 0066]; “At action 512, the first wireless communications device 502 (described in FIG. 5 as the source of transmissions using a neural network, and therefore comprising the encoder such as encoder 202 discussed with respect to FIG. 2) determines a neural network to use in autoencoder communications with the second wireless communications device 504. As further discussed with respect to FIG. 3 (where the first wireless communications device 502 is a UE) or FIG. 4 (where the first wireless communications device 502 is a BS), this may be performed by a neural network communication module 308/408 (respectively)” [Pezeshki ¶ 0087]; Wherein an encoder hosted on a first device (i.e., first neural network) and decoder hosted on a second device (i.e., second neural network) jointly form an autoencoder, the first device, via a neural network communications module, may further determine particular configurations of the encoder/decoder of the autoencoder (i.e., third/fourth neural networks))
wherein updating, by the first device, the first neural network and the second neural network to respectively obtain the third neural network and the fourth neural network further comprises: selecting, by the first device, the third neural network and the fourth neural network from a first set based on communication system data, wherein the first set comprises a plurality of neural networks (“In some embodiments, this involves the first wireless communications device 502 selecting an AI module from among possibly a plurality of different AI modules configured at the first wireless communications device 502 (e.g., pre-installed at the device or received via one or more configuration updates from a network, and/or some combination thereof). For example, the first wireless communications device 502 may be pre-configured with a set of AI modules that each have different neural network parameters, such as number of layers, number of nodes per layer, machine learning algorithm (overall and/or per layer, etc.), and so forth. The possible AI modules may be pre-configured at the second wireless communications device 504 (or may have been the source of the preconfiguring at the first wireless communications device 502 where, for example, the second wireless communications device 504 is a BS 105)” [Pezeshki ¶ 0088]).
The examiner notes that the above limitations, as mapped to Pezeshki, comprise a possible selection within a recited alternative expression (see MPEP § 2143.03), and therefore cover the scope of the claim. For the sake of completeness, it is noted that Pezeshki may be further interpreted to teach additional limitations of adjusting, by the first device, network weights corresponding to the first neural network and the second neural network to obtain the third neural network and the fourth neural network (“By sending the neural network information to the second wireless communications device, the first and second wireless communications devices may engage in training the neural network for subsequent use. This may be a collaborative and iterative process to achieve the same output as the input training sequence(s). For example, the first wireless communications device may modify one or more training weights and/or biases for one or more nodes in one or more layers of the neural network at the first wireless communications device” [Pezeshki ¶ 0059]; “At block 614, the first wireless communications device trains the neural network selected at block 606 or determined at block 610 in cooperation with (i.e., jointly) with the second wireless communications device…. At block 616, the first wireless communications device may request one or more transmission resources in order to transmit one or more trained neural network weights to the second wireless communications device for storage in association with antenna configuration information of the first wireless communications device” [Pezeshki ¶ 0113-0114]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated wherein determining, by the first device, the third neural network and the fourth neural network further comprises: updating, by the first device, the first neural network and the second neural network to respectively obtain the third neural network and the fourth neural network, wherein updating, by the first device, the first neural network and the second neural network to respectively obtain the third neural network and the fourth neural network further comprises: selecting, by the first device, the third neural network and the fourth neural network from a first set based on communication system data, wherein the first set comprises a plurality of neural networks as taught by Pezeshki into Guo because they are both directed towards a wireless communications system for transmitting channel state information utilizing paired encoder-decoder networks split across devices. Incorporating the teachings of Pezeshki would allow for improved communications efficiency between devices due to the combinable transmission of relevant neural network parameters together with antenna configuration information [Pezeshki ¶ 0091].
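The examiner additionally notes, for illustrative purposes only, that the incorporated limitation of selecting a paired encoder and decoder from a first set of neural networks based on communication system data may be sketched as follows. The coherence-time thresholds and all identifiers are hypothetical: Guo observes only that CSI should be compressed more heavily when the coherence time is short, and Pezeshki describes selecting among pre-configured AI modules.

```python
# Illustrative sketch of selecting a paired encoder/decoder (the "third"
# and "fourth" neural networks) from a pre-configured "first set" based
# on communication system data; identifiers are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkPair:
    encoder_id: str
    decoder_id: str
    compression_rate: int

# "First set" of pre-configured neural network pairs, keyed by CR.
FIRST_SET = {
    4: NetworkPair("enc_cr4", "dec_cr4", 4),
    8: NetworkPair("enc_cr8", "dec_cr8", 8),
    16: NetworkPair("enc_cr16", "dec_cr16", 16),
    32: NetworkPair("enc_cr32", "dec_cr32", 32),
}

def select_pair(coherence_time_ms: float) -> NetworkPair:
    """Shorter coherence time -> heavier compression (higher CR)."""
    if coherence_time_ms < 1.0:
        cr = 32
    elif coherence_time_ms < 5.0:
        cr = 16
    elif coherence_time_ms < 20.0:
        cr = 8
    else:
        cr = 4
    return FIRST_SET[cr]

pair = select_pair(3.0)
print(pair.encoder_id, pair.decoder_id)
```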
Regarding claim 2, the combination of Guo and Pezeshki discloses the method of parent claim 1, and Guo further teaches wherein that the first neural network or the third neural network is used by the first device to perform first signal processing further comprises:
the first neural network is used by the first device to process a first signal to obtain a second signal; (see Fig. 2 including UE(Encoder) and BS(Decoder) – “Fig. 2. Overview of CsiNet+ architecture. The left module is an encoder at the UE, compressing the CSI matrix. Meanwhile, the right module is a decoder at the BS, reconstructing CSI matrix from the received compressive measurements” [Guo page 4]; see Fig. 5 including Encoder(UE) [Guo page 6] as detailed above; The UE (i.e., first device) encoder (i.e., first neural network) compresses CSI (i.e., processes a first signal) to obtain measurement vectors to be transmitted to the BS (i.e., obtains a second signal)), and
the third neural network is used by the first device to process a third signal to obtain a fourth signal; ([Guo page 6 Multiple-Rate CSI Feedback] and [Guo page 6 Serial Multiple-Rate Compression Framework: SM-CsiNet+] and Fig. 5 including Encoder(UE) [Guo page 6] as detailed above; The different compression encoders (e.g., third neural network) of the UE encoder are used to consecutively compress CSI (i.e., process a third signal) at a respective compression rate for transmission to the BS (i.e., obtain a fourth signal)) and that the second neural network or the fourth neural network is used by the second device to perform second signal processing further comprises:
the second neural network is used by the second device to process the second signal to obtain a fifth signal; (see Fig. 2 including UE(Encoder) and BS(Decoder) – “Fig. 2. Overview of CsiNet+ architecture. The left module is an encoder at the UE, compressing the CSI matrix. Meanwhile, the right module is a decoder at the BS, reconstructing CSI matrix from the received compressive measurements” [Guo page 4]; see Fig. 5 including Encoder(UE) [Guo page 6] as detailed above; The BS (i.e., second device) decoder (i.e., second neural network) is used to decompress the received measurement vectors (i.e., process the second signal) to produce a reconstructed CSI (i.e., obtain a fifth signal)) and
the fourth neural network is used by the second device to process the fourth signal to obtain a sixth signal ([Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+] and Fig. 5 including Decoder(BS) [Guo page 6] as detailed above; The decoders at differing compression rates (e.g., fourth neural network) of the BS (i.e., second device) decoder are used to decompress the measurement vector received from the corresponding compression encoder (i.e., process the fourth signal to obtain the sixth signal)).
The examiner notes that the above limitations, as mapped to Guo, comprise a possible selection within a recited alternative expression (see MPEP § 2143.03), and therefore cover the scope of the claim.
Regarding claim 5, Guo teaches A neural network adjustment method ([Guo Abstract] as detailed in claim 1 above), relating to a first neural network and a second neural network, wherein the first neural network is applied to a first device side, the second neural network is applied to a second device side, ([Guo page 3 Massive MIMO-OFDM System] and [Guo pages 3-4 CSI Compression Based on Convolutional Neural Networks] and Fig. 2 including UE(Encoder) and BS(Decoder) [Guo page 4] as detailed in claim 1 above) the method comprising:
receiving, by the second device, information about a fourth neural network from a first device, wherein the fourth neural network is a neural network corresponding to the second neural network, ([Guo page 3 Massive MIMO-OFDM System] and [Guo page 3 CSI Feedback Process] and [Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+] as detailed in claim 1 above; The BS (i.e., second device) receives the compressed CSI matrix (i.e., information), which includes parameters of the respective compression encoder, from the UE (i.e., first device), wherein based on said information, the BS can recover the CSI matrix to obtain parameters of the corresponding decoder (e.g., fourth neural network)), wherein the first neural network and the second neural network are a pair of neural networks to perform first signal processing and second signal processing, (see Fig. 2 including UE(Encoder) and BS(Decoder) – “Fig. 2. Overview of CsiNet+ architecture. The left module is an encoder at the UE, compressing the CSI matrix. Meanwhile, the right module is a decoder at the BS, reconstructing CSI matrix from the received compressive measurements” [Guo page 4]; The UE encoder (i.e., first neural network) and BS decoder (i.e., second neural network) work in tandem (i.e., are paired) within the overall CsiNet+ architecture to transmit compressed CSI (i.e., perform first signal processing) and reconstruct received CSI (i.e., perform second signal processing)) and a third neural network and the fourth neural network are a pair of neural networks to perform the first signal processing and the second signal processing; (see Fig.
5 including Encoder(UE) with different encoders for different compression rates (CR:4, CR:8, CR:16, CR:32) and Decoder(BS), with different decoders for different compression rates (CR:4 Decoder, CR:8 Decoder, CR:16 Decoder, CR:32 Decoder) [Guo page 6]; The different encoders (e.g., third neural network) also correspond to (i.e., are paired to) respective decoders (e.g., fourth neural network) based on their shared compression rate)
determining, by the second device, the fourth neural network based on the information about the fourth neural network ([Guo page 3 Massive MIMO-OFDM System] and [Guo page 3 CSI Feedback Process] and [Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+] as detailed in claim 1 above; Based on the received information, the BS can recover the CSI matrix to obtain parameters of the corresponding decoder (e.g., determine the fourth neural network)).
However, Guo does not expressly teach receiving decoder model description information about a neural network from a first device, the decoder model description information being provided as control-plane metadata separate from payload signals, or determining the neural network from among a plurality of pre-provisioned neural networks at the second device, based on the decoder model description information about the fourth neural network.
In the same field of endeavor, Pezeshki teaches a wireless communications system for transmitting channel state information utilizing paired encoder-decoder networks split across devices (“The present application describes mechanisms for indicating antenna configuration information and neural network information between devices. According to embodiments of the present disclosure two wireless communications devices may use a machine learning/neural network framework in order to encode for transmission across a channel and decode upon receipt. For example, the two wireless devices may jointly train an autoencoder. The autoencoder may be split between the transmitting and receiving sides—i.e., the encoder and decoder of the autoencoder may be implemented in different devices…For example, a first wireless communications device may establish communication with a second wireless communications device. In an example, the first wireless communications device may be a user equipment (UE) and the second wireless communications device may be a base station (BS, also referred to as evolved node Bs or next generation eNBs for example);” [Pezeshki ¶ 0030-0031]; “For example, a BS 105 may transmit cell specific reference signals (CRSs) and/or channel state information—reference signals (CSI-RSs) to enable a UE 115 to estimate a DL channel.” [Pezeshki ¶ 0046]) that receiv[es] decoder model description information about a neural network from a first device, the decoder model description information being provided as control-plane metadata separate from payload signals, (“At action 512, the first wireless communications device 502 (described in FIG. 5 as the source of transmissions using a neural network, and therefore comprising the encoder such as encoder 202 discussed with respect to FIG. 2) determines a neural network to use in autoencoder communications with the second wireless communications device 504. As further discussed with respect to FIG. 
3 (where the first wireless communications device 502 is a UE) or FIG. 4 (where the first wireless communications device 502 is a BS), this may be performed by a neural network communication module 308/408 (respectively)…In some embodiments, this involves the first wireless communications device 502 selecting an AI module from among possibly a plurality of different AI modules configured at the first wireless communications device 502 (e.g., pre-installed at the device or received via one or more configuration updates from a network, and/or some combination thereof)… The possible AI modules may be pre-configured at the second wireless communications device 504 (or may have been the source of the preconfiguring at the first wireless communications device 502 where, for example, the second wireless communications device 504 is a BS 105)…With the neural network determined, at action 514 the first wireless communications device 502 transmits the neural network information together with antenna configuration information of the first wireless communications device 502 to the second wireless communications device 504… For example, where the action 512 involved selecting from a number of pre-provisioned AI modules, action 514 may involve the first wireless communications device 502 transmitting an index (e.g., an implicit signaling of neural network parameters) that identifies the AI module selected to the second wireless communications device 504 together with the antenna configuration information. 
[Pezeshki ¶ 0087, 0088, 0090, 0091]; The transmitted index (i.e., model description information) from the first device identifying a possible AI module (i.e., neural network, including, e.g., decoder at second device) is sent to the second device along with other antenna configuration information (i.e., metadata)) and determin[es] the neural network from among a plurality of pre-provisioned neural networks at the second device, based on the decoder model description information about the fourth neural network (“As noted above, in these situations the second wireless communications device 504 may use the index to identify the AI module that has been provisioned at the second wireless communications device 504 as well. Thereby, the second wireless communications device 504 may know via the AI module the relevant neural network parameters” [Pezeshki ¶ 0091]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated receiving decoder model description information about a neural network from a first device, the decoder model description information being provided as control-plane metadata separate from payload signals, or determining the neural network from among a plurality of pre-provisioned neural networks at the second device, based on the decoder model description information about the fourth neural network, as taught by Pezeshki into Guo, because both are directed towards a wireless communications system for transmitting channel state information utilizing paired encoder-decoder networks split across devices. Incorporating the teachings of Pezeshki would allow for improved communications efficiency between devices due to the combined transmission of relevant neural network parameters together with antenna configuration information [Pezeshki ¶ 0091].
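The index-based signaling mechanism mapped above lends itself to a compact illustration. The following is a minimal sketch (all names and values are hypothetical illustrations, not drawn from Pezeshki): both devices hold the same pre-provisioned set of encoder/decoder pairs, the first device transmits only an index alongside antenna configuration information as control-plane metadata, and the second device uses that index to determine the corresponding decoder.

```python
# Illustrative sketch of index-based neural network signaling (hypothetical
# names; not taken from Pezeshki). Both devices are pre-provisioned with the
# same set of encoder/decoder pairs, so the first device need only transmit
# an index (implicit signaling) rather than the network parameters themselves.
PRE_PROVISIONED = {
    0: ("encoder_cr4", "decoder_cr4"),    # compression rate 4
    1: ("encoder_cr8", "decoder_cr8"),    # compression rate 8
    2: ("encoder_cr16", "decoder_cr16"),  # compression rate 16
}

def first_device_signal(selected_index, antenna_config):
    # Control-plane metadata: the model index travels together with antenna
    # configuration information, separate from the payload signals.
    return {"nn_index": selected_index, "antenna_config": antenna_config}

def second_device_determine(metadata):
    # The second device uses the received index to identify the decoder
    # among its own pre-provisioned modules.
    return PRE_PROVISIONED[metadata["nn_index"]][1]

msg = first_device_signal(1, {"num_tx_antennas": 32})
print(second_device_determine(msg))  # prints "decoder_cr8"
```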
Regarding claim 6, the combination of Guo and Pezeshki teaches the limitations of parent claim 5, and Guo further teaches receiving, by the second device, information about a third neural network from the first device, wherein the third neural network is a neural network corresponding to the first neural network, ([Guo page 3 Massive MIMO-OFDM System] and [Guo page 3 CSI Feedback Process] and [Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+] as detailed in claim 1 above; The BS (i.e., second device) receives the compressed CSI matrix (i.e., information), which includes parameters of the respective compression encoder (e.g., third neural network) corresponding to the overall UE (i.e., first device) encoder (i.e., first neural network)) wherein the first neural network or the third neural network is used by the first device to perform first signal processing ([Guo page 6 Multiple-Rate CSI Feedback] and [Guo page 6 Serial Multiple-Rate Compression Framework: SM-CsiNet+] and Fig. 5 including Encoder(UE) [Guo page 6] as detailed above; The UE encoder (i.e., first neural network) architecture, including its different compression encoders (e.g., third neural network), is used to compress CSI (i.e., perform first signal processing) for transmission to the BS), the second neural network or the fourth neural network is used by the second device to perform the second signal processing ([Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+] and Fig. 5 including Decoder(BS) [Guo page 6] as detailed above; The BS decoder (i.e., second neural network) architecture, including its decoders at differing compression rates (e.g., fourth neural network), is used to decompress measurement vectors (i.e., perform second signal processing)), and the second signal processing corresponds to the first signal processing ([Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+], as detailed above; The BS decoder decompresses measurement vectors (i.e., second signal processing) to reconstruct the CSI, wherein the measurement vectors are the output of (i.e., correspond to) the CSI compression (i.e., first signal processing) performed by the UE encoder).
Regarding claim 7, the combination of Guo and Pezeshki teaches the limitations of parent claim 6, and Guo further teaches wherein the first neural network or the third neural network is used by the first device to perform first signal processing further comprises:
the first neural network is used by the first device to process a first signal to obtain a second signal; (see Fig. 2 including UE(Encoder) and BS(Decoder) – “Fig. 2. Overview of CsiNet+ architecture. The left module is an encoder at the UE, compressing the CSI matrix. Meanwhile, the right module is a decoder at the BS, reconstructing CSI matrix from the received compressive measurements” [Guo page 4]; see Fig. 5 including Encoder(UE) [Guo page 6] as detailed in claim 1 above; The UE (i.e., first device) encoder (i.e., first neural network) compresses CSI (i.e., processes a first signal) to obtain measurement vectors to be transmitted to the BS (i.e., obtains a second signal)) and
the third neural network is used by the first device to process a third signal to obtain a fourth signal; ([Guo page 6 Multiple-Rate CSI Feedback] and [Guo page 6 Serial Multiple-Rate Compression Framework: SM-CsiNet+] and Fig. 5 including Encoder(UE) [Guo page 6] as detailed in claim 1 above; The different compression encoders (e.g., third neural network) of the UE encoder are used to consecutively compress CSI (i.e., process a third signal) at a respective compression rate for transmission to the BS (i.e., obtain a fourth signal)) and the second neural network or the fourth neural network is used by the second device to perform the second signal processing further comprises:
the second neural network is used by the second device to process the second signal to obtain a fifth signal; (see Fig. 2 including UE(Encoder) and BS(Decoder) – “Fig. 2. Overview of CsiNet+ architecture. The left module is an encoder at the UE, compressing the CSI matrix. Meanwhile, the right module is a decoder at the BS, reconstructing CSI matrix from the received compressive measurements” [Guo page 4]; see Fig. 5 including Encoder(UE) [Guo page 6] as detailed above; The BS (i.e., second device) decoder (i.e., second neural network) is used to decompress the received measurement vectors (i.e., process the second signal) to produce a reconstructed CSI (i.e., obtain a fifth signal))
and the fourth neural network is used by the second device to process the fourth signal to obtain a sixth signal ([Guo pages 6-7 Serial Multiple-Rate Compression Framework: SM-CsiNet+] and Fig. 5 including Decoder(BS) [Guo page 6] as detailed above; The decoders at differing compression rates (e.g., fourth neural network) of the BS (i.e., second device) decoder are used to decompress the measurement vector received from the corresponding compression encoder (i.e., process fourth signal to obtain sixth signal)).
The examiner notes that the above limitations, as mapped to Guo, constitute one possible selection within a recited alternative expression (see MPEP § 2143.03); teaching this selection therefore covers the scope of the claim.
Regarding claims 9 and 10, they are apparatus claims that correspond to the methods of claims 1 and 2, which are already taught by the combination of Guo and Pezeshki as detailed above. Guo further teaches the first device side in a neural network adjustment apparatus being an apparatus side, and the apparatus compris[ing]: a processor and a transmitter, each configured to perform the claimed functions (“Massive multiple-input multiple-output (MIMO) is a promising technology to increase link capacity and energy efficiency. However, these benefits are based on available channel state information (CSI) at the base station (BS). Therefore, user equipment (UE) needs to keep on feeding CSI back to the BS, thereby consuming precious bandwidth resource” [Guo Abstract]; “We consider a single-cell FDD massive MIMO-OFDM system, where there are Nt(>>1) transmit antennas at the BS and a single receiver antenna at the UE…In the FDD system, UE estimates the downlink channel and then feeds this information (CSI) to the BS” [Guo page 3 Massive MIMO-OFDM System]; In a massive MIMO-OFDM system, user equipment (UE) implicitly comprises devices (e.g., smartphones) with adequate processing and transmitting (e.g., antenna) hardware for performing the claimed functions). Consequently, claims 9 and 10 are rejected for the same reasons as claims 1 and 2.
Regarding claims 13-15, they are apparatus claims that correspond to the methods of claims 5-7, which are already taught by the combination of Guo and Pezeshki as detailed above. Guo further teaches the second device side in a neural network adjustment apparatus being an apparatus side, and the apparatus compris[ing]: a receiver and a processor, each configured to perform the claimed functions (“Massive multiple-input multiple-output (MIMO) is a promising technology to increase link capacity and energy efficiency. However, these benefits are based on available channel state information (CSI) at the base station (BS). Therefore, user equipment (UE) needs to keep on feeding CSI back to the BS, thereby consuming precious bandwidth resource” [Guo Abstract]; “We consider a single-cell FDD massive MIMO-OFDM system, where there are Nt(>>1) transmit antennas at the BS and a single receiver antenna at the UE…In the FDD system, UE estimates the downlink channel and then feeds this information (CSI) to the BS” [Guo page 3 Massive MIMO-OFDM System]; In a massive MIMO-OFDM system, base station (BS) implicitly comprises large-scale devices (e.g., cell tower) with adequate processing and receiving (e.g., antenna) hardware for performing the claimed functions). Consequently, claims 13-15 are rejected for the same reasons as claims 5-7.
Regarding claim 17, it is a product claim that corresponds to the method of claim 1, which is already taught by the combination of Guo and Pezeshki as detailed above. Guo further teaches A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores a computer program, and when the computer program is run by a computer, the computer is enabled to perform the claimed functions (“Massive multiple-input multiple-output (MIMO) is a promising technology to increase link capacity and energy efficiency. However, these benefits are based on available channel state information (CSI) at the base station (BS). Therefore, user equipment (UE) needs to keep on feeding CSI back to the BS, thereby consuming precious bandwidth resource” [Guo Abstract]; “We consider a single-cell FDD massive MIMO-OFDM system, where there are Nt(>>1) transmit antennas at the BS and a single receiver antenna at the UE…In the FDD system, UE estimates the downlink channel and then feeds this information (CSI) to the BS” [Guo page 3 Massive MIMO-OFDM System]; In a massive MIMO-OFDM system, user equipment (UE) implicitly comprises devices (e.g., smartphones) with adequate processing and storage hardware (i.e., storage medium coupled to processor) for performing the claimed functions). Consequently, claim 17 is rejected for the same reasons as claim 1.
Regarding claim 18, it is a product claim that corresponds to the method of claim 5, which is already taught by the combination of Guo and Pezeshki as detailed above. Guo further teaches A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores a computer program, and when the computer program is run by a computer, the computer is enabled to perform the claimed functions (“Massive multiple-input multiple-output (MIMO) is a promising technology to increase link capacity and energy efficiency. However, these benefits are based on available channel state information (CSI) at the base station (BS). Therefore, user equipment (UE) needs to keep on feeding CSI back to the BS, thereby consuming precious bandwidth resource” [Guo Abstract]; “We consider a single-cell FDD massive MIMO-OFDM system, where there are Nt(>>1) transmit antennas at the BS and a single receiver antenna at the UE…In the FDD system, UE estimates the downlink channel and then feeds this information (CSI) to the BS” [Guo page 3 Massive MIMO-OFDM System]; In a massive MIMO-OFDM system, base station (BS) implicitly comprises large-scale devices (e.g., on-site electronic equipment) with adequate processing and storage hardware (i.e., storage medium coupled to processor) for performing the claimed functions). Consequently, claim 18 is rejected for the same reasons as claim 5.
Regarding claim 19, it is an apparatus claim that corresponds to the method of claim 1, which is already taught by the combination of Guo and Pezeshki as detailed above. Guo further teaches A chip apparatus, comprising a processing circuit, wherein the processing circuit is configured to invoke a program from a memory and run the program, to enable a communication device in which the chip apparatus is installed to perform the claimed functions (“Massive multiple-input multiple-output (MIMO) is a promising technology to increase link capacity and energy efficiency. However, these benefits are based on available channel state information (CSI) at the base station (BS). Therefore, user equipment (UE) needs to keep on feeding CSI back to the BS, thereby consuming precious bandwidth resource” [Guo Abstract]; “We consider a single-cell FDD massive MIMO-OFDM system, where there are Nt(>>1) transmit antennas at the BS and a single receiver antenna at the UE…In the FDD system, UE estimates the downlink channel and then feeds this information (CSI) to the BS” [Guo page 3 Massive MIMO-OFDM System]; In a massive MIMO-OFDM system, user equipment (UE) implicitly comprises devices (e.g., smartphones) with adequate processing, storage, (i.e., memory coupled to processor) and transmitting (e.g., antenna) hardware for performing the claimed functions). Consequently, claim 19 is rejected for the same reasons as claim 1.
Regarding claim 20, it is an apparatus claim that corresponds to the method of claim 5, which is already taught by the combination of Guo and Pezeshki as detailed above. Guo further teaches A chip apparatus, comprising a processing circuit, wherein the processing circuit is configured to invoke a program from a memory and run the program, to enable a communication device in which the chip apparatus is installed to perform the claimed functions (“Massive multiple-input multiple-output (MIMO) is a promising technology to increase link capacity and energy efficiency. However, these benefits are based on available channel state information (CSI) at the base station (BS). Therefore, user equipment (UE) needs to keep on feeding CSI back to the BS, thereby consuming precious bandwidth resource” [Guo Abstract]; “We consider a single-cell FDD massive MIMO-OFDM system, where there are Nt(>>1) transmit antennas at the BS and a single receiver antenna at the UE…In the FDD system, UE estimates the downlink channel and then feeds this information (CSI) to the BS” [Guo page 3 Massive MIMO-OFDM System]; In a massive MIMO-OFDM system, base station (BS) implicitly comprises large-scale devices (e.g., on-site electronic equipment) with adequate processing, storage (i.e., processing circuit coupled to memory), and receiving (e.g., antennas) hardware for performing the claimed functions). Consequently, claim 20 is rejected for the same reasons as claim 5.
Claims 3-4, 8, 11-12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Guo and Pezeshki, as applied to claims 2, 7, 10, and 15 above, further in view of Liao et al. ("CSI Feedback Based on Deep Learning for Massive MIMO Systems", published 24 June 2019), hereinafter Liao.
Regarding claim 3, the combination of Guo and Pezeshki teaches the limitations of parent claim 2, and Guo further teaches wherein a degree of difference between a first training signal related to the third signal and a second training signal related to the sixth signal meets a first condition (“Once the channel matrix H in the angular-delay domain is estimated at the UE, compression, quantization, and entropy encoding will be used in turn to reduce CSI feedback overhead. The compressed CSI matrix can be expressed as follows: [s = Q(fcom(H; θ1)) (3)], where fcom(·) and Q(·) denote the compression and quantization processes, respectively, and θ1 represents parameters of the compression module (encoder). Once the BS receives the compressed CSI matrix, dequantization and decompression will be used to recover the channel matrix in the angular-delay domain, [Ĥ = fdecom(D(s); θ2) (4)], where D(·) and fdecom(·) represent the dequantization and decompression functions, respectively, and θ2 denotes the parameters in the decompression module (decoder). Therefore, the optimization compression and recovery can be formulated by combining (3) and (4) together with the mean-squared error (MSE) distortion metric as the following: [(θ1*, θ2*) = argmin over θ1, θ2 of ‖H − fdecom(D(Q(fcom(H; θ1))); θ2)‖₂² (5)]” [Guo page 3 CSI Feedback Process]; A mean-squared error (i.e., degree of difference) is found between the original channel matrix H (i.e., first training signal) and the reconstructed matrix Ĥ (i.e., second training signal), which are both related as input to and output of the various signals (e.g., third signal and sixth signal) passed between the UE encoder and BS decoder. The calculated error is further minimized to optimize compression (i.e., meet a first condition)).
However, the combination does not expressly teach a similarity between a first training signal and a second training signal meet[ing] a second condition (The examiner notes that a degree of difference and a similarity are recited as possible selections within an alternative expression; Guo teaching the claimed degree of difference may thereby be interpreted as the reference teaching the limitations of the claim on its own merits (see MPEP § 2143.03). Nevertheless, for the sake of completeness, an additional reference is incorporated to teach the claimed similarity).
In the same field of endeavor, Liao teaches a CSI compression feedback algorithm based on deep learning techniques for a massive MIMO system (“Aiming at the problem of high complexity and low feedback accuracy of existing channel state information (CSI) feedback algorithms for frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, this paper proposes a CSI compression feedback algorithm based on deep learning (DL), which is suitable for single-user and multi-user scenarios in massive MIMO systems” [Liao Abstract]) wherein a similarity between a first training signal and a second training signal meets a second condition (“For the offline training, the CSI of massive MIMO channel is used as input data and label data to train the learning network” [Liao page 5 Offline Model Training and Online Feedback]; “In the massive MIMO system, there is a certain correlation between antennas because of the large number of antennas at transmitter and receiver, the dense arrangement of antennas makes the channel correlated highly. The channel matrix H with spatial correlation can be modeled as [8] [H = Rr^(1/2) Hiid Rt^(1/2)], where [Rr] is the receiving correlation matrix, [Rt] is the transmitting correlation matrix…All elements in Rt or Rr are rij, where rij is the correlation coefficient between the ith antenna and the jth antenna of the transmitter or the receiver” [Liao page 3 System Model]; Wherein CSI, transmitted via signals, is used to train DL models, a correlation (i.e., similarity) between any antennas that receive/transmit the signals can be expressed via a correlation coefficient rij and incorporated into the channel matrix (i.e., set to meet a condition)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated a similarity between a first training signal and a second training signal meet[ing] a second condition as taught by Liao into the combination because both Guo and Liao are directed towards CSI compression feedback algorithms based on deep learning techniques for a massive MIMO system. Given that Guo already discusses consideration of spatial correlations between antennas in CSI feedback algorithms (“CS has been first applied to CSI feedback in the spatial-frequency domain, which exploits the high spatial correlation of CSI resulting from the limited distance among antennas in massive MIMO” [Guo page 1 Introduction]), and Liao also explicitly discusses its improved modeling of spatial correlation over the CsiNet framework (“The CSI feedback is realized by using CsiNet network which is composed of fully connected network and residual network. Compared with conventional CS algorithms, CsiNet network has higher recovery accuracy and better performance. 
However, the network has many training parameters, and only convolutional layers and fully connected layers are used to extract the features of the data to complete CSI compression and recovery, the spatial correlation between antennas is not fully utilized in massive MIMO system” [Liao page 2 Introduction]), a person of ordinary skill in the art would recognize the value of incorporating the teachings of Liao to enable full consideration of the impact of spatial correlations in the CsiNet+ framework (“Aiming at the problems of high computational complexity, low feedback accuracy in conventional algorithms and a lack of consideration of spatial correlation between antennas in CsiNet network, this paper proposes a DL-based CSI compression feedback algorithm with low feedback overhead and high feedback accuracy for FDD massive MIMO systems, which considers the spatial correlation of massive MIMO channel data” [Liao page 2 Introduction]).
Regarding claim 4, the combination of Guo, Pezeshki, and Liao teaches the limitations of parent claim 3, and Guo further teaches wherein the degree of difference comprises any one of the following: a difference, a mean squared error, a normalized mean squared error, or an average absolute error ([Guo page 3 CSI Feedback Process] as detailed in claim 3 above; A mean-squared error (i.e., degree of difference) is found between the original channel matrix H and the reconstructed matrix H^). Liao further teaches wherein the similarity comprises a correlation coefficient ([Liao page 3 System Model] as detailed in claim 3 above; A correlation (i.e., similarity) between any antennas that receive/transmit the signals can be expressed via a correlation coefficient rij).
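The metrics recited in claims 3-4 admit a compact numerical illustration. The following is a minimal sketch (function names and sample values are hypothetical illustrations, not drawn from Guo or Liao) computing the recited degree-of-difference alternatives (a mean squared error, a normalized mean squared error, an average absolute error) and a Pearson correlation coefficient as the recited similarity, between an original and a reconstructed signal.

```python
import math

def mse(h, h_hat):
    # mean squared error: the "degree of difference" metric mapped to Guo
    return sum((a - b) ** 2 for a, b in zip(h, h_hat)) / len(h)

def nmse(h, h_hat):
    # MSE normalized by the average power of the original signal
    return mse(h, h_hat) / (sum(a ** 2 for a in h) / len(h))

def mae(h, h_hat):
    # average absolute error
    return sum(abs(a - b) for a, b in zip(h, h_hat)) / len(h)

def corr_coeff(x, y):
    # Pearson correlation coefficient: one way to express the recited "similarity"
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

h = [1.0, 2.0, 3.0, 4.0]      # hypothetical original training signal
h_hat = [1.1, 1.9, 3.2, 3.8]  # hypothetical reconstructed training signal
print(round(mse(h, h_hat), 6))         # ≈ 0.025: a small difference can meet a first condition
print(round(corr_coeff(h, h_hat), 4))  # ≈ 0.9908: near 1, high similarity can meet a second condition
```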
Regarding claim 8, the combination of Guo and Pezeshki teaches the limitations of parent claim 7, and Guo further teaches wherein a degree of difference between a first training signal related to the third signal and a second training signal related to the sixth signal meets a first condition (“Once the channel matrix H in the angular-delay domain is estimated at the UE, compression, quantization, and entropy encoding will be used in turn to reduce CSI feedback overhead. The compressed CSI matrix can be expressed as follows: [s = Q(fcom(H; θ1)) (3)], where fcom(·) and Q(·) denote the compression and quantization processes, respectively, and θ1 represents parameters of the compression module (encoder). Once the BS receives the compressed CSI matrix, dequantization and decompression will be used to recover the channel matrix in the angular-delay domain, [Ĥ = fdecom(D(s); θ2) (4)], where D(·) and fdecom(·) represent the dequantization and decompression functions, respectively, and θ2 denotes the parameters in the decompression module (decoder). Therefore, the optimization compression and recovery can be formulated by combining (3) and (4) together with the mean-squared error (MSE) distortion metric as the following: [(θ1*, θ2*) = argmin over θ1, θ2 of ‖H − fdecom(D(Q(fcom(H; θ1))); θ2)‖₂² (5)]” [Guo page 3 CSI Feedback Process]; A mean-squared error (i.e., degree of difference) is found between the original channel matrix H (i.e., first training signal) and the reconstructed matrix Ĥ (i.e., second training signal), which are both related as input to and output of the various signals (e.g., third signal and sixth signal) passed between the UE encoder and BS decoder. The calculated error is further minimized to optimize compression (i.e., meet a first condition)).
However, the combination does not expressly teach a similarity between a first training signal and a second training signal meet[ing] a second condition (The examiner notes that a degree of difference and a similarity are recited as possible selections within an alternative expression; Guo teaching the claimed degree of difference may thereby be interpreted as the reference teaching the limitations of the claim on its own merits (see MPEP § 2143.03). Nevertheless, for the sake of completeness, an additional reference is incorporated to teach the claimed similarity).
In the same field of endeavor, Liao teaches a CSI compression feedback algorithm based on deep learning techniques for a massive MIMO system (“Aiming at the problem of high complexity and low feedback accuracy of existing channel state information (CSI) feedback algorithms for frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, this paper proposes a CSI compression feedback algorithm based on deep learning (DL), which is suitable for single-user and multi-user scenarios in massive MIMO systems” [Liao Abstract]) wherein a similarity between a first training signal and a second training signal meets a second condition (“For the offline training, the CSI of massive MIMO channel is used as input data and label data to train the learning network” [Liao page 5 Offline Model Training and Online Feedback]; “In the massive MIMO system, there is a certain correlation between antennas because of the large number of antennas at transmitter and receiver, the dense arrangement of antennas makes the channel correlated highly. The channel matrix H with spatial correlation can be modeled as [8] [H = Rr^(1/2) Hiid Rt^(1/2)], where [Rr] is the receiving correlation matrix, [Rt] is the transmitting correlation matrix…All elements in Rt or Rr are rij, where rij is the correlation coefficient between the ith antenna and the jth antenna of the transmitter or the receiver” [Liao page 3 System Model]; Wherein CSI, transmitted via signals, is used to train DL models, a correlation (i.e., similarity) between any antennas that receive/transmit the signals can be expressed via a correlation coefficient rij and incorporated into the channel matrix (i.e., set to meet a condition)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated a similarity between a first training signal and a second training signal meet[ing] a second condition as taught by Liao into the combination because both Guo and Liao are directed towards CSI compression feedback algorithms based on deep learning techniques for a massive MIMO system. Given that Guo already discusses consideration of spatial correlations between antennas in CSI feedback algorithms (“CS has been first applied to CSI feedback in the spatial-frequency domain, which exploits the high spatial correlation of CSI resulting from the limited distance among antennas in massive MIMO” [Guo page 1 Introduction]), and Liao also explicitly discusses its improved modeling of spatial correlation over the CsiNet framework (“The CSI feedback is realized by using CsiNet network which is composed of fully connected network and residual network. Compared with conventional CS algorithms, CsiNet network has higher recovery accuracy and better performance. 
However, the network has many training parameters, and only convolutional layers and fully connected layers are used to extract the features of the data to complete CSI compression and recovery, the spatial correlation between antennas is not fully utilized in massive MIMO system” [Liao page 2 Introduction]), a person of ordinary skill in the art would recognize the value of incorporating the teachings of Liao to enable full consideration of the impact of spatial correlations in the CsiNet+ framework (“Aiming at the problems of high computational complexity, low feedback accuracy in conventional algorithms and a lack of consideration of spatial correlation between antennas in CsiNet network, this paper proposes a DL-based CSI compression feedback algorithm with low feedback overhead and high feedback accuracy for FDD massive MIMO systems, which considers the spatial correlation of massive MIMO channel data” [Liao page 2 Introduction]).
Regarding claims 11 and 12, they are apparatus claims that correspond to the methods of claims 3 and 4, which are already taught by the combination of Guo, Pezeshki, and Liao as detailed above. Consequently, they are rejected for the same reasons as claims 3 and 4.
Regarding claim 16, it is an apparatus claim that corresponds to the method of claim 8, which is already taught by the combination of Guo, Pezeshki, and Liao as detailed above. Consequently, it is rejected for the same reasons as claim 8.
Response to Arguments
The remarks filed 12/09/2025 have been fully considered.
Applicant’s remarks [Remarks pages 16-19] traversing the prior art rejections under 35 U.S.C. 102 and 35 U.S.C. 103 set forth in the office action mailed 10/02/2025, in view of independent claims 1, 5, 9, and 13 as amended, have been considered, but are moot because the new grounds of rejection set forth above do not rely on the reference(s) applied in the prior rejection of record for the subject matter being specifically challenged in applicant's arguments.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hoydis et al. (Pub. No. US 20210192320 A1, “End-to-End Learning in Communication Systems”, effectively filed 10/23/2017) discloses a system of organizing a plurality of transmitter neural networks and a plurality of receiver neural networks into a plurality of transmitter-receiver neural network pairs, wherein a transmitter-receiver neural network pair is defined for each of a plurality of subcarrier frequency bands of a multi-carrier transmission system; arranging a plurality of symbols of the multi-carrier transmission system into a plurality of transmit blocks; mapping each of said transmit blocks to one of the transmitter-receiver neural network pairs; transmitting each symbol using the mapped transmitter-receiver neural network pair; and training at least some weights of the transmit and receive neural networks using a loss function for each transmitter-receiver neural network pair.
Chen et al. (Pub. No. US 20230084164 A1, “Configurable Neural Network for Channel State Feedback (CSF) Learning”, effectively filed 04/17/2020) discloses a method of wireless communication, by a user equipment (UE), including receiving multiple neural network training configurations for channel state feedback (CSF). Each configuration corresponds to a different neural network framework. The method also includes training each of a group of neural network decoder/encoder pairs in accordance with the received training configurations.
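The per-subcarrier pairing arrangement summarized from Hoydis above can be sketched in miniature. All class and function names here are hypothetical, and simple linear encoder/decoder maps stand in for the trained transmitter and receiver neural networks; this is not an implementation of either reference.

```python
import numpy as np

class TxRxPair:
    """Stand-in for one transmitter-receiver neural network pair assigned
    to a single subcarrier frequency band (linear maps, not trained nets)."""
    def __init__(self, dim, rng):
        self.enc = rng.standard_normal((dim, dim))
        self.dec = np.linalg.pinv(self.enc)  # ideal decoder for this sketch

    def transmit(self, block):
        return block @ self.enc

    def receive(self, signal):
        return signal @ self.dec

def map_blocks_to_pairs(blocks, pairs):
    """Map each transmit block to a pair, cycling through subcarrier bands."""
    return [(i % len(pairs), block) for i, block in enumerate(blocks)]

rng = np.random.default_rng(1)
pairs = [TxRxPair(4, rng) for _ in range(3)]          # three subcarrier bands
blocks = [rng.standard_normal(4) for _ in range(6)]   # six transmit blocks
for band, block in map_blocks_to_pairs(blocks, pairs):
    recovered = pairs[band].receive(pairs[band].transmit(block))
    assert np.allclose(recovered, block)              # lossless in this toy
```

In Hoydis the encoder and decoder weights of each pair would instead be trained jointly against a per-pair loss function; the sketch shows only the block-to-pair mapping structure.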
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIJAY M BALAKRISHNAN whose telephone number is (571) 272-0455. The examiner can normally be reached 10am-5pm EST Mon-Thurs.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JENNIFER WELCH can be reached on (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/V.M.B./
Examiner, Art Unit 2143
/JENNIFER N WELCH/Supervisory Patent Examiner, Art Unit 2143