Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 11-12, 14, 16-17 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite, because claims 11 and 16 are recited in an optional format (i.e., “in a case of”) rather than as a positive recitation. Words such as “may,” “might,” “can,” “could,” “in case,” “when,” “potentially,” and “possibly” are optional language and do not narrow the claim limitations (In re Johnston, 77 USPQ2d 1788 (Fed. Cir. 2006)).
Claims 12, 14, 17 and 19 are rejected because they depend from claims 11 and 16 and therefore inherit the deficiencies of those claims.
Claim Rejections - 35 USC § 103
1. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. Claims 1-3, 9-10, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pre-Grant Publication No. US 2024/0313838 to Wu et al. (hereinafter Wu) in view of U.S. Pre-Grant Publication No. US 2024/0154675 to Shi et al. (hereinafter Shi).
As to claim 1, Wu discloses an electronic device, comprising a processor and a memory configured to store a computer program, wherein the processor is configured to call and run the computer program stored in the memory to:
update a channel feedback model based on N first feedback information tags respectively corresponding to N pieces of channel information (Wu; [0112]; [0121]-[0125] discloses the auto decoder 340-a (e.g., branch 1) may be associated with a second CSI compression scheme (e.g., a neural network-based CSI compression scheme). In some examples, indication of a configuration of the auto decoder 340-a (e.g., in an indication 255) may be associated with a relatively larger payload, and accordingly may involve relatively larger or less efficient signaling payload, or relatively higher precision. In some examples, the auto decoder 340-a may be associated with a codebook-based CSI compression scheme, and may be an example of a persistent, semi-static, or otherwise less-frequently updated configuration. In some examples, the auto decoder 340-b (e.g., branch 2), or both the auto decoder 340-a and the auto decoder 340-b, may be associated with a CSI compression scheme that is associated with one or more aspects of machine learning, which may be described as a neural network-based CSI compression scheme. In some examples, the auto decoder 340-b may be associated with a Type-III CSI. In some examples, indication of a configuration of the auto decoder 340-b (e.g., in an indication 255) may be associated with a relatively smaller or less complex payload, and accordingly may involve relatively smaller or more efficient signaling payload (e.g., may be simpler to indicate to a UE 115). In some examples, the auto decoder 340-a may be an example of a dynamic or otherwise more-frequently updated configuration. [0130] discloses the classification block 540 may provide an output 550, which may be associated with a selection or determination to implement one CSI compression scheme or another. 
In one example, the output 550 may be a binary output, such as a value of ‘0’ corresponding to using a regular CSI compression scheme and a value of ‘1’ corresponding to using a machine learning CSI compression scheme (e.g., a CSI compression scheme that leverages machine learning, including for periodic or otherwise ongoing training or updates to an encoder, a decoder, or both)),
Wu discloses that the N first feedback information tags comprise N pieces of codebook output information, but fails to disclose “wherein the N first feedback information tags comprise N pieces of codebook output information obtained by respectively quantizing, based on a codebook, the N pieces of channel information, the channel feedback model is used to obtain corresponding feedback information based on channel information which needs to be feedback, and N is an integer greater than or equal to 1”. However, Shi discloses
wherein the N first feedback information tags comprise N pieces of codebook output information obtained by respectively quantizing, based on a codebook, the N pieces of channel information, the channel feedback model is used to obtain corresponding feedback information based on channel information which needs to be feedback, and N is an integer greater than or equal to 1 (Shi; Fig.5; [0096]; [0118] discloses a CSI feedback framework based on an AI architecture. Feeding back CSI of a downlink channel is used as an example. An encoder is deployed on a side of UE, and the UE may measure a reference signal from a base station, to obtain a downlink MIMO channel. The downlink MIMO channel may be represented by using an M*N-dimensional matrix. An eigen-subspace matrix H of the downlink channel may be obtained by using a preprocessing module. A preprocessing result is input into the encoder, and a low-dimensional code word D is obtained through compression. After being quantized, the low-dimensional code word D is fed back to the base station through an air interface. The base station inputs the received code word into a decoder, and obtains an eigen-subspace matrix H′ of the downlink channel through reconstruction. Generally, H and H′ have a same matrix dimension. [0109] discloses the output of the first encoder may be a real number vector la, and la is quantized and may be transformed into a bit stream. The bit stream may be used as the first information. Alternatively, the bit stream may be fed back to the base station as an information element (information element, IE) of CSI, and the first information may be the CSI or the like. Similarly, in this embodiment of this application, a specific output of an encoder is related to a format supported by the encoder).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wu and Shi. One would have been motivated to combine the teachings so that the output of the encoder (e.g., the real number vector la) is quantized into a bit stream and fed back as CSI (Shi; [0109]).
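The codebook-based quantization attributed to Shi above (mapping each piece of channel information to codebook output information used as a feedback tag) can be sketched as follows. This is an illustrative sketch only: the codebook contents, the Euclidean distance metric, and the function name are assumptions for illustration, not taken from Wu or Shi.

```python
import numpy as np

def quantize_to_codebook(channel_vecs, codebook):
    """Return, for each channel vector, the index of the nearest codeword.

    channel_vecs: (N, d) array of N pieces of channel information
    codebook:     (K, d) array of K candidate codewords
    """
    # Pairwise distances between every channel vector and every codeword
    dists = np.linalg.norm(channel_vecs[:, None, :] - codebook[None, :, :], axis=2)
    # The N nearest-codeword indices serve as the "codebook output information"
    return dists.argmin(axis=1)

codebook = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
channels = np.array([[0.9, 0.1], [0.6, 0.8]])
tags = quantize_to_codebook(channels, codebook)
print(tags.tolist())  # → [0, 2]
```

Under this sketch, the N integer indices play the role of the N first feedback information tags obtained by quantizing the N pieces of channel information based on a codebook.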
As to claim 2, the rejection of claim 1 as listed above is incorporated herein. In addition, Wu-Shi discloses wherein the channel feedback model comprises an encoder and a decoder (Shi; Fig.5; [0096]-[0097]; [0118]);
the encoder is used by a terminal device to perform encoding based on the channel information which needs to be feedback to obtain a corresponding bitstream (Shi; Fig.5; [0096]-[0097]; [0118] ); and
the decoder is used by a network device to perform decoding based on the corresponding bitstream to obtain corresponding feedback information (Shi; Fig.5; [0096]-[0097]; [0118]).
As to claim 3, the rejection of claim 1 as listed above is incorporated herein. In addition, Wu-Shi discloses wherein the processor is further configured to call and run the computer program stored in the memory to:
obtain N second feedback information tags corresponding to the N pieces of channel information based on the N pieces of channel information (Shi; Fig.5; [0096]-[0097]; [0109];[0118] discloses feeding back CSI of a downlink channel is used as an example. An encoder is deployed on a side of UE, and the UE may measure a reference signal from a base station, to obtain a downlink MIMO channel. The downlink MIMO channel may be represented by using an M*N-dimensional matrix. An eigen-subspace matrix H of the downlink channel may be obtained by using a preprocessing module. A preprocessing result is input into the encoder, and a low-dimensional code word D is obtained through compression. After being quantized, the low-dimensional code word D is fed back to the base station through an air interface. The base station inputs the received code word into a decoder, and obtains an eigen-subspace matrix H′ of the downlink channel through reconstruction. Generally, H and H′ have a same matrix dimension. Based on simulation verification and analyzing, in a CSI feedback method shown in Fig.5, neural networks in the encoder and the decoder are generalized, but reconstruction precision is usually limited. During actual application, there are various downlink channel environments between the UE and the base station, and large-scale information is rich. The large-scale information means different channel attenuation caused by refraction, reflection, scattering, or the like on a transmission path of a wireless channel due to a change in a physical environment, or different penetration losses caused by obstacles on a propagation path. In the CSI feedback method shown in Fig. 5, regardless of an environment corresponding to the downlink channel, a same set of an encoder and a decoder is used, and related information of the current downlink channel is not considered during encoding and/or decoding. 
As a result, precision of the eigen-subspace matrix H′ of the downlink channel obtained by the base station through reconstruction is low); and
update the channel feedback model based on N pieces of feedback information respectively corresponding to the N pieces of channel information, the N first feedback information tags respectively corresponding to the N pieces of channel information, and the N second feedback information tags respectively corresponding to the N pieces of channel information, wherein the N pieces of feedback information are obtained by respectively processing the N pieces of channel information by using the channel feedback model (Shi; Fig.5; [0096]-[0097]; [0109];[0118] discloses feeding back CSI of a downlink channel is used as an example. An encoder is deployed on a side of UE, and the UE may measure a reference signal from a base station, to obtain a downlink MIMO channel. The downlink MIMO channel may be represented by using an M*N-dimensional matrix. An eigen-subspace matrix H of the downlink channel may be obtained by using a preprocessing module. A preprocessing result is input into the encoder, and a low-dimensional code word D is obtained through compression. After being quantized, the low-dimensional code word D is fed back to the base station through an air interface. The base station inputs the received code word into a decoder, and obtains an eigen-subspace matrix H′ of the downlink channel through reconstruction. Generally, H and H′ have a same matrix dimension. Based on simulation verification and analyzing, in a CSI feedback method shown in Fig.5, neural networks in the encoder and the decoder are generalized, but reconstruction precision is usually limited. During actual application, there are various downlink channel environments between the UE and the base station, and large-scale information is rich. 
The large-scale information means different channel attenuation caused by refraction, reflection, scattering, or the like on a transmission path of a wireless channel due to a change in a physical environment, or different penetration losses caused by obstacles on a propagation path. In the CSI feedback method shown in Fig. 5, regardless of an environment corresponding to the downlink channel, a same set of an encoder and a decoder is used, and related information of the current downlink channel is not considered during encoding and/or decoding. As a result, precision of the eigen-subspace matrix H′ of the downlink channel obtained by the base station through reconstruction is low).
As to claim 9, the rejection of claim 1 as listed above is incorporated herein. In addition, Wu-Shi discloses wherein the electronic device is a network device, and the network device further comprises a transceiver configured to:
receive, from a terminal device, the N pieces of codebook output information and N bitstreams respectively corresponding to the N pieces of channel information, wherein the N bitstreams are obtained by the terminal device respectively processing the N pieces of channel information by using an encoder in the channel feedback model respectively (Shi; Fig.5; [0096]-[0097]; [0109];[0118] discloses feeding back CSI of a downlink channel is used as an example. An encoder is deployed on a side of UE, and the UE may measure a reference signal from a base station, to obtain a downlink MIMO channel. The downlink MIMO channel may be represented by using an M*N-dimensional matrix. An eigen-subspace matrix H of the downlink channel may be obtained by using a preprocessing module. A preprocessing result is input into the encoder, and a low-dimensional code word D is obtained through compression. After being quantized, the low-dimensional code word D is fed back to the base station through an air interface. The base station inputs the received code word into a decoder, and obtains an eigen-subspace matrix H′ of the downlink channel through reconstruction. Generally, H and H′ have a same matrix dimension. Based on simulation verification and analyzing, in a CSI feedback method shown in Fig.5, neural networks in the encoder and the decoder are generalized, but reconstruction precision is usually limited. During actual application, there are various downlink channel environments between the UE and the base station, and large-scale information is rich. The large-scale information means different channel attenuation caused by refraction, reflection, scattering, or the like on a transmission path of a wireless channel due to a change in a physical environment, or different penetration losses caused by obstacles on a propagation path. In the CSI feedback method shown in Fig. 
5, regardless of an environment corresponding to the downlink channel, a same set of an encoder and a decoder is used, and related information of the current downlink channel is not considered during encoding and/or decoding. As a result, precision of the eigen-subspace matrix H′ of the downlink channel obtained by the base station through reconstruction is low),
wherein the processor is further configured to call and run the computer program stored in the memory to:
obtain the N first feedback information tags based on the N pieces of codebook output information, and update a decoder in the channel feedback model based on the N bitstreams and the N first feedback information tags (Shi; Fig.5; [0096]-[0097]; [0109]; [0118] discloses feeding back CSI of a downlink channel is used as an example. An encoder is deployed on a side of UE, and the UE may measure a reference signal from a base station, to obtain a downlink MIMO channel. The downlink MIMO channel may be represented by using an M*N-dimensional matrix. An eigen-subspace matrix H of the downlink channel may be obtained by using a preprocessing module. A preprocessing result is input into the encoder, and a low-dimensional code word D is obtained through compression. After being quantized, the low-dimensional code word D is fed back to the base station through an air interface. The base station inputs the received code word into a decoder, and obtains an eigen-subspace matrix H′ of the downlink channel through reconstruction. Generally, H and H′ have a same matrix dimension. Based on simulation verification and analyzing, in a CSI feedback method shown in Fig.5, neural networks in the encoder and the decoder are generalized, but reconstruction precision is usually limited. During actual application, there are various downlink channel environments between the UE and the base station, and large-scale information is rich. The large-scale information means different channel attenuation caused by refraction, reflection, scattering, or the like on a transmission path of a wireless channel due to a change in a physical environment, or different penetration losses caused by obstacles on a propagation path. In the CSI feedback method shown in Fig. 5, regardless of an environment corresponding to the downlink channel, a same set of an encoder and a decoder is used, and related information of the current downlink channel is not considered during encoding and/or decoding. 
As a result, precision of the eigen-subspace matrix H′ of the downlink channel obtained by the base station through reconstruction is low).
As to claims 10 and 15, Wu discloses a network device, comprising a transceiver, a processor and a memory configured to store a computer program, wherein the processor is configured to call and run the computer program stored in the memory to control the network device to:
receive, from a terminal device, N pieces of codebook output information (Wu; [0112]; [0121]-[0125] discloses the auto decoder 340-a (e.g., branch 1) may be associated with a second CSI compression scheme (e.g., a neural network-based CSI compression scheme). In some examples, indication of a configuration of the auto decoder 340-a (e.g., in an indication 255) may be associated with a relatively larger payload, and accordingly may involve relatively larger or less efficient signaling payload, or relatively higher precision. In some examples, the auto decoder 340-a may be associated with a codebook-based CSI compression scheme, and may be an example of a persistent, semi-static, or otherwise less-frequently updated configuration. In some examples, the auto decoder 340-b (e.g., branch 2), or both the auto decoder 340-a and the auto decoder 340-b, may be associated with a CSI compression scheme that is associated with one or more aspects of machine learning, which may be described as a neural network-based CSI compression scheme. In some examples, the auto decoder 340-b may be associated with a Type-III CSI. In some examples, indication of a configuration of the auto decoder 340-b (e.g., in an indication 255) may be associated with a relatively smaller or less complex payload, and accordingly may involve relatively smaller or more efficient signaling payload (e.g., may be simpler to indicate to a UE 115). In some examples, the auto decoder 340-a may be an example of a dynamic or otherwise more-frequently updated configuration. [0130] discloses the classification block 540 may provide an output 550, which may be associated with a selection or determination to implement one CSI compression scheme or another. 
In one example, the output 550 may be a binary output, such as a value of ‘0’ corresponding to using a regular CSI compression scheme and a value of ‘1’ corresponding to using a machine learning CSI compression scheme (e.g., a CSI compression scheme that leverages machine learning, including for periodic or otherwise ongoing training or updates to an encoder, a decoder, or both)),
Wu discloses that the N first feedback information tags comprise N pieces of codebook output information, but fails to disclose receiving a bitstream. However, Shi discloses
receiving N bitstreams respectively corresponding to N pieces of channel information, wherein the N bitstreams are obtained by the terminal device respectively processing the N pieces of channel information by using an encoder in a channel feedback model, the N pieces of codebook output information are obtained by the terminal device respectively quantizing, based on a codebook, the N pieces of channel information, wherein N is an integer greater than or equal to 1 (Shi; Fig.5; [0096]-[0097]; [0109];[0118] discloses feeding back CSI of a downlink channel is used as an example. An encoder is deployed on a side of UE, and the UE may measure a reference signal from a base station, to obtain a downlink MIMO channel. The downlink MIMO channel may be represented by using an M*N-dimensional matrix. An eigen-subspace matrix H of the downlink channel may be obtained by using a preprocessing module. A preprocessing result is input into the encoder, and a low-dimensional code word D is obtained through compression. After being quantized, the low-dimensional code word D is fed back to the base station through an air interface. The base station inputs the received code word into a decoder, and obtains an eigen-subspace matrix H′ of the downlink channel through reconstruction. Generally, H and H′ have a same matrix dimension. Based on simulation verification and analyzing, in a CSI feedback method shown in Fig.5, neural networks in the encoder and the decoder are generalized, but reconstruction precision is usually limited. During actual application, there are various downlink channel environments between the UE and the base station, and large-scale information is rich. The large-scale information means different channel attenuation caused by refraction, reflection, scattering, or the like on a transmission path of a wireless channel due to a change in a physical environment, or different penetration losses caused by obstacles on a propagation path. 
In the CSI feedback method shown in Fig. 5, regardless of an environment corresponding to the downlink channel, a same set of an encoder and a decoder is used, and related information of the current downlink channel is not considered during encoding and/or decoding. As a result, precision of the eigen-subspace matrix H′ of the downlink channel obtained by the base station through reconstruction is low); and
obtain N first feedback information tags based on the N pieces of codebook output information, and update a decoder in the channel feedback model based on the N bitstreams and the N first feedback information tags (Shi; Fig.5; [0096]-[0097]; [0109];[0118] discloses feeding back CSI of a downlink channel is used as an example. An encoder is deployed on a side of UE, and the UE may measure a reference signal from a base station, to obtain a downlink MIMO channel. The downlink MIMO channel may be represented by using an M*N-dimensional matrix. An eigen-subspace matrix H of the downlink channel may be obtained by using a preprocessing module. A preprocessing result is input into the encoder, and a low-dimensional code word D is obtained through compression. After being quantized, the low-dimensional code word D is fed back to the base station through an air interface. The base station inputs the received code word into a decoder, and obtains an eigen-subspace matrix H′ of the downlink channel through reconstruction. Generally, H and H′ have a same matrix dimension. Based on simulation verification and analyzing, in a CSI feedback method shown in Fig.5, neural networks in the encoder and the decoder are generalized, but reconstruction precision is usually limited. During actual application, there are various downlink channel environments between the UE and the base station, and large-scale information is rich. The large-scale information means different channel attenuation caused by refraction, reflection, scattering, or the like on a transmission path of a wireless channel due to a change in a physical environment, or different penetration losses caused by obstacles on a propagation path. In the CSI feedback method shown in Fig. 5, regardless of an environment corresponding to the downlink channel, a same set of an encoder and a decoder is used, and related information of the current downlink channel is not considered during encoding and/or decoding. 
As a result, precision of the eigen-subspace matrix H′ of the downlink channel obtained by the base station through reconstruction is low),
wherein the decoder is configured to perform decoding based on a to-be-decoded bitstream received from the terminal device, to obtain corresponding feedback information (Shi; Fig.5; [0096]-[0097]; [0109];[0118] discloses feeding back CSI of a downlink channel is used as an example. An encoder is deployed on a side of UE, and the UE may measure a reference signal from a base station, to obtain a downlink MIMO channel. The downlink MIMO channel may be represented by using an M*N-dimensional matrix. An eigen-subspace matrix H of the downlink channel may be obtained by using a preprocessing module. A preprocessing result is input into the encoder, and a low-dimensional code word D is obtained through compression. After being quantized, the low-dimensional code word D is fed back to the base station through an air interface. The base station inputs the received code word into a decoder, and obtains an eigen-subspace matrix H′ of the downlink channel through reconstruction. Generally, H and H′ have a same matrix dimension. Based on simulation verification and analyzing, in a CSI feedback method shown in Fig.5, neural networks in the encoder and the decoder are generalized, but reconstruction precision is usually limited. During actual application, there are various downlink channel environments between the UE and the base station, and large-scale information is rich. The large-scale information means different channel attenuation caused by refraction, reflection, scattering, or the like on a transmission path of a wireless channel due to a change in a physical environment, or different penetration losses caused by obstacles on a propagation path. In the CSI feedback method shown in Fig. 5, regardless of an environment corresponding to the downlink channel, a same set of an encoder and a decoder is used, and related information of the current downlink channel is not considered during encoding and/or decoding. 
As a result, precision of the eigen-subspace matrix H′ of the downlink channel obtained by the base station through reconstruction is low).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wu and Shi. One would have been motivated to combine the teachings so that the output of the encoder (e.g., the real number vector la) is quantized into a bit stream and fed back as CSI (Shi; [0109]).
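The Fig. 5 style CSI feedback loop quoted from Shi above (encoder compresses the channel matrix H to a low-dimensional code word D, D is quantized into a bit stream and fed back over the air interface, and the base-station decoder reconstructs H′ with the same dimension as H) can be sketched as follows. This is an illustrative sketch only: the linear encoder/decoder and uniform quantizer stand in for the neural networks of the reference, and all dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, DIM = 4, 4, 3                     # channel matrix size; code word length
H = rng.standard_normal((M, N))         # stand-in for the eigen-subspace matrix H

W_enc = rng.standard_normal((DIM, M * N)) * 0.1   # UE-side "encoder" weights
W_dec = np.linalg.pinv(W_enc)                     # base-station-side "decoder" weights

D = W_enc @ H.reshape(-1)               # compress H to low-dimensional code word D
D_q = np.round(D * 16) / 16             # uniform quantization (stand-in bit stream)
H_rec = (W_dec @ D_q).reshape(M, N)     # reconstruct H' at the base station

print(H.shape == H_rec.shape)           # H and H' share the same matrix dimension
```

The final check mirrors Shi's statement that, generally, H and H′ have the same matrix dimension even though the feedback payload (D_q) is much smaller than H itself.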
As to claim 20, the rejection of claim 15 as listed above is incorporated herein. In addition, Wu-Shi discloses wherein the processor is configured to call and run the computer program stored in the memory to control the terminal device to:
periodically send codebook output information and a bitstream corresponding to channel information to the network device (Shi; [0103] discloses function of indicating a manner in which the UE reports the scenario-related information: For example, the UE may be notified to periodically report the scenario-related information, for example, report the scenario-related information every transmission time interval (transmission time interval, TTI), every M milliseconds, or the like).
Allowable Subject Matter
Claims 4-8, 13 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FAISAL CHOUDHURY whose telephone number is (571)270-3001. The examiner can normally be reached M-F, 8AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph Avellino, can be reached at (571)272-3905. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FAISAL CHOUDHURY/Primary Examiner, Art Unit 2478