DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-4, 12-13 and 18-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wang et al. (US Publication 2021/0182658 A1).
The applied reference has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.
In regards to claim 1, Wang et al. (US Publication 2021/0182658 A1) teaches a computer-implemented method, in a first device, comprising: receiving an indication of a neural network architectural configuration responsive to providing capability information representing at least one capability of the first device to an infrastructure component (see figures 3 and 12; see paragraph 160; the core network server 302 or the base station 120 analyzes a neural network table using any combination of component carrier information, bandwidth information, current operating conditions, UE feedback, BS feedback, QoS requirements, metrics, and so forth, to determine the configuration for the DNNs 1202, 1204, and/or 1206. The UE 110 receives the configuration information, such as by receiving an indication of a neural network formation configuration as described with reference to FIG. 7, and forms the DNNs 1204. This can include the core network server 302 and/or the base station 120 determining updates to the DNNs 1204 (and/or the DNNs 1202 and DNNs 1206) based on feedback, where the updates can include large (e.g., architectural) changes or small (e.g., parameter) changes to the DNNs and/or sub-DNNs); implementing the neural network architectural configuration at a transmit neural network of the first device, wherein the transmit neural network is paired with a jointly trained receive neural network configured to receive and process output from the transmit neural network (see paragraph 197; the network entity optionally determines a modification to the DNN configuration(s) by analyzing the feedback. In implementations, the network entity (e.g., core network server 302, base station 120) analyzes a neural network table based on the feedback to determine the modification. 
This includes determining a large modification that corresponds to one or more architectural changes to the DNN configuration(s), or a smaller modification that corresponds to one or more parameter changes to a fixed DNN architecture of the DNN configuration(s)); receiving a representation of a channel state information (CSI) estimate as an input to the transmit neural network (see paragraph 170; the base station 120 receives UE metrics from the UE 110, such as power measurements (e.g., RSSI), error metrics, timing metrics, QoS, latency, a Reference Signal Receive Power (RSRP), SINR information, CQI, CSI, Doppler feedback, etc.); generating, at the transmit neural network, a first output based on the representation of the CSI estimate, the first output representing a compressed version of the representation of a prediction of the CSI estimate for a future point in time (see paragraph 183; the core network server 302/base station 120 receives feedback from the UE 110. For example, the UE 110 communicates one or more metrics, such as BLER, SINR, CQI feedback, or a packet loss rate to the base station 120 (and/or the core network server 302 through the base station 120).
Alternately or additionally, the base station 120 generates one or more metrics, such as a Round-Trip Time (RTT) latency metric, uplink received power, uplink SINR, uplink packet errors, uplink throughput, timing measurements, power information, SINR information, CQI, CSI, or Doppler feedback, and sends the metrics as feedback to the core network server 302); and controlling a radio frequency (RF) antenna interface of the first device to transmit a first RF signal representative of the first output for receipt by a second device implementing the receive neural network (see paragraph 40; the antennas 252, the RF front end 254, the LTE transceivers 256, and/or the 5G NR transceivers 258 may be configured to support beamforming, such as Massive-Multiple-In, Multiple Out (Massive-MIMO), for the transmission and reception of communications with the UE 110).
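For illustration only, and not part of the prior-art mapping above: the claim-1 arrangement of a transmit neural network producing a compressed CSI representation for a jointly trained receive neural network can be pictured as a small encoder/decoder pair. Every dimension, layer choice, and name in this sketch is an assumed placeholder, taken neither from Wang nor from the claims.

```python
import numpy as np

rng = np.random.default_rng(0)

CSI_DIM = 64   # assumed size of the CSI estimate representation
CODE_DIM = 8   # assumed size of the compressed feedback ("first output")

# Placeholder weights standing in for jointly trained networks.
W_tx = rng.standard_normal((CODE_DIM, CSI_DIM)) / np.sqrt(CSI_DIM)
W_rx = rng.standard_normal((CSI_DIM, CODE_DIM)) / np.sqrt(CODE_DIM)

def transmit_nn(csi_estimate):
    """First device: compress the CSI estimate into the first output."""
    return np.tanh(W_tx @ csi_estimate)

def receive_nn(compressed):
    """Second device: reconstruct a CSI estimate from the received output."""
    return W_rx @ compressed

csi = rng.standard_normal(CSI_DIM)   # representation of the CSI estimate
code = transmit_nn(csi)              # compressed version transmitted over RF
recon = receive_nn(code)             # reconstruction at the paired receiver
print(code.shape, recon.shape)       # (8,) (64,)
```

The sketch only shows the data flow (64-dimensional estimate in, 8-dimensional compressed output over the air, 64-dimensional reconstruction out); it does not model the RF antenna interface or the joint training itself.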
In regards to claim 2, Wang teaches, algorithmically determining the CSI estimate based on one or more RF signals received from the second device (see paragraph 183; the base station 120 generates one or more metrics, including CSI).
In regards to claim 3, Wang teaches, wherein generating the first output further comprises generating the first output at the transmit neural network further based on a representation of a scheduling latency of a multiple-input, multiple-output (MIMO) process of the second device provided as an input to the transmit neural network (see paragraph 170; the base station 120 receives UE metrics from the UE 110, such as power measurements (e.g., RSSI), error metrics, timing metrics, QoS, latency, a Reference Signal Receive Power (RSRP), SINR information, CQI, CSI, Doppler feedback, etc.; see paragraph 40 for the MIMO support).
In regards to claim 4, Wang teaches, wherein the neural network architectural configuration is selected for the transmit neural network from a plurality of candidate neural network architectural configurations based on the scheduling latency (see paragraph 55; The neural network table 316 stores multiple different NN formation configuration elements generated using the training module 314. In some implementations, the neural network table includes input characteristics for each NN formation configuration element and/or NN formation configuration, where the input characteristics describe properties about the training data used to generate the NN formation configuration. For instance, the input characteristics can include power information, SINR information, CQI, CSI, Doppler feedback, RSS, error metrics, minimum end-to-end (E2E) latency, desired E2E latency, E2E QoS, E2E throughput, E2E packet loss ratio, cost of service, etc.).
In regards to claim 12, Wang teaches, wherein the neural network architectural configuration is selected from a plurality of neural network architectural configurations based on at least one of: the at least one capability of the first device or a current signal propagation environment of the first device (see paragraph 101; In determining the neural network formation configuration, the base station analyzes any combination of information, such as a channel type being processed by the deep neural network (e.g., downlink, uplink, data, control, etc.), transmission medium properties (e.g., power measurements, signal-to-interference-plus-noise ratio (SINR) measurements, channel quality indicator (CQI) measurements), encoding schemes, UE capabilities, BS capabilities, and so forth).
In regards to claim 13, Wang teaches, wherein receiving the indication of the neural network architectural configuration comprises at least one of: receiving an identifier associated with one of a plurality of candidate neural network architectural configurations locally stored at the first device; or receiving one or more data structures representing parameters of the neural network architectural configuration (see paragraph 38; a neural network table 216 that stores various architecture and/or parameter configurations that form a neural network, such as, by way of example and not of limitation, parameters that specify a fully-connected layer neural network architecture, a convolutional layer neural network architecture, a recurrent neural network layer, a number of connected hidden neural network layers, an input layer architecture, an output layer architecture, a number of nodes utilized by the neural network, coefficients (e.g., weights and biases) utilized by the neural network, kernel parameters, a number of filters utilized by the neural network, strides/pooling configurations utilized by the neural network, an activation function of each neural network layer, interconnections between neural network layers, neural network layers to skip, and so forth).
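For illustration only: Wang's neural network table (paragraph 38) stores architecture and parameter configurations, which is consistent with claim 13's two alternatives of receiving either an identifier into a locally stored table or the parameter data structures themselves. The keys, identifiers, and values below are assumed placeholders, not Wang's actual data layout.

```python
# Hypothetical locally stored table of candidate configurations.
NEURAL_NETWORK_TABLE = {
    "nn_config_1": {
        "architecture": "fully_connected",
        "hidden_layers": 3,
        "nodes_per_layer": 64,
        "activation": "relu",
    },
    "nn_config_2": {
        "architecture": "convolutional",
        "filters": 16,
        "kernel_size": 3,
        "activation": "tanh",
    },
}

def resolve_configuration(indication):
    """Resolve a received indication of a configuration: either an
    identifier looked up in the local table (claim 13, first option) or a
    data structure of parameters used directly (second option)."""
    if isinstance(indication, str):
        return NEURAL_NETWORK_TABLE[indication]
    return indication  # already a parameter structure

cfg = resolve_configuration("nn_config_2")
print(cfg["architecture"])  # convolutional
```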
In regards to claim 18, Wang teaches, wherein the at least one capability comprises at least one of: a processing capability; a power capability; or a sensor capability (see paragraph 51; the variations in fixed (or flexible) architecture and/or parameter configurations at each neural network are based on the processing resources (e.g., processing capabilities, memory constraints, quantization constraints (e.g., 8-bit vs. 16-bit), fixed-point vs. floating point computations, floating point operations per second (FLOPS), power availability) of the devices targeted to form the corresponding DNNs).
In regards to claim 19, Wang teaches, a device comprising: a radio frequency (RF) antenna interface (see figure 2, Antennas and RF front ends 204, 254); at least one processor coupled to the RF antenna interface (see processors 210 and 260); and a memory storing executable instructions (see Computer readable storage media 212, 262), the executable instructions configured to manipulate the at least one processor to perform the method of claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 8-10, 14, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US Publication 2021/0182658 A1) in view of Jang et al. (NPL titled "Deep Learning-based Limited Feedback Designs for MIMO Systems," provided by the applicant in the IDS).
The applied reference (Wang) has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2).
In regards to claim 8, Wang teaches, a computer-implemented method, in a first device, comprising: receiving an indication of a neural network architectural configuration responsive to providing capability information representing at least one capability of the first device to an infrastructure component (see figures 3 and 12; see paragraph 160; the core network server 302 or the base station 120 analyzes a neural network table using any combination of component carrier information, bandwidth information, current operating conditions, UE feedback, BS feedback, QoS requirements, metrics, and so forth, to determine the configuration for the DNNs 1202, 1204, and/or 1206. The UE 110 receives the configuration information, such as by receiving an indication of a neural network formation configuration as described with reference to FIG. 7, and forms the DNNs 1204. This can include the core network server 302 and/or the base station 120 determining updates to the DNNs 1204 (and/or the DNNs 1202 and DNNs 1206) based on feedback, where the updates can include large (e.g., architectural) changes or small (e.g., parameter) changes to the DNNs and/or sub-DNNs); implementing the neural network architectural configuration at a receive neural network of the first device, wherein the receive neural network is paired with a jointly trained transmit neural network configured to transmit output to the receive neural network (see paragraph 197; the network entity optionally determines a modification to the DNN configuration(s) by analyzing the feedback. In implementations, the network entity (e.g., core network server 302, base station 120) analyzes a neural network table based on the feedback to determine the modification.
This includes determining a large modification that corresponds to one or more architectural changes to the DNN configuration(s), or a smaller modification that corresponds to one or more parameter changes to a fixed DNN architecture of the DNN configuration(s)).
In further regards to claim 8, Wang teaches estimating or predicting how a signal distorts while propagating through a transmission environment (see paragraph 24).
However, Wang fails to teach, receiving, at a radio frequency (RF) antenna interface of the first device, a first RF signal from a second device implementing the transmit neural network, the first RF signal representative of a compressed representation of a predicted future channel state information (CSI) estimate; providing a representation of the first RF signal as an input to the receive neural network; generating, at the receive neural network, the predicted future CSI estimate based on the input to the receive neural network; and managing at least one multiple-input, multiple-output (MIMO) process at the first device based on the predicted future CSI estimate.
Jang teaches, receiving, at a radio frequency (RF) antenna interface of the first device, a first RF signal from a second device implementing the transmit neural network, the first RF signal representative of a compressed representation of a predicted future channel state information (CSI) estimate (see second column on page 1, first full paragraph; point-to-point MIMO system where a multi-antenna receiver sends the quantized CSI back to a multi-antenna transmitter via a feedback channel); providing a representation of the first RF signal as an input to the receive neural network (see second column on page 1, first full paragraph; pilot sequences are first conveyed from the transmitter so that the receiver can extract useful features of the CSI); generating, at the receive neural network, the predicted future CSI estimate based on the input to the receive neural network (see second column on page 1, first full paragraph; jointly design DL based limited feedback systems which include CSI prediction); and managing at least one multiple-input, multiple-output (MIMO) process at the first device based on the predicted future CSI estimate (see the teachings of Jang above; the feedback design is for MIMO systems designed in part to predict CSI).
Both Wang and Jang relate to machine learning and feedback.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to incorporate the prediction of future CSI as taught by Jang into the teaching of Wang. The motivation to do so would be to use processing resources efficiently by employing a prediction-based feedback model with reduced computational complexity (see abstract of Jang).
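For illustration only, and not part of the record: Jang's cited passage describes a receiver DNN that outputs bipolar vectors as a quantized representation of the CSI. A minimal stand-in for that quantization step, with assumed feature values, is element-wise sign mapping.

```python
import numpy as np

def quantize_csi(features):
    """Map real-valued CSI features to a bipolar (+1/-1) feedback vector,
    standing in for the bipolar output layer described by Jang."""
    return np.where(features >= 0.0, 1.0, -1.0)

# Assumed example features extracted from a pilot-aided received signal.
feats = np.array([0.3, -1.2, 0.0, 2.5])
fb = quantize_csi(feats)  # bipolar feedback vector sent over the channel
print(fb)
```

Each feature collapses to a single sign bit, which is what makes the feedback "limited": the feedback payload grows with the number of features, not with their precision.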
In regards to claims 9-10, Wang teaches, wherein the output is generated at the receive neural network further based on a representation of a scheduling latency of a MIMO process of the first device provided as an input to the receive neural network (see paragraph 170; the base station 120 receives UE metrics from the UE 110, such as power measurements (e.g., RSSI), error metrics, timing metrics, QoS, latency, a Reference Signal Receive Power (RSRP), SINR information, CQI, CSI, Doppler feedback, etc.; see paragraph 40 for the MIMO support), and wherein the neural network architectural configuration is selected for the receive neural network from a plurality of candidate neural network architectural configurations based on the scheduling latency (see paragraph 55; The neural network table 316 stores multiple different NN formation configuration elements generated using the training module 314. In some implementations, the neural network table includes input characteristics for each NN formation configuration element and/or NN formation configuration, where the input characteristics describe properties about the training data used to generate the NN formation configuration. For instance, the input characteristics can include power information, SINR information, CQI, CSI, Doppler feedback, RSS, error metrics, minimum end-to-end (E2E) latency, desired E2E latency, E2E QoS, E2E throughput, E2E packet loss ratio, cost of service, etc.).
In further regards to claims 9-10, Wang and Jang in combination teach all the limitations of the parent claims as stated above, in addition to certain limitations of claim 9. However, Wang fails to teach that the predicted future CSI estimate further comprises generating the predicted future CSI estimate.
Jang, however, teaches that the predicted future CSI estimate further comprises generating the predicted future CSI estimate (see second column on page 1, first full paragraph; jointly design DL based limited feedback systems which include CSI prediction).
Both Wang and Jang relate to machine learning and feedback.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to incorporate the prediction of future CSI as taught by Jang into the teaching of Wang. The motivation to do so would be to use processing resources efficiently by employing a prediction-based feedback model with reduced computational complexity (see abstract of Jang).
In regards to claim 14, Wang and Jang in combination teach all the limitations of the parent claims as stated above.
Wang fails to teach, generating, at a transmit neural network of the first device, a CSI pilot signal; and controlling the RF antenna interface of the first device to transmit a second RF signal representative of the CSI pilot signal for receipt by the second device.
Jang however teaches generating, at a transmit neural network of the first device, a CSI pilot signal; and controlling the RF antenna interface of the first device to transmit a second RF signal representative of the CSI pilot signal for receipt by the second device (see second column on page 1, first full paragraph; The receiver DNN accepts the pilot-aided received signal as an input and is designed to output bipolar vectors as a quantized representation of the CSI).
Both Wang and Jang relate to machine learning and feedback.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to incorporate the prediction of future CSI as taught by Jang into the teaching of Wang. The motivation to do so would be to use processing resources efficiently by employing a prediction-based feedback model with reduced computational complexity (see abstract of Jang).
In regards to claim 17, Wang teaches, wherein the at least one MIMO process comprises at least one of: a beamforming process; a space-time coding process; or a multiple-user MIMO process (see paragraph 40; the antennas 252, the RF front end 254, the LTE transceivers 256, and/or the 5G NR transceivers 258 may be configured to support beamforming, such as Massive-Multiple-In, Multiple Out (Massive-MIMO), for the transmission and reception of communications with the UE 110).
In regards to claim 20, Wang teaches, a device comprising: a radio frequency (RF) antenna interface (see figure 2, Antennas and RF front ends 204, 254); at least one processor coupled to the RF antenna interface (see processors 210 and 260); and a memory storing executable instructions (see Computer readable storage media 212, 262), the executable instructions configured to manipulate the at least one processor to perform the method of claim 8.
This rejection under 35 U.S.C. 103 might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B); or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement. See generally MPEP § 717.02.
Allowable Subject Matter
Claims 5-7, 11 and 15-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
In regards to claim 5, the cited prior art fails to teach, in addition to the other limitations, generating the first output at the transmit neural network further based on sensor data input to the transmit neural network from one or more sensors of the first device.
In regards to claim 6, the cited prior art fails to teach, in addition to the other limitations: receiving a representation of a CSI pilot signal as an input of a receive neural network of the first device; and generating, at the receive neural network, a second output based on the representation of the CSI pilot signal, the second output including the representation of the CSI estimate.
In regards to claim 11, the cited prior art fails to teach, generating the predicted future CSI estimate comprises generating the predicted future CSI estimate at the receive neural network further based on sensor data input to the receive neural network from one or more sensors of the first device.
In regards to claim 15, the cited prior art fails to teach, in addition to the other limitations, wherein generating the CSI pilot signal comprises generating the CSI pilot signal at the transmit neural network further based on at least one of: a carrier frequency of a channel associated with the CSI estimate; at least one operational parameter for the RF antenna interface of the first device; or sensor data from one or more sensors of the first device.
In regards to claim 16, the cited prior art fails to teach, in addition to the other limitations, wherein generating the predicted future CSI estimate further comprises generating the predicted future CSI estimate at the transmit neural network based on at least one of: sensor data from one or more sensors of the first device or a carrier frequency of a channel associated with the predicted future CSI estimate.
Prior art Fehri et al. (US Publication 2022/0060364 A1) teaches, with respect to an artificial neural network (ANN), at least two main architectural candidates for an ANN precoding engine 32 in the radio system (e.g., network node 16), as follows: (1) the ANN precoding engine 32 comprises only one entity, which is essentially an artificial neural network; or (2) the ANN precoding engine 32 comprises multiple signal processing operations. For example, in addition to the artificial neural network, an ANN precoding engine 32 may further include a zero EVM/ACLR projection, a low-PAPR projection, an FFT, an IFFT, a clip and filter operation, etc. (see figures 8-10 and paragraphs 100-102).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAY P PATEL whose telephone number is (571)272-3086. The examiner can normally be reached M-F 9:30-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faruk Hamza can be reached at 571-272-7969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAY P PATEL/ Primary Examiner, Art Unit 2466