DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments set forth in the Remarks section of the submission dated 22 Dec. 2025 have been fully considered but they are not persuasive for at least the following reasons. Particularly, at pages 6-8 of the Remarks, the Applicant argues:
Yoo does not disclose or suggest the claimed IW scheme, which includes adding a neural network model in case that the data retransmission is performed. For example, Yoo does not disclose or suggest adding nodes or neurons to the neural network, nor changing the number of nodes or neurons in the neural network. In general, Yoo does not disclose or suggest "wherein the data retransmission is performed based on an incremental weight (IW) scheme, wherein the IW scheme includes adding a neural network model in case that data retransmission is performed" as described in claim feature (A) of amended claims 1 and 13.
(emphases in original).
In response to Applicant's argument that the references fail to show certain features of the invention, it is noted that some of the features upon which Applicant relies (i.e., “adding nodes or neurons to the neural network, nor changing the number of nodes or neurons in the neural network”) are not recited in the rejected claim(s).
With respect to Applicant’s arguments that YOO
expressly discloses that its neural network model is updated by changing weights of the neural network model. In particular, Yoo's initial weights are not fixed, but instead they are changed to other weights. Therefore, Yoo does not disclose or suggest “wherein the first transmission weight is fixed in the IW scheme”
as claimed in feature (B) of amended claims 1 and 13, the Examiner disagrees. For example, YOO explicitly discloses that the “UE and the gNB may save and use previously stored CSI and encode and decode only a change in the CSI from a previous instance,” ¶ 0083, and “the UE may encode only a changed part of the CSI (compared to previous CSI), and thus provide a smaller size CSI feedback with the same reconstruction quality.” ¶ 0064. Thus, the Examiner finds YOO’s persistence of existing weights, together with YOO’s process for learning incremental weights that capture new channel dynamics, to correspond to the claimed feature, “wherein the first transmission weight is fixed in the IW scheme.”
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. § 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-14 are rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Regarding claims 1 and 13, the Examiner finds the limitations, wherein a first transmission weight applied to a first neural network model for the data transmission, and wherein a second transmission weight applied to a second neural network model for the data transmission, to be unclear due to grammatical and idiomatic errors. For purposes of compact prosecution, the Examiner interprets these limitations as follows:
wherein a first transmission weight [is] applied to a first neural network model for the data transmission,
wherein a second transmission weight [is] applied to a second neural network model for the data transmission,
If this is not correct, clarification in any subsequent submission is requested.
Further, the Examiner finds the limitation, wherein the IW scheme includes adding a neural network model in case that data transmission is performed, to be unclear as to what neural network model is being added. For purposes of compact prosecution, the Examiner interprets the limitation as follows: wherein the IW scheme includes adding [[a]] [the second] neural network model in case that data transmission is performed,
If this is not correct, clarification in any subsequent submission is requested.
Because of these ambiguities, one of ordinary skill in the art would not be able to reasonably assess the claim scope, for example, as to the extent to which this non-functional descriptive matter may further limit the positively recited receiving and performing steps comprising the method of transmitting data by user equipment (UE). The ambiguities render claims 1 and 13 indefinite under 35 U.S.C. § 112(b). Claims 2-12 and 14 are indefinite under 35 U.S.C. § 112(b) at least by virtue of their dependencies. Accordingly, appropriate correction is required.
In addition, claim 3 is indefinite for the additional reason that the limitation, applied weights to the UE simultaneously based on a minimum rate, is found to be unclear due to grammatical and/or idiomatic inconsistency. For purposes of compact prosecution, the Examiner interprets the limitation as follows: [apply] weights to the UE simultaneously based on a minimum rate,
If this is not correct, clarification in any subsequent submission is requested.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in the Office Action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 5-7, and 11-14 are rejected under 35 U.S.C. § 103 as being unpatentable over US 2021/0273707 (hereinafter, “YOO”) in view of US 11,171,764 B1 (hereinafter, “BENNETT”).
Regarding claim 1, YOO discloses:
A method comprising:
receiving scheduling information related to data transmission; (¶ 0048: [T]ransmit processor 264 may receive and process data from a data source 262 and control information (e.g., for reports comprising RSRP, RSSI, RSRQ, CQI, and/or the like) from controller/processor 280; ¶ 0043: UE 120 may perform scheduling operations, resource selection operations, and/or other operations described elsewhere herein as being performed by the base station 110)
performing the data transmission based on the received scheduling information; (¶ 0049: [S]cheduler 246 may schedule UEs for data transmission on the downlink and/or uplink)
. . .
wherein a first transmission weight applied to a first neural network model for the data transmission, (¶¶ 0093-0094: UE 820 may create the neural network model, based at least in part on information about neural network structures, layers, weights, and/or the like. As shown by reference number 835, UE 820 (or BS 810) may train and update the CSI encoder and the CSI decoder. UE 820 may train (or further train) a neural network model; ¶ 0076: The device may train the neural network model to generate a trained neural network model. The device may provide training data to the neural network model and receive predictions based at least in part on providing the training data to the neural network model. Based at least in part on the predictions, the device may update the neural network model and provide the estimates to the updated neural network model. The device may repeat this process until a threshold level of accuracy for predictions are generated by the neural network model. The device may obtain encoder weights and decoder weights based at least in part on the predictions. These weights may be distributed to an encoder in a CSI transmitting device (e.g., UE) and a decoder in a CSI receiving device (e.g., gNB), or the weights may be part of an initial configuration for the UE and gNB that is specified beforehand)
wherein a second transmission weight applied to a second neural network model for the data transmission (¶ 0125: [P]rocess 1100 includes obtaining a second CSI instance for the channel, training a neural network model based at least in part on encoding the second CSI instance into second encoded CSI, decoding the second encoded CSI into second decoded CSI, and comparing the second CSI instance and the second decoded CSI, and updating the one or more encoder weights based at least in part on training the neural network model)
wherein the data transmission is performed based on an incremental weight (IW) scheme, (¶ 0082: The UE and the gNB may save and use previously stored CSI and encode and decode only a change in the CSI from a previous instance. This may provide for less CSI feedback overhead and improve performance; ¶ 0064: UE may encode only a changed part of the CSI (compared to previous CSI), and thus provide a smaller size CSI feedback with the same reconstruction quality. A receiving device, such as a base station, may receive the changed part as encoded CSI and decode the changed part using decoder weights from the training. The base station may determine decoded CSI from decoding the changed part and from previously decoded CSI. If only a changed part is sent as encoded CSI, the UE and the base station may transmit and receive a much smaller CSI. The UE may save power, and processing and signaling resources, by providing accurate CSI with reduced overhead)
wherein the IW scheme includes adding a neural network model in case that data transmission is performed, and (¶ 0084: CSI sequence encoder 620 may determine a previously encoded CSI instance h(t−1) from memory 630 and compare the intermediate encoded CSI m(t) and the previously encoded CSI instance h(t−1) to determine a change n(t) in the encoded CSI. The change n(t) may be a part of a channel estimate that is new and may not be predicted by the decoder. The encoded CSI at this point may be represented by [n(t), henc(t)] = genc,θ(m(t), henc(t−1)). CSI sequence encoder 620 may provide this change n(t) on the PUSCH or PUCCH, and the UE may transmit the change (e.g., information indicating the change) n(t) as the encoded CSI on the UL channel to the gNB. Because the change is smaller than an entire CSI instance, the UE may send a smaller payload for the encoded CSI on the UL channel)
wherein the first transmission weight is fixed in the IW scheme. (¶ 0084: Because the change is smaller than an entire CSI instance, the UE may send a smaller payload for the encoded CSI on the UL channel)
YOO does not explicitly disclose:
data retransmission
receiving indication of retransmission related to the data transmission from a base station; and
performing data retransmission based on the received indication of retransmission,
In the same field of endeavor, however, BENNETT teaches:
wherein a second transmission weight applied to a second neural network model for the data retransmission, (Col. 49, ll. 52-54: [C]hanges of parameters can be monitored and learned by the machine learning so that the retransmission process can be adjusted)
receiving indication of retransmission related to the data transmission from a base station; and (Col. 49, l. 64 – col. 50, l. 13: [I]n step 1370, the full duplex transceiver, some other component of the waveguide, or another device that can detect interference monitors the transmission medium for an interference. . . . [I]n step 1375, the full duplex transceiver identifies one or more data frames of the plurality of data frames that were transmitted during a period of the interference detected in step 1370. [I]n step 1377, the full duplex transceiver, some other component of the waveguide, or another device detects whether the interference is no longer present on the transmission medium. If the transmission channel is clear, then the process continues at step 1380)
performing data retransmission based on the received indication of retransmission, (Col. 50, ll. 14-16: In step 1380, the full duplex transceiver retransmits the one or more data frames to the receiver of another waveguide device at the other end of the transmission medium)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify YOO’s second training of a neural network model to provide data retransmission as taught by BENNETT to retransmit corrupted data that is affected by the noise/interference without waiting for an ACK/NACK message from the corresponding receiver at the other end of the transmission medium, so as to decrease latency in communications affected by the noise/interference. See BENNETT, at Col. 49, ll. 10-17.
Regarding claim 2, the combination of YOO and BENNETT, as applied above, renders obvious the method of claim 1. YOO further discloses:
wherein the first transmission weight corresponds to a first layer of the neural network model, and the second transmission weight corresponds to a second layer of the neural network model, and wherein the second layer is a layer that receives the data and data applied the first transmission weight as an input. (¶ 0054: As shown in FIG. 5, the neural network model may be a succession of layers that each operate on input and provide an output. The layers may include an input layer, an output layer that produces output variables (e.g., encoder or decoder weights), and hidden layers between the input layer and the output layer. The layers may include one or more feed forward layers (e.g., one or more fully-connected pre-processing layers))
Regarding claim 5, the combination of YOO and BENNETT, as applied above, renders obvious the method of claim 1. YOO does not explicitly disclose:
wherein, data which is outputted from a third neural network model is retransmitted to a base station based on indication of retransmission related to the data retransmission being received from the base station, wherein a third transmission weight is applied to the third neural network model.
In the same field of endeavor, however, BENNETT teaches:
wherein, data which is outputted from a third neural network model is retransmitted to a base station based on indication of retransmission related to the data retransmission being received from the base station, wherein a third transmission weight is applied to the third neural network model. (Col. 50, ll. 19-32: [I]n FIG. 13B, . . . some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. . . . FIGS. 13A-13B can be combined in whole or in part with one another, and/or can be combined in whole or in part with other embodiments of the subject disclosure, and/or can be adapted for use in whole or in part with other embodiments of the subject disclosure)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify YOO’s second training of a neural network model to provide data re-retransmission as taught by BENNETT to retransmit corrupted data that is affected by the noise/interference without waiting for an ACK/NACK message from the corresponding receiver at the other end of the transmission medium, so as to decrease latency in communications affected by the noise/interference. See BENNETT, at Col. 49, ll. 10-17.
Regarding claim 6, the combination of YOO and BENNETT, as applied above, renders obvious the method of claim 5. YOO further discloses:
wherein the first transmission weight and the second transmission weight are fixed in the IW scheme. (¶ 0125: [P]rocess 1100 includes obtaining a second CSI instance for the channel, training a neural network model based at least in part on encoding the second CSI instance into second encoded CSI, decoding the second encoded CSI into second decoded CSI, and comparing the second CSI instance and the second decoded CSI, and updating the one or more encoder weights based at least in part on training the neural network model [Although YOO does not explicitly disclose that its updated encoder weight, learned by the neural network model, is based on the IW scheme with the first and second transmission weights being fixed, YOO implies—by virtue of there being no mention of inputting those encoder weights into a third training—that the first and second encoder weights remain unchanged, i.e., “fixed,” thereby suggesting that the first transmission weight and the second transmission weight are fixed in the IW scheme])
Regarding claim 7, the combination of YOO and BENNETT, as applied above, renders obvious the method of claim 1. YOO further discloses:
wherein the UE shares weight-related information based on the artificial neural network with the base station in advance. (¶ 0094: As shown by reference number 835, UE 820 . . . may inform BS 810 that a hidden state is reset)
Regarding claim 11, the combination of YOO and BENNETT, as applied above, renders obvious the method of claim 1. YOO further discloses:
wherein the base station decodes data applied the first transmission weight based on a first reception weight corresponding to the first transmission weight. (¶ 0010: [A] base station that receives communications on a channel from a UE may include memory and one or more processors operatively coupled to the memory. The memory and the one or more processors may be configured to receive first encoded CSI from the UE. The first encoded CSI may be a first CSI instance for the channel that is encoded by the UE, based at least in part on one or more encoder weights that correspond to a neural network model associated with a CSI encoder and a CSI decoder. The memory and the one or more processors may be configured to decode the first encoded CSI into first decoded CSI based at least in part on one or more decoder weights that correspond to the neural network model; ¶ 0067: gNB may receive the encoded CSI, and CSI decoder 420 may decode the encoded CSI into decoded CSI using decoder parameters 425. Decoder parameters 425 may include decoder weights obtained from machine learning, such as from the training of the neural network model associated with a CSI encoder and a CSI decoder)
Regarding claim 12, the combination of YOO and BENNETT, as applied above, renders obvious the method of claim 11. YOO further discloses:
wherein, based on the base station receiving retransmission for the data from the UE, the base station decodes data applied the second transmission weight based on a second reception weight corresponding to the second transmission weight, and (¶ 0125: [P]rocess 1100 includes obtaining a second CSI instance [mapped to retransmission] for the channel, training a neural network model based at least in part on encoding the second CSI instance into second encoded CSI, decoding the second encoded CSI into second decoded CSI, and comparing the second CSI instance and the second decoded CSI, and updating the one or more encoder weights based at least in part on training the neural network model)
wherein the base station reconstructs data by using the data decoded based on the first reception weight and the data decoded based on the second reception weight together. (¶ 0078: For each CSI feedback instance, the UE feeds back m=ƒenc,θ(H) for the estimated downlink channel H. The gNB may reconstruct an approximate downlink channel via Ĥ=ƒdec,ϕ(m))
Regarding claim 13, YOO discloses:
User equipment (UE) (UE 120 / 820) configured to operate in a wireless communication system (wireless network 100), comprising:
at least one transceiver (receive processor 258 / transmit processor 264);
at least one processor (controller/processor 280); and
at least one memory (memory 282) that is coupled with the at least one processor in an operable manner and is configured to store instructions that make, based on being executed, the at least one processor perform a specific operation,
wherein the specific operation is configured to:
control the at least one transceiver to receive scheduling information related to data transmission, (¶ 0048: [T]ransmit processor 264 may receive and process data from a data source 262 and control information (e.g., for reports comprising RSRP, RSSI, RSRQ, CQI, and/or the like) from controller/processor 280; ¶ 0043: UE 120 may perform scheduling operations, resource selection operations, and/or other operations described elsewhere herein as being performed by the base station 110)
perform the data transmission based on the received scheduling information, (¶ 0049: [S]cheduler 246 may schedule UEs for data transmission on the downlink and/or uplink)
. . .
wherein a first transmission weight applied to a first neural network model for the data transmission, (¶¶ 0093-0094: UE 820 may create the neural network model, based at least in part on information about neural network structures, layers, weights, and/or the like. As shown by reference number 835, UE 820 (or BS 810) may train and update the CSI encoder and the CSI decoder. UE 820 may train (or further train) a neural network model; ¶ 0076: The device may train the neural network model to generate a trained neural network model. The device may provide training data to the neural network model and receive predictions based at least in part on providing the training data to the neural network model. Based at least in part on the predictions, the device may update the neural network model and provide the estimates to the updated neural network model. The device may repeat this process until a threshold level of accuracy for predictions are generated by the neural network model. The device may obtain encoder weights and decoder weights based at least in part on the predictions. These weights may be distributed to an encoder in a CSI transmitting device (e.g., UE) and a decoder in a CSI receiving device (e.g., gNB), or the weights may be part of an initial configuration for the UE and gNB that is specified beforehand)
wherein a second transmission weight applied to a second neural network model for the data transmission, (¶ 0125: [P]rocess 1100 includes obtaining a second CSI instance for the channel, training a neural network model based at least in part on encoding the second CSI instance into second encoded CSI, decoding the second encoded CSI into second decoded CSI, and comparing the second CSI instance and the second decoded CSI, and updating the one or more encoder weights based at least in part on training the neural network model)
wherein the data transmission is performed based on an incremental weight (IW) scheme, (¶ 0082: The UE and the gNB may save and use previously stored CSI and encode and decode only a change in the CSI from a previous instance. This may provide for less CSI feedback overhead and improve performance; ¶ 0064: UE may encode only a changed part of the CSI (compared to previous CSI), and thus provide a smaller size CSI feedback with the same reconstruction quality. A receiving device, such as a base station, may receive the changed part as encoded CSI and decode the changed part using decoder weights from the training. The base station may determine decoded CSI from decoding the changed part and from previously decoded CSI. If only a changed part is sent as encoded CSI, the UE and the base station may transmit and receive a much smaller CSI. The UE may save power, and processing and signaling resources, by providing accurate CSI with reduced overhead)
wherein the IW scheme includes adding a neural network model in case that data transmission is performed, and (¶ 0084: CSI sequence encoder 620 may determine a previously encoded CSI instance h(t−1) from memory 630 and compare the intermediate encoded CSI m(t) and the previously encoded CSI instance h(t−1) to determine a change n(t) in the encoded CSI. The change n(t) may be a part of a channel estimate that is new and may not be predicted by the decoder. The encoded CSI at this point may be represented by [n(t), henc(t)] = genc,θ(m(t), henc(t−1)). CSI sequence encoder 620 may provide this change n(t) on the PUSCH or PUCCH, and the UE may transmit the change (e.g., information indicating the change) n(t) as the encoded CSI on the UL channel to the gNB. Because the change is smaller than an entire CSI instance, the UE may send a smaller payload for the encoded CSI on the UL channel)
wherein the first transmission weight is fixed in the IW scheme. (¶ 0125: [P]rocess 1100 includes obtaining a second CSI instance for the channel, training a neural network model based at least in part on encoding the second CSI instance into second encoded CSI, decoding the second encoded CSI into second decoded CSI, and comparing the second CSI instance and the second decoded CSI, and updating the one or more encoder weights based at least in part on training the neural network model [Although YOO does not explicitly disclose that its updated encoder weight, learned by the neural network model, is based on the IW scheme with the first transmission weight being fixed, YOO implies—by virtue of there being no mention of inputting the initial encoder weight into the second training—that the first encoder weight remains unchanged, i.e., “fixed,” thereby suggesting that the first transmission weight is fixed in the IW scheme])
YOO does not explicitly disclose:
data retransmission
control the at least one transceiver to receive indication of retransmission related to the data transmission from a base station, and
perform data retransmission based on the received indication of retransmission,
In the same field of endeavor, however, BENNETT teaches:
wherein a second transmission weight applied to a second neural network model for the data retransmission, (Col. 49, ll. 52-54: [C]hanges of parameters can be monitored and learned by the machine learning so that the retransmission process can be adjusted)
control the at least one transceiver to receive indication of retransmission related to the data transmission from a base station, and (Col. 49, l. 64 – col. 50, l. 13: [I]n step 1370, the full duplex transceiver, some other component of the waveguide, or another device that can detect interference monitors the transmission medium for an interference. . . . [I]n step 1375, the full duplex transceiver identifies one or more data frames of the plurality of data frames that were transmitted during a period of the interference detected in step 1370. [I]n step 1377, the full duplex transceiver, some other component of the waveguide, or another device detects whether the interference is no longer present on the transmission medium. If the transmission channel is clear, then the process continues at step 1380)
perform data retransmission based on the received indication of retransmission, (Col. 50, ll. 14-16: In step 1380, the full duplex transceiver retransmits the one or more data frames to the receiver of another waveguide device at the other end of the transmission medium)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify YOO’s second training of a neural network model to provide data retransmission as taught by BENNETT to retransmit corrupted data that is affected by the noise/interference without waiting for an ACK/NACK message from the corresponding receiver at the other end of the transmission medium, so as to decrease latency in communications affected by the noise/interference. See BENNETT, at Col. 49, ll. 10-17.
Regarding claim 14, the combination of YOO and BENNETT, as applied above, renders obvious the user equipment (UE) of claim 13. YOO further discloses:
wherein the UE communicates with at least one of a moving terminal, a network, and an autonomous vehicle apart from a vehicle including the UE. (¶ 0043: UEs 120 may communicate using peer-to-peer (P2P) communications, device-to-device (D2D) communications, a vehicle-to-everything (V2X) protocol (e.g., which may include a vehicle-to-vehicle (V2V) protocol, a vehicle-to-infrastructure (V2I) protocol, and/or the like), a mesh network, and/or the like)
Claims 3 and 4 are rejected under 35 U.S.C. § 103 as being unpatentable over YOO in view of BENNETT, as applied above, and further in view of US 2020/0259505 (hereinafter, “XU”).
Regarding claim 3, the combination of YOO and BENNETT, as applied above, renders obvious the method of claim 1. YOO further discloses:
wherein the neural network model is configured to:
applied weights to the UE simultaneously based on a minimum rate, (¶ 0059 (equation reproduced as an image in the original): [That is, multiple weights w are updated in parallel over a time interval.])
YOO does not explicitly disclose:
based on initial transmission being performed for the data, puncture other weights than the first transmission weight among the applied transmission weights, and
based on retransmission being performed for the data, puncture other weights than the second transmission weight among the applied transmission weights.
In the same field of endeavor, however, XU teaches:
based on initial transmission being performed for the data, puncture other weights than the first transmission weight among the applied transmission weights, and (¶ 0106: [B]its corresponding to the smallest row weights may be punctured [Implying that bits corresponding to some weights are punctured, while other bits are not.])
based on retransmission being performed for the data, puncture other weights than the second transmission weight among the applied transmission weights. (¶ 0106: [B]its corresponding to the smallest row weights may be punctured [Implying that bits corresponding to some weights are punctured, while other bits are not.])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify YOO’s data transmission procedure to provide for puncturing as taught by XU to select bits for modification (e.g., puncturing and/or shortening) based on row weights of a Hadamard matrix, so as to implement an efficient rate-matching scheme for channels using polar codes. See XU, at ¶ 0106.
Regarding claim 4, the combination of YOO, BENNETT, and XU, as applied above, renders obvious the method of claim 3. YOO does not explicitly disclose:
wherein a puncturing order of the applied transmission weights is determined, and wherein the puncturing order is determined based on at least one of information on a transmission weight value and performance information based on a transmission weight.
In the same field of endeavor, however, XU teaches:
wherein a puncturing order of the applied transmission weights is determined, and wherein the puncturing order is determined based on at least one of information on a transmission weight value and performance information based on a transmission weight. (¶ 0095: Example Unified Pattern for Puncturing and Shortening Polar Codes - Aspects of the present disclosure relate to a rate-matching scheme for transmission of information using polar codes. Rate matching is a process whereby the number of bits to be transmitted is matched to the available bandwidth of the number of bits allowed to be transmitted (e.g., the number of bits that can be carried in an allocation of transmission resources). In certain instances, the amount of data to be transmitted . . . exceeds the available bandwidth, in which case a certain portion of the data to be transmitted will be omitted from the transmission using a technique called puncturing and/or a technique called shortening)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify YOO’s data transmission procedure to provide a unified pattern for puncturing as taught by XU to select bits for modification (e.g., puncturing and/or shortening) based on row weights of a Hadamard matrix, so as to implement an efficient rate-matching scheme for channels using polar codes. See XU, at ¶ 0106.
Claim 8 is rejected under 35 U.S.C. § 103 as being unpatentable over YOO in view of BENNETT, as applied above, and further in view of US 2020/0162212 (hereinafter, “LIU”).
Regarding claim 8, the combination of YOO and BENNETT, as applied above, renders obvious the method of claim 7. YOO does not explicitly disclose:
wherein the UE receives indication of additional weight information related to the data retransmission from the base station through downlink control information (DCI) with the indication of retransmission related to the data transmission.
In the same field of endeavor, however, LIU teaches:
wherein the UE receives indication of additional weight information related to the data retransmission from the base station through downlink control information (DCI) with the indication of retransmission related to the data transmission. (¶ 0030: FIG. 9 schematically illustrates a two-DCI solution for eMBB and URLLC multiplexing according to an embodiment of the present disclosure, wherein two segments are used and the second segment is punctured with retransmission; ¶ 0058: The DCI can be transmitted to the terminal device. With the DCI configuration parameter, the terminal device may learn the time-frequency resource for the DCI from the DCI configuration parameter and monitor the DCI within the time-frequency resource)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify YOO’s data transmission procedure to include DCI configuration parameters as taught by LIU, thereby providing the UE with an indication of time-frequency resources for the DCI, so as to support a DCI monitoring occasion change due to numerology and scheduling unit size. See LIU, at ¶ 0019.
Claims 9 and 10 are rejected under 35 U.S.C. § 103 as being unpatentable over YOO in view of BENNETT and LIU, as applied above, and further in view of XU.
Regarding claim 9, the combination of YOO, BENNETT, and LIU, as applied above, renders obvious the method of claim 8. YOO does not explicitly disclose:
wherein the additional weight information related to the data retransmission includes information on at least one of start position information of the weight for the data retransmission and length information of the weight for the data retransmission among weight vectors.
In the same field of endeavor, however, XU teaches:
wherein the additional weight information related to the data retransmission includes information on at least one of start position information of the weight for the data retransmission and length information of the weight for the data retransmission among weight vectors. (¶ 0095: Example Unified Pattern for Puncturing and Shortening Polar Codes - Aspects of the present disclosure relate to a rate-matching scheme for transmission of information using polar codes. Rate matching is a process whereby the number of bits to be transmitted is matched to the available bandwidth of the number of bits allowed to be transmitted (e.g., the number of bits that can be carried in an allocation of transmission resources). In certain instances, the amount of data to be transmitted . . . exceeds the available bandwidth, in which case a certain portion of the data to be transmitted will be omitted from the transmission using a technique called puncturing and/or a technique called shortening)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify YOO’s data transmission procedure to provide a unified pattern for puncturing as taught by XU to select bits for modification (e.g., puncturing and/or shortening) based on row weights of a Hadamard matrix, so as to implement an efficient rate-matching scheme for channels using polar codes. See XU, at ¶ 0106.
Regarding claim 10, the combination of YOO, BENNETT, LIU, and XU, as applied above, renders obvious the method of claim 9. YOO further discloses:
wherein the second transmission weight is determined based on the additional weight information. (¶ 0083: The UE and the gNB may save and use previously stored CSI and encode and decode only a change in the CSI from a previous instance)
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Garth D Richmond whose telephone number is (703)756-4559. The Examiner can normally be reached M-F 8 a.m. - 5 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Kathy Wang-Hurst can be reached at 571-270-5371. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GARTH D RICHMOND/Examiner, Art Unit 2644
/KATHY W WANG-HURST/Supervisory Patent Examiner, Art Unit 2644