DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
1. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
2. Claims 1, 3, 4, 10, 12, and 13 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Li (US PG Pub. No. 2025/0096972).
As per claim 1:
Li teaches a method performed by an electronic device (see abstract: teaches a processing method performed by a terminal), comprising:
partitioning channel state information (CSI) into one or more discrete elements based on a predetermined dimension (see paragraph [0047], which discloses performing grouping preprocessing (construed as said partitioning of CSI) on initial channel state information in a target dimension in order to obtain a plurality of pieces of first channel state information; paragraph [0051] discloses that said target dimension may be the frequency domain dimension, the space domain dimension, the layer dimension, or the time domain dimension);
categorizing the partitioned CSI into one or more bins (see paragraph [0060]: the plurality of pieces of first channel state information, via one-to-one correspondence, are classified into resource granularity groups) having an equal length (see paragraph [0217]: each resource granularity group is associated with the same number of continuous subbands; for example, as shown in Figure 11, each of the eight groups is made up of 4 continuous subbands);
and encoding the categorized partitioned CSI (see paragraph [0156]: after dividing every grouping granularity number of resource granularities into a resource granularity group, the first channel state information is encoded in sequence and fed back to the base station, where the plurality of pieces of channel state information are combined in sequence).
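For illustration only (this sketch is not part of the record or of the cited Li reference), the partition-and-bin operation mapped above can be pictured as splitting a CSI array along a chosen dimension into equal-length groups; all function and variable names below are hypothetical.

```python
import numpy as np

def partition_and_bin(csi, num_bins, axis=0):
    """Split a CSI array into equal-length bins along one dimension.

    Illustrative sketch only: `csi`, `num_bins`, and `axis` are assumed
    names, not terms drawn from Li or the claims. Assumes the dimension
    length is divisible by `num_bins`.
    """
    length = csi.shape[axis]
    bin_len = length // num_bins  # equal length per bin
    # Keep only num_bins * bin_len entries, then split into equal bins.
    trimmed = np.take(csi, range(num_bins * bin_len), axis=axis)
    return np.split(trimmed, num_bins, axis=axis)

# Example: 32 subbands split into 8 bins of 4 subbands each, mirroring
# the eight groups of four continuous subbands cited from Li's Figure 11.
csi = np.arange(32 * 2, dtype=float).reshape(32, 2)
bins = partition_and_bin(csi, num_bins=8, axis=0)
```

The equal-length constraint is what makes the downstream per-bin encoding uniform, which is the feature the rejection maps to Li's resource granularity groups.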
As per claim 3:
Li teaches the method of claim 1, wherein the predetermined dimension is based on at least one of a frequency (paragraph [0051] discloses that said target dimension may be the frequency domain dimension, the space domain dimension, the layer dimension, or the time domain dimension), a number of base station antennas (see paragraphs [0210]-[0211]: the number of transmitting antennas of a base station), and a number of user equipment antennas (see paragraphs [0210]-[0211]: the number of receiving antennas of the user equipment).
As per claim 4:
Li teaches the method of claim 1, further comprising:
zero-padding or interpolating at least one of the one or more categorized partitioned CSI bins to achieve the equal length among each of the bins (see paragraph [0091], by padding with the resource granularity, the numbers of resource granularities in respective resource granularity groups obtained through classification are ensured to be the same).
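For illustration only (not part of the record), the padding idea cited from Li paragraph [0091] amounts to extending shorter bins so all bins reach a common length; the helper name and default pad value below are assumptions.

```python
import numpy as np

def pad_bins_to_equal_length(bins, pad_value=0.0):
    """Zero-pad shorter bins so every bin has the same length.

    Hypothetical sketch of the equal-length padding described in the
    claim; interpolation would be an alternative way to reach the same
    common length.
    """
    target = max(b.shape[0] for b in bins)
    padded = []
    for b in bins:
        deficit = target - b.shape[0]
        if deficit:
            # Append pad_value entries until this bin reaches the target.
            filler = np.full((deficit,) + b.shape[1:], pad_value)
            b = np.concatenate([b, filler])
        padded.append(b)
    return padded

bins = [np.ones(4), np.ones(3), np.ones(2)]
equal = pad_bins_to_equal_length(bins)  # every bin now has length 4
```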
As per claim 10:
Li teaches an electronic device (see Figure 7, paragraph [0195] is a block diagram of a terminal), comprising:
a memory device (see Figure 7, memory 102),
and a processor configured to execute instructions stored on the memory device, wherein the instructions cause the processor (see paragraph [0197], memory 102 with one or more programs stored thereon, where the one or more programs, upon being executed by the at least one processor 101, causes the at least one processor 101 to implement any one channel state information processing method of the first aspect in the embodiment of the present disclosure) to:
partition channel state information (CSI) into one or more discrete elements based on a predetermined dimension (see paragraph [0047], which discloses performing grouping preprocessing (construed as said partitioning of CSI) on initial channel state information in a target dimension in order to obtain a plurality of pieces of first channel state information; paragraph [0051] discloses that said target dimension may be the frequency domain dimension, the space domain dimension, the layer dimension, or the time domain dimension);
categorize the partitioned CSI into one or more bins (see paragraph [0060]: the plurality of pieces of first channel state information, via one-to-one correspondence, are classified into resource granularity groups) having an equal length (see paragraph [0217]: each resource granularity group is associated with the same number of continuous subbands; for example, as shown in Figure 11, each of the eight groups is made up of 4 continuous subbands);
and encode the categorized partitioned CSI (see paragraph [0156]: after dividing every grouping granularity number of resource granularities into a resource granularity group, the first channel state information is encoded in sequence and fed back to the base station, where the plurality of pieces of channel state information are combined in sequence).
Claim 12 is rejected in the same scope as claim 3.
Claim 13 is rejected in the same scope as claim 4.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Jang (US PG Pub. No. 2013/0343327).
As per claim 2:
Li teaches the method of claim 1 with the exception of:
wherein encoding the categorized partitioned CSI further comprises encoding the categorized partitioned CSI irrespective of a size of the partitioned CSI.
Jang teaches wherein encoding the categorized partitioned CSI further comprises encoding the categorized partitioned CSI irrespective of a size of the partitioned CSI (see paragraphs [0225]-[0226]: for a dual RM encoder, O CQI information bits may be allocated to first and second RM encoders; paragraph [0243] also discloses that the UE may use the dual RM encoder regardless of the size of the CQI payload).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the dual RM encoders for encoding CQI information bits (as disclosed in Jang) into Li as a way of encoding information of arbitrary size (please see paragraphs [0147] and [0243] of Jang).
Claim 11 is rejected in the same scope as claim 2.
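For illustration only (not part of the record), the size-independent dual-encoder allocation cited from Jang can be pictured as dividing the CQI payload between two encoders whatever its length. The sketch below does not reproduce the actual 3GPP segmentation rule or RM coding; the split rule and names are assumptions made purely to illustrate the idea.

```python
def split_cqi_bits(bits):
    """Divide a CQI payload between a first and a second encoder input.

    Hypothetical sketch: the first encoder receives the ceiling half of
    the bits and the second receives the remainder, so the split works
    for any payload size.
    """
    half = (len(bits) + 1) // 2  # ceiling half goes to the first encoder
    return bits[:half], bits[half:]

# 7 information bits: 4 go to the first encoder, 3 to the second.
first, second = split_cqi_bits([1, 0, 1, 1, 0, 0, 1])
```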
5. Claims 5, 6, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Zhang (US PG Pub. No. 2025/0220480).
As per claim 5:
Li teaches the method of claim 1 with the exception of:
wherein encoding the categorized partitioned CSI further comprises:
obtaining a universal encoding block;
determining at least one characteristic of the partitioned CSI included in the universal encoding block corresponding to a maximum latent vector size;
and encoding the partitioned CSI to obtain encoded data having a length corresponding to a latent vector size.
Zhang teaches wherein encoding the categorized partitioned CSI (see paragraph [0037], discloses separately encoding part 1 – RI, CQI, indication of compression ratio, and/or quantization levels and part 2 – containing the compressed maximum eigen vector(s) of the AI/ML compressed CSI) further comprises:
obtaining a universal encoding block (see paragraph [0035], discloses the input to the auto-encoder is the eigen vector corresponding to a maximum eigen vector value after channel matrix decomposition);
determining at least one characteristic of the partitioned CSI included in the universal encoding block corresponding to a maximum latent vector size (see paragraph [0036], the size of the compressed (maximum) eigen vector(s) is indicated by the RI (rank indicator) and/or quantization level and/or compression ratio which are contained in part 1 CSI);
and encoding the partitioned CSI to obtain encoded data having a length corresponding to a latent vector size (see paragraph [0041], the output of the encoder yields separately encoded CSI (i.e., part 1 and part 2). The output data size corresponds to the compression ratio).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the eigen vector(s) (as disclosed in Zhang) into Li as a way of achieving compression of the CSI into multiple parts (please see paragraph [0040] of Zhang). Implementing such an encoding method helps to reduce system overhead, provide good communication performance, and/or provide high reliability (please see paragraph [0003] of Zhang).
As per claim 6:
Li in view of Zhang teaches the method of claim 5.
Li does not clearly teach wherein the maximum latent vector size is a maximum latent vector size capable of being encoded.
Zhang teaches wherein the maximum latent vector size is a maximum latent vector size capable of being encoded (paragraphs [0036], [0037] disclose the size of the compressed eigen vectors corresponds to the separately encoded part 1 and 2 CSI. Therefore, the vector size is capable of being encoded per the AI/ML model).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the eigen vector(s) (as disclosed in Zhang) into Li as a way of achieving compression of the CSI into multiple parts (please see paragraph [0040] of Zhang). Implementing such an encoding method helps to reduce system overhead, provide good communication performance, and/or provide high reliability (please see paragraph [0003] of Zhang).
Claim 14 is rejected in the same scope as claim 5.
Claim 15 is rejected in the same scope as claim 6.
6. Claims 7, 8, 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Zhang and further in view of Jin (US PG Pub. No. 2025/0045587).
As per claim 7:
Li in view of Zhang teaches the method of claim 5 with the exception of:
further comprising:
calculating a masking layer based on the latent vector size by bypassing one or more elements positioned towards a front end of a vector output from the universal encoding block, and setting a remaining number of elements included in the vector to zero,
wherein the partitioned CSI is encoded using the masking layer.
Jin teaches further comprising:
calculating a masking layer based on the latent vector size by bypassing one or more elements positioned towards a front end of a vector output from the universal encoding block (see paragraph [0228], for a first order vector/tensor, the hard-masking M=[1, 1, …, 1, 0, 0, …, 0] is a vector having first t bits and being 1 and last LM-t bits being 0), and setting a remaining number of elements included in the vector to zero (as explained earlier in paragraph [0228, the last LM-t bits are set to 0), wherein the partitioned CSI is encoded using the masking layer (see paragraph [0248], discloses random hard-masking module is added between the encoder and decoder. The parameter, t ∈ [0, 8192], of the mask tensor obeys uniform distribution and Adam optimizer is used to train the autoencoder neural network).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate masking of the tensor/vector (as disclosed in Jin) into both Li and Zhang as a way of performing dimension transformation on a tensor input to the tensor transformation layer (please see paragraph [0136] of Jin). Implementing such a training model results in low storage overhead, easy deployment, and continuous evolution of online training (please see paragraph [0168] of Jin).
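For illustration only (not part of the record), the hard mask M = [1, …, 1, 0, …, 0] cited from Jin paragraph [0228] passes the first t elements of the latent vector through unchanged and zeroes the remaining LM-t elements; the function and argument names below are assumptions.

```python
import numpy as np

def hard_mask(latent, t):
    """Apply a hard mask of the form [1,...,1, 0,...,0] to a latent vector:
    the first t elements are bypassed (multiplied by 1) and the last
    len(latent) - t elements are set to zero."""
    mask = np.zeros_like(latent)
    mask[:t] = 1.0
    return latent * mask

z = np.arange(1, 9, dtype=float)  # latent vector of length L_M = 8
masked = hard_mask(z, t=5)        # -> [1, 2, 3, 4, 5, 0, 0, 0]
```

Varying t during training (uniformly over [0, L_M], as in Jin's paragraph [0248]) is what lets a single encoder serve multiple effective latent sizes, and hence multiple compression ratios.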
As per claim 8:
Li in view of Zhang teaches the method of claim 5 with the exception of:
further comprising:
calculating a masking layer based on the latent vector size, wherein the partitioned CSI is encoded using the masking layer, and wherein the masking layer is configured to support CSI having a plurality of different compression ratios.
Jin teaches further comprising:
calculating a masking layer based on the latent vector size, wherein the partitioned CSI is encoded using the masking layer (see paragraph [0228]: for a first-order vector/tensor, the hard mask M=[1, 1, …, 1, 0, 0, …, 0] is a vector having its first t bits set to 1 and its last LM-t bits set to 0), and wherein the masking layer is configured to support CSI having a plurality of different compression ratios (see paragraph [0249]: the possible compression ratios are 5/6, 11/12, 23/24 and 47/48; during training, the parameter t ∈ {8192, 4096, 2028, 1024} of the hard-masking module obeys an equal probability distribution and acts on the output of the encoder).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate masking of the tensor/vector (as disclosed in Jin) into both Li and Zhang as a way of performing dimension transformation on a tensor input to the tensor transformation layer (please see paragraph [0136] of Jin). Implementing such a training model results in low storage overhead, easy deployment, and continuous evolution of online training (please see paragraph [0168] of Jin).
Claim 16 is rejected in the same scope as claim 7.
Claim 17 is rejected in the same scope as claim 8.
7. Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of NPL (titled "SpotTune: Transfer Learning through Adaptive Fine-Tuning," November 21, 2020).
As per claim 9:
Li teaches the method of claim 1 with the exception of:
further comprising:
training at least one parameter over all compression ratios;
freezing the at least one parameter over all the compression ratios corresponding to an output node;
and fine-tuning a second parameter linked to the output node.
NPL teaches further comprising:
training at least one parameter over all compression ratios (please see page 3, Col. 1, under 3.1 SpotTune Overview, which discloses freezing the original block and creating a new trainable block which is initialized with parameters);
freezing the at least one parameter over all the compression ratios corresponding to an output node (please see page 3, Col. 1, under 3.1 SpotTune Overview: during training, given an input image x, the frozen block Fl trained on the source task is left unchanged, and the replicated block, which is initialized from Fl, can be optimized towards the target dataset);
and fine-tuning a second parameter linked to the output node (please see page 3, second column: when Il(x)=1, the l-th residual block is fine-tuned by optimizing Fl).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate the teachings of the NPL into Li. The motivation for doing so would be to improve accuracy, instead of dropping layers to improve efficiency (please see page 3, Col 1, first paragraph of NPL).
Claim 18 is rejected in the same scope as claim 9.
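For illustration only (not part of the record or of the NPL), the freeze-and-fine-tune mechanism mapped above can be sketched with a toy two-parameter model: one parameter is frozen (never updated) while only the parameter linked to the output is tuned by gradient steps. All names, the model, and the learning rate are assumptions.

```python
def finetune_step(frozen_w, tunable_w, x, target, lr=0.1):
    """One illustrative fine-tuning step: the frozen parameter is returned
    unchanged; only the tunable output parameter moves down the gradient
    of a squared error on this toy linear model."""
    pred = tunable_w * (frozen_w * x)          # frozen block feeds the output node
    grad = 2.0 * (pred - target) * frozen_w * x  # d(error^2)/d(tunable_w)
    return frozen_w, tunable_w - lr * grad

w_frozen, w_tuned = 1.0, 0.5
for _ in range(50):
    w_frozen, w_tuned = finetune_step(w_frozen, w_tuned, x=1.0, target=2.0)
# w_frozen stays 1.0; w_tuned converges toward 2.0
```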
8. Claims 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Jin.
As per claim 19:
Zhang teaches a method performed by an electronic device (see abstract), comprising:
obtaining a universal encoding block (see paragraph [0035], discloses the input to the auto-encoder is the eigen vector corresponding to a maximum eigen vector value after channel matrix decomposition);
determining at least one characteristic of channel state information (CSI) included in the universal encoding block corresponding to a latent vector size (see paragraph [0036], the size of the compressed (maximum) eigen vector(s) is indicated by the RI (rank indicator) and/or quantization level and/or compression ratio which are contained in part 1 CSI).
Zhang does not teach calculating a masking layer based on the latent vector size;
and encoding the CSI based on the masking layer to obtain encoded data having a length corresponding to the latent vector size.
Jin teaches calculating a masking layer based on the latent vector size (see paragraph [0228], for a first order vector/tensor, the hard-masking M=[1, 1, …, 1, 0, 0, …, 0] is a vector having first t bits and being 1 and last LM-t bits being 0);
and encoding the CSI based on the masking layer to obtain encoded data having a length corresponding to the latent vector size (see paragraph [0248], discloses random hard-masking module is added between the encoder and decoder. The parameter, t ∈ [0, 8192], of the mask tensor obeys uniform distribution and Adam optimizer is used to train the autoencoder neural network).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate masking of the tensor/vector (as disclosed in Jin) into Zhang as a way of performing dimension transformation on a tensor input to the tensor transformation layer (please see paragraph [0136] of Jin). Implementing such a training model results in low storage overhead, easy deployment, and continuous evolution of online training (please see paragraph [0168] of Jin).
As per claim 20:
Zhang teaches an electronic device (see Figure 6, system 700), comprising:
a memory device (see Figure 6, memory/storage 740),
and a processor configured to execute instructions stored on the memory device (see paragraph [0088], processor for executing instructions stored in memory), wherein the instructions cause the processor to:
obtain a universal encoding block (see paragraph [0035], discloses the input to the auto-encoder is the eigen vector corresponding to a maximum eigen vector value after channel matrix decomposition);
determine at least one characteristic of channel state information (CSI) included in the universal encoding block corresponding to a latent vector size (see paragraph [0036], the size of the compressed (maximum) eigen vector(s) is indicated by the RI (rank indicator) and/or quantization level and/or compression ratio which are contained in part 1 CSI).
Zhang does not teach calculate a masking layer based on the latent vector size;
and encode the CSI based on the masking layer to obtain encoded data having a length corresponding to the latent vector size.
Jin teaches calculate a masking layer based on the latent vector size (see paragraph [0228], for a first order vector/tensor, the hard-masking M=[1, 1, …, 1, 0, 0, …, 0] is a vector having first t bits and being 1 and last LM-t bits being 0);
and encode the CSI based on the masking layer to obtain encoded data having a length corresponding to the latent vector size (see paragraph [0248], discloses random hard-masking module is added between the encoder and decoder. The parameter, t ∈ [0, 8192], of the mask tensor obeys uniform distribution and Adam optimizer is used to train the autoencoder neural network).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to incorporate masking of the tensor/vector (as disclosed in Jin) into Zhang as a way of performing dimension transformation on a tensor input to the tensor transformation layer (please see paragraph [0136] of Jin). Implementing such a training model results in low storage overhead, easy deployment, and continuous evolution of online training (please see paragraph [0168] of Jin).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PRINCE AKWASI MENSAH whose telephone number is (571)270-7183. The examiner can normally be reached Mon-Fri 8:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MICHAEL THIER can be reached at 571-272-2832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
PRINCE AKWASI MENSAH
Examiner
Art Unit 2474
/PRINCE A MENSAH/Examiner, Art Unit 2474
/HABTE MERED/Primary Examiner, Art Unit 2474