Prosecution Insights
Last updated: April 19, 2026
Application No. 18/963,111

UNIFIED SECONDARY TRANSFORM

Non-Final OA (§103, §DP)
Filed: Nov 27, 2024
Examiner: HANSELL JR., RICHARD A
Art Unit: 2486
Tech Center: 2400 - Computer Networks
Assignee: Tencent America LLC
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 368 granted / 487 resolved; +17.6% vs TC avg)
Interview Lift: +28.1% (strong), measured across resolved cases with interview
Typical Timeline: 2y 10m avg prosecution; 45 applications currently pending
Career History: 532 total applications across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 18.0% (-22.0% vs TC avg)

Tech Center average is an estimate. Based on career data from 487 resolved cases.
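The headline figures above can be reproduced from the stated counts and deltas. A quick sketch (variable names and rounding conventions are assumptions, not the dashboard's actual method); notably, all four per-statute deltas back out to the same 40.0% Tech Center baseline:

```python
# Reproduce the dashboard arithmetic from the stated figures (illustrative
# sketch; rounding conventions are assumed, not taken from the dashboard).

granted, resolved = 368, 487  # career counts stated above

# Career allow rate: 368 / 487 ≈ 75.6%, displayed rounded as 76%
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # Career allow rate: 75.6%

# Implied Tech Center averages, backed out from the per-statute deltas
# (e.g. §103: 52.1% examiner rate at +12.1% vs TC avg -> TC avg 40.0%)
statute_rates = {"§101": (3.2, -36.8), "§103": (52.1, 12.1),
                 "§102": (10.3, -29.7), "§112": (18.0, -22.0)}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in statute_rates.items()}
print(tc_avg)  # {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}
```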

Office Action (§103, §DP)
DETAILED ACTION

1. The communication is in response to the application received on 11/27/2024, wherein claims 2-21 are pending and are examined as follows. Please note, claim 1 was previously canceled. The Instant Application is a continuation of 17/514,911 (now U.S. Patent No. 12,200,250), which is a continuation of 17/497,511 (now U.S. Patent No. 11,979,603), which is a continuation of 16/889,738 (now U.S. Patent No. 11,218,728).

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

3. The information disclosure statements (IDS) were submitted on 03/19/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Double Patenting

4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 2-5, 12-15, and 21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 4-7 of U.S. Patent No. 11,979,603, hereinafter referred to as 603. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of 603 are similarly related to determining a secondary transform core based on prediction information, namely a secondary transform index, and thus anticipate the features found in the instant claims for decoding video. A table is provided below to illustrate the mapping between the claim sets.
Claims 2, 4-5, 12, 14, 15, and 21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 4-6 of U.S. Patent No. 11,218,728 B2, hereinafter referred to as 728, in view of Nalci et al. US 2020/0404276 A1 (with reference to Provisional application No. 62/864,939), hereinafter referred to as Nalci. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of 728 to include the work of Nalci for reducing the signaling overhead of LFNST related indices/flags for improving the coding efficiency of the CABAC engine used in video compression standards (¶0005). Please refer to the table below, which illustrates the mapping between the claim sets. Also presented is the obviousness rationale.

Claims 2, 3, 12, 13, and 21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6 and 8 of U.S. Patent No. 12,368,887 B2, hereinafter referred to as 887, in view of Nalci. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of 887 to include the work of Nalci for reducing the signaling overhead of LFNST related indices/flags for improving the coding efficiency of the CABAC engine used in video compression standards (¶0005). Please refer to the table below, which illustrates the mapping between the claim sets. Also presented is the obviousness rationale.

Claims 2-4, 12-14, and 21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 11 and 14-16 of U.S. Patent No. 12,200,250 B2, hereinafter referred to as 250.
Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of 250 are similarly related to determining a secondary transform core based on prediction information, namely a secondary transform index, and thus anticipate the features found in the instant claims for decoding video. A table is provided below to illustrate the mapping between the claim sets.

Claims 2, 12, and 21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 2 of copending Application No. 18/958,399, hereinafter referred to as 399, in view of Nalci. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of 399 to include the work of Nalci for reducing the signaling overhead of LFNST related indices/flags for improving the coding efficiency of the CABAC engine used in video compression standards (¶0005). Please refer to the table below, which illustrates the mapping between the claim sets. Also presented is the obviousness rationale. This is a provisional nonstatutory double patenting rejection.

Claims 2-3, 12-13, and 21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 12 of copending Application No. 19/256,144, hereinafter referred to as 144, in view of Nalci. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of 144 to include the work of Nalci for reducing the signaling overhead of LFNST related indices/flags for improving the coding efficiency of the CABAC engine used in video compression standards (¶0005). Please refer to the table below, which illustrates the mapping between the claim sets. Also presented is the obviousness rationale. This is a provisional nonstatutory double patenting rejection.
Note: Items below in bold/underline in the Instant Application / patented claims, respectively, indicate differences in the claim limitations.

Instant Application 18/963,111 vs. US Patent No. 11,979,603 B2

Instant Claim 2: A method of video decoding, the method comprising: obtaining prediction information for a current block in a current picture that is part of a coded video bitstream, the prediction information indicating that a secondary transform is used to code the current block and including a syntax element [See claim 4 of 603 for support] that indicates a secondary transform index of a secondary transform core; determining the secondary transform core based at least on the secondary transform index; applying the secondary transform core to a secondary transform coefficient block to generate a primary transform coefficient block, the secondary transform coefficient block being generated by de-quantizing transform coefficients in the prediction information; and reconstructing the current block based on the primary transform coefficient block, wherein the syntax element indicating the secondary transform index is signaled at a transform block level for the secondary transform coefficient block, and the secondary transform coefficient block is of a single color component in a transform unit (TU). [See claims 4-5 of 603 for support]

603 Claim 1: A method for video decoding in a decoder, comprising: decoding prediction information for a current block in a current picture that is a part of a coded video sequence, the prediction information indicating an intra prediction mode and a secondary transform index for the current block; determining a secondary transform core based on the intra prediction mode and the secondary transform index, the secondary transform core having a size of M×N; de-quantizing transform coefficients from the prediction information to generate a secondary transform coefficient block having a size of W×H, wherein one of H or W is less than both M and N; applying the secondary transform core having the size of M×N to the secondary transform coefficient block having the size of W×H by applying a sub-section of size W×H of the secondary transform core to the secondary transform coefficient block and generating a W×H primary transform coefficient block; and reconstructing the current block based on the primary transform coefficient block.

603 Claim 4: The method of claim 1, wherein syntax elements of the secondary transform coefficient block include a syntax element that indicates the secondary transform index.

603 Claim 5: The method of claim 4, wherein the syntax element indicating the secondary transform index is signaled in a transform block level and is determined based on a color component of the current block.

Instant Claim 3: The method of claim 2, wherein the secondary transform core has a size of M×N, the primary transform coefficient block has a size of W×H, and the secondary transform coefficient block has the size of W×H, and one of H and W is less than both M and N. [See Claim 1 of 603]

Instant Claim 4: The method of claim 2, wherein the secondary transform index is signaled after a last non-zero transform coefficient of the secondary transform coefficient block and before one or more of syntax elements related to coefficient coding of the secondary transform coefficient block.

603 Claim 6: The method of claim 4, wherein the secondary transform index is signaled after a last non-zero transform coefficient of the secondary transform coefficient block and before one or more of syntax elements related to coefficient coding of the secondary transform coefficient block.

Instant Claim 5: The method of claim 2, wherein whether one or more of syntax elements associated with coding a transform coefficient in the secondary transform coefficient block are signaled is determined based on the secondary transform index and a position of the transform coefficient in the secondary transform coefficient block.

603 Claim 7: The method of claim 4, wherein whether one of the syntax elements is signaled is dependent on the secondary transform index and a transform coefficient associated with the one of the syntax elements.

Instant Claim 12: Similar to instant claim 2. [See Claims 1, 4, and 5 of 603]
Instant Claim 13: Similar to instant claim 3. [See Claim 1 of 603]
Instant Claim 14: Similar to instant claim 4. [See Claim 6 of 603]
Instant Claim 15: Similar to instant claim 5. [See Claim 7 of 603]
Instant Claim 21: Similar to instant claim 2. [See Claims 1, 4, and 5 of 603]

Instant Application 18/963,111 vs. US Patent No. 11,218,728 B2

Instant Claim 2: A method of video decoding, the method comprising: obtaining prediction information for a current block in a current picture that is part of a coded video bitstream, the prediction information indicating that a secondary transform is used to code the current block and including a syntax element that indicates a secondary transform index of a secondary transform core; determining the secondary transform core based at least on the secondary transform index; applying the secondary transform core to a secondary transform coefficient block to generate a primary transform coefficient block, the secondary transform coefficient block being generated by de-quantizing transform coefficients in the prediction information; and reconstructing the current block based on the primary transform coefficient block, wherein the syntax element indicating the secondary transform index is signaled at a transform block level for the secondary transform coefficient block, and the secondary transform coefficient block is of a single color component in a transform unit (TU).
[Note: see Nalci for support]

728 Claim 1: A method for video decoding in a decoder, comprising: decoding prediction information for a current block in a current picture that is a part of a coded video sequence, the prediction information indicating a first intra prediction mode and a secondary transform index for the current block; determining a secondary transform core based on the first intra prediction mode and the secondary transform index; generating a primary transform coefficient block based on the secondary transform core and a first transform coefficient block of the current block, the first transform coefficient block being de-quantized from the prediction information, and a size of the first transform coefficient block being less than a size of the secondary transform core; determining whether to transpose the primary transform coefficient block based on a type of one-dimensional cross-component linear model; transposing the primary transform coefficient block based on a determination that the primary transform coefficient block is to be transposed; and reconstructing the current block based on the transposed primary transform coefficient block.

Instant Claim 4: The method of claim 2, wherein the secondary transform index is signaled after a last non-zero transform coefficient of the secondary transform coefficient block and before one or more of syntax elements related to coefficient coding of the secondary transform coefficient block.

728 Claim 5: The method of claim 4, wherein the secondary transform index is signaled after a last non-zero transform coefficient of the first transform coefficient block and before one or more of the syntax elements related to coefficient coding of the first transform coefficient block.

Instant Claim 5: The method of claim 2, wherein whether one or more of syntax elements associated with coding a transform coefficient in the secondary transform coefficient block are signaled is determined based on the secondary transform index and a position of the transform coefficient in the secondary transform coefficient block.

728 Claim 4: The method of claim 1, wherein syntax elements of the first transform coefficient block include a syntax element that indicates the secondary transform index.

728 Claim 6: The method of claim 4, wherein whether one of the syntax elements is signaled is dependent on the secondary transform index and a transform coefficient associated with the one of the syntax elements.

Instant Claim 12: Similar to instant claim 2. [See Claim 1 of 728]
Instant Claim 14: Similar to instant claim 4. [See Claim 5 of 728]
Instant Claim 15: Similar to instant claim 5. [See Claims 4 and 6 of 728]
Instant Claim 21: Similar to instant claim 2. [See Claim 1 of 728]

Instant Application 18/963,111 vs. US Patent No. 12,368,887 B2

Instant Claim 2: A method of video decoding, the method comprising: obtaining prediction information for a current block in a current picture that is part of a coded video bitstream, the prediction information indicating that a secondary transform is used to code the current block and including a syntax element that indicates a secondary transform index of a secondary transform core; determining the secondary transform core based at least on the secondary transform index; applying the secondary transform core to a secondary transform coefficient block to generate a primary transform coefficient block, the secondary transform coefficient block being generated by de-quantizing transform coefficients in the prediction information; and reconstructing the current block based on the primary transform coefficient block, wherein the syntax element indicating the secondary transform index is signaled at a transform block level for the secondary transform coefficient block, and the secondary transform coefficient block is of a single color component in a transform unit (TU). [Note: see Nalci for support]

887 Claim 6: An apparatus for video decoding, comprising: processing circuitry configured to: decode prediction information for a current block in a current picture that is a part of a coded video sequence, the prediction information indicating an intra prediction mode and a secondary transform index for the current block; select a secondary transform core based on the intra prediction mode and the secondary transform index, the secondary transform core having a size of M×N; de-quantize transform coefficients from the prediction information to generate a secondary transform coefficient block; apply the secondary transform core having the size of M×N to the secondary transform coefficient block by applying a sub-section of the secondary transform core to the secondary transform coefficient block and generate a W×H primary transform coefficient block, wherein one of H or W is less than both M and N; and reconstruct the current block based on the primary transform coefficient block.

887 Claim 8: The apparatus of claim 6, wherein syntax elements of the secondary transform coefficient block include a syntax element that indicates the secondary transform index.

Instant Claim 3: The method of claim 2, wherein the secondary transform core has a size of M×N, the primary transform coefficient block has a size of W×H, and the secondary transform coefficient block has the size of W×H, and one of H and W is less than both M and N. [See claim 6 of 887]

Instant Claim 12: Similar to instant claim 2. [See claims 6 and 8 of 887]
Instant Claim 13: Similar to instant claim 3. [See claim 6 of 887]
Instant Claim 21: Similar to instant claim 2. [See claims 6 and 8 of 887]

Instant Application 18/963,111 vs. US Patent No. 12,200,250 B2

Instant Claim 2: A method of video decoding, the method comprising: obtaining prediction information for a current block in a current picture that is part of a coded video bitstream, the prediction information indicating that a secondary transform is used to code the current block and including a syntax element that indicates a secondary transform index of a secondary transform core [See claim 14 of 250]; determining the secondary transform core based at least on the secondary transform index; applying the secondary transform core to a secondary transform coefficient block to generate a primary transform coefficient block, the secondary transform coefficient block being generated by de-quantizing transform coefficients in the prediction information; and reconstructing the current block based on the primary transform coefficient block, wherein the syntax element indicating the secondary transform index is signaled at a transform block level for the secondary transform coefficient block, and the secondary transform coefficient block is of a single color component in a transform unit (TU).

250 Claim 11: An apparatus for video decoding, comprising: processing circuitry configured to: acquire prediction information for a current block in a current picture that is part of a coded video bitstream, the prediction information indicating that a secondary transform is used for coding the current block; apply a secondary transform core having a size of M×N to a secondary transform coefficient block to generate a W×H primary transform coefficient block, the secondary transform coefficient block being generated by de-quantizing transform coefficients in the prediction information, and the secondary transform coefficient block having a size of W×H, one of H or W being less than both M and N; and reconstruct the current block based on the primary transform coefficient block and based on a primary transform core.
250 Claim 14: The apparatus of claim 11, wherein the prediction information includes a syntax element that indicates a secondary transform index of the secondary transform core.

250 Claim 15: The apparatus of claim 14, wherein the syntax element indicating the secondary transform index is signaled in a transform block level and is determined based on a color component of the current block.

Instant Claim 3: The method of claim 2, wherein the secondary transform core has a size of M×N, the primary transform coefficient block has a size of W×H, and the secondary transform coefficient block has the size of W×H, and one of H and W is less than both M and N. [See claim 11 of 250]

Instant Claim 4: The method of claim 2, wherein the secondary transform index is signaled after a last non-zero transform coefficient of the secondary transform coefficient block and before one or more of syntax elements related to coefficient coding of the secondary transform coefficient block.

250 Claim 16: The apparatus of claim 11, wherein a syntax element indicating one or more primary transform cores for the current block is signaled after a last non-zero transform coefficient of the secondary transform coefficient block and before one or more syntax elements related to coefficient coding of the secondary transform coefficient block.

Instant Claim 12: Similar to instant claim 2. [See claims 11, 14, and 15 of 250]
Instant Claim 13: Similar to instant claim 3. [See claim 11 of 250]
Instant Claim 14: Similar to instant claim 4. [See claim 16 of 250]
Instant Claim 21: Similar to instant claim 2. [See claims 11, 14, and 15 of 250]

Instant Application 18/963,111 vs. Co-pending Application 18/958,399

Instant Claim 2: A method of video decoding, the method comprising: obtaining prediction information for a current block in a current picture that is part of a coded video bitstream, the prediction information indicating that a secondary transform is used to code the current block and including a syntax element that indicates a secondary transform index of a secondary transform core; determining the secondary transform core based at least on the secondary transform index; applying the secondary transform core to a secondary transform coefficient block to generate a primary transform coefficient block, the secondary transform coefficient block being generated by de-quantizing transform coefficients in the prediction information; and reconstructing the current block based on the primary transform coefficient block, wherein the syntax element indicating the secondary transform index is signaled at a transform block level for the secondary transform coefficient block, and the secondary transform coefficient block is of a single color component in a transform unit (TU). [See Nalci for support]

Note 1: Regarding "applying the secondary transform core to a secondary transform coefficient block to generate a primary transform coefficient block, the secondary transform coefficient block being generated by de-quantizing transform coefficients in the prediction information", this is deemed within the level of skill in the art when applying secondary transforms during video decoding operations.
399 Claim 2: A method for video decoding in a decoder, comprising: decoding prediction information for a current block in a current picture of a coded video bitstream, the prediction information indicating an intra prediction mode for the current block and a secondary transform index for the current block; determining a secondary transform core based on the intra prediction mode and the secondary transform index; and reconstructing the current block based on the determined secondary transform core.

Instant Claim 12: Similar to instant claim 2. [See claim 2 of 399 and Nalci]
Instant Claim 21: Similar to instant claim 2. [See claim 2 of 399 and Nalci]

Instant Application 18/963,111 vs. Co-pending Application 19/256,144

Instant Claim 2: A method of video decoding, the method comprising: obtaining prediction information for a current block in a current picture that is part of a coded video bitstream, the prediction information indicating that a secondary transform is used to code the current block and including a syntax element that indicates a secondary transform index of a secondary transform core; determining the secondary transform core based at least on the secondary transform index; applying the secondary transform core to a secondary transform coefficient block to generate a primary transform coefficient block, the secondary transform coefficient block being generated by de-quantizing transform coefficients in the prediction information; and reconstructing the current block based on the primary transform coefficient block, wherein the syntax element indicating the secondary transform index is signaled at a transform block level for the secondary transform coefficient block, and the secondary transform coefficient block is of a single color component in a transform unit (TU). [See Nalci for support]

144 Claim 12: A video decoding method, the video decoding method comprising: decoding prediction information for a current block in a current picture that is a part of a coded video sequence, the prediction information indicating an intra prediction mode and a secondary transform index for the current block; selecting a secondary transform core based on the intra prediction mode and the secondary transform index, the secondary transform core having a size of M×N; de-quantizing transform coefficients from the prediction information to generate a secondary transform coefficient block; applying the secondary transform core having the size of M×N to the secondary transform coefficient block by applying a sub-section of the secondary transform core to the secondary transform coefficient block and generate a W×H primary transform coefficient block, wherein one of H or W is less than both M and N; and reconstructing the current block based on the primary transform coefficient block.

Instant Claim 3: The method of claim 2, wherein the secondary transform core has a size of M×N, the primary transform coefficient block has a size of W×H, and the secondary transform coefficient block has the size of W×H, and one of H and W is less than both M and N. [See claim 12 of 144]

Instant Claim 12: Similar to instant claim 2. [See claim 12 of 144 and Nalci]
Instant Claim 13: Similar to instant claim 3. [See claim 12 of 144]
Instant Claim 21: Similar to instant claim 2. [See claim 12 of 144 and Nalci]

Obviousness Rationale: Regarding claims 2, 12, and 21, these all recite the limitation "wherein the syntax element indicating the secondary transform index is signaled at a transform block level for the secondary transform coefficient block, and the secondary transform coefficient block is of a single color component in a transform unit (TU)", which is not supported in 728, 887, 399, and 144. However, these features are disclosed and/or suggested in Nalci. Please refer to abstract, ¶0006-¶0012, and fig.
16, with respect to an LFNST index being signaled at the TU level. Further, Nalci's teachings also show said TU will have a given color component. As such, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosures of 728, 887, 399, and 144 to include the teachings of Nalci for reducing the signaling overhead of LFNST related indices/flags for improving the coding efficiency of the CABAC engine used in video compression standards (¶0005).

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2, 12, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. US 10,491,922 B2, in view of Chiang et al. US 2018/0302631 A1, and in further view of Nalci et al. US 2020/0404276 A1 (with reference to Provisional application No. 62/864,939), hereinafter referred to as Zhao, Chiang, and Nalci, respectively.

Regarding claim 2 (New): Given the broadest reasonable interpretation (BRI) of the following limitations, Zhao discloses and/or suggests "A method of video decoding [See video decoder 30 in fig. 9, with details of inverse transform unit 278 depicted in fig. 10B], the method comprising: obtaining prediction information for a current block in a current picture that is part of a coded video bitstream [Zhao describes coded/decoded information (col. 7 lines 59-67, col. 30 lines 29-31, and col. 40 lines 63-65). This includes intra-prediction modes. Also see fig.
9 regarding decoded syntax elements used for motion compensation and intra prediction], the prediction information indicating that a secondary transform is used to code the current block and including a syntax element that indicates a secondary transform index of a secondary transform core [Given the BRI, the claimed “secondary transform index” can be interpreted as the transform set index of Zhao (fig. 7A). Also see col. 36 lines 61-67 and col. 37 lines 34-36, with respect to a syntax element for indicating an index that facilitates selecting a first inverse transform from a subset of non-separable transforms]; determining the secondary transform core based at least on the secondary transform index [Video decoder 30 can select the secondary transform (i.e. core) once the transform set has been determined via the transform set index which corresponds with the luma intra mode (fig 7A)]; applying the secondary transform core to a secondary transform coefficient block to generate a primary transform coefficient block, the secondary transform coefficient block being generated by de-quantizing transform coefficients in the prediction information [Although deemed within the level of skill in the art when coding video via secondary transforms, Zhao does not appear to explicitly address these features. Please see Chiang below for support]; and reconstructing the current block based on the primary transform coefficient block [See fig. 11. A first inverse transform (corresponds to above secondary transform) operates on inverse quantized first coefficient block to generate a second coefficient block from which the decoded video can be reconstructed as shown], wherein the syntax element indicating the secondary transform index is signaled at a transform block level for the secondary transform coefficient block, and the secondary transform coefficient block is of a single color component in a transform unit (TU).” [However Zhao does not address the aforementioned features. 
Please refer to Nalci below for support]

Although Zhao does not explicitly address “applying the secondary transform core to a secondary transform coefficient block to generate a primary transform coefficient block, the secondary transform coefficient block being generated by de-quantizing transform coefficients in the prediction information”, this is deemed within the level of skill in the art. Nonetheless, in the spirit of compact prosecution, Chiang, from the same or similar field of endeavor, is relied on to teach and/or suggest these features. [See, e.g., figs. 7 and 10 for support] In light of Chiang’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the video coding techniques of Zhao for non-separable secondary transforms to include the teachings of Chiang as above for signaling selections of transform operations (¶0002).

Although Zhao and Chiang are both deemed relevant art, they do not address the last limitations of claim 2. Nalci, on the other hand, from the same or similar field of endeavor, is relied on to teach and/or suggest “wherein the syntax element indicating the secondary transform index is signaled at a transform block level for the secondary transform coefficient block, and the secondary transform coefficient block is of a single color component in a transform unit (TU).” [See abstract, ¶0006-¶0012, and fig. 16, with respect to an LFNST index being signaled at the TU level.
Further, Nalci’s teachings show said TU will have a given color component] In light of Nalci’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the video coding techniques of Zhao and Chiang for non-separable secondary transforms to include the teachings of Nalci as above for reducing the signaling overhead of LFNST-related indices/flags, improving the coding efficiency of the CABAC engine used in video compression standards (¶0005).

Regarding claim 12: Claim 12 is rejected under the same art and evidentiary limitations as determined for the method of claim 2, since encoding and decoding are inverse operations that allow compressed video to be decoded and reconstructed at a receiving device. See, e.g., fig. 1 of Zhao; also refer to figs. 8 and 9, which illustrate video encoder 20 and video decoder 30, respectively.

Regarding claim 21: Claim 21 is rejected under the same art and evidentiary limitations as determined for the method of claim 2.

Allowable Subject Matter

6. Claims 3-11 and 13-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. In light of the specification, the Examiner finds the claimed invention to be patentably distinct from the prior art of record. The prior art of record, taken individually or in combination, fails to explicitly teach or render obvious, within the context of the respective independent claims, the limitations of claims 3-11 and 13-20.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please refer to form PTO-892 for additional references. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD A HANSELL JR., whose telephone number is (571) 270-0615.
The examiner can normally be reached Mon-Fri, 10 am - 7 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala, can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/RICHARD A HANSELL JR./
Primary Examiner, Art Unit 2486
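For readers mapping the claim language to an implementation, the decode-side flow recited in claim 2 (de-quantize coefficient levels, select a secondary transform core by a signaled index, apply it to produce the primary transform coefficient block) can be sketched in Python. This is an illustrative toy only, not code from Zhao, Chiang, Nalci, or the application: the two-core transform set, the flat quantization step, and all function names are hypothetical stand-ins for the real non-separable LFNST cores.

```python
def dequantize(levels, qstep):
    """De-quantize parsed coefficient levels with a flat step size."""
    return [lv * qstep for lv in levels]

def select_core(transform_set, secondary_transform_index):
    """Pick a secondary transform core (an NxN matrix) by the signaled index."""
    return transform_set[secondary_transform_index]

def apply_core(core, coeffs):
    """Matrix-vector product: core applied to the flattened coefficient block."""
    n = len(core)
    return [sum(core[r][c] * coeffs[c] for c in range(n)) for r in range(n)]

# Hypothetical 2-core set over 4-point coefficient vectors: an identity core
# and a signed permutation, stand-ins for real non-separable cores.
TRANSFORM_SET = [
    [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
    [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, -1, 0]],
]

levels = [3, -1, 2, 0]                 # coefficient levels from the bitstream
secondary = dequantize(levels, 8)      # -> secondary transform coefficient block
core = select_core(TRANSFORM_SET, 1)   # index signaled at transform-block level
primary = apply_core(core, secondary)  # -> primary transform coefficient block
print(primary)                         # -> [-8, 24, 0, -16]
```

The primary coefficient block would then feed the inverse primary transform to reconstruct the residual, which falls outside this sketch.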

Prosecution Timeline

Nov 27, 2024
Application Filed
Feb 04, 2025
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §103, §DP
Mar 31, 2026
Applicant Interview (Telephonic)
Apr 03, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604042
LAYER INFORMATION SIGNALING-BASED IMAGE CODING DEVICE AND METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12604096
ADAPTIVE BORESCOPE INSPECTION
2y 5m to grant Granted Apr 14, 2026
Patent 12587660
METHOD FOR DECODING IMAGE ON BASIS OF IMAGE INFORMATION INCLUDING OLS DPB PARAMETER INDEX, AND APPARATUS THEREFOR
2y 5m to grant Granted Mar 24, 2026
Patent 12587667
SYSTEMS AND METHODS FOR SIGNALING TEXT DESCRIPTION INFORMATION IN VIDEO CODING
2y 5m to grant Granted Mar 24, 2026
Patent 12579871
CAMERA DETECTION OF OBJECT MOVEMENT WITH CO-OCCURRENCE
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+28.1%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 487 resolved cases by this examiner. Grant probability derived from career allow rate.
