Prosecution Insights
Last updated: April 19, 2026
Application No. 18/668,876

METHOD AND APPARATUS FOR SECONDARY TRANSFORM WITH ADAPTIVE KERNEL OPTIONS

Final Rejection §103
Filed: May 20, 2024
Examiner: KALAPODAS, DRAMOS
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Tencent America LLC
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79%, above average (562 granted / 713 resolved; +20.8% vs TC avg)
Interview Lift: +28.2% across resolved cases with an interview
Typical Timeline: 2y 5m average prosecution; 34 applications currently pending
Career History: 747 total applications across all art units

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 713 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement
2. The information disclosure statements (IDS) were submitted on 11/05/2025. The submissions are in compliance with the provisions of 37 CFR § 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Status
3. Claims 1-20 are currently pending.

Response to Arguments
4. Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection. This Office Action is brought to finality based on the art disclosed in the IDS provided.

Claim Objections
5. Claims 18-20 are objected to for informality due to missing their status markings, e.g., "(Original)"; their status is presently assumed based on their originally recited presentation.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application does not currently name joint inventors.

6. Claims 1-20 are rejected under 35 U.S.C. 103 as being obvious over Moonmo Koo et al. (hereinafter Koo) (US 11,425,421) and Louis Kerofsky et al. (hereinafter Kerofsky) (US 2022/0329819), in lieu of Prov. 63/173,879 and 63/176,804, and further in view of Seunghwan Kim et al. (hereinafter Kim) (US 2020/0177889).

Re Claim 1. (Currently amended) Koo discloses a method for decoding a video block in a video bitstream (Abstract), comprising: receiving a set of secondary transform coefficients associated with the video block (receiving, i.e., parsing, secondary transformation information, e.g., by index, at step S1620 in Fig.16, Col.68 Lin.44-45); determining an intra prediction mode associated with the video block (determining, i.e., by parsing at the decoder, the intra prediction mode index at S1610 in Fig.16, Col.68 Lin.41-42); and determining a first group of secondary transform kernels when the intra prediction mode is one of Vertical mode (V_PRED), Horizontal mode (H_PRED), Smooth horizontal mode (SMOOTH_H_PRED) and Smooth Vertical mode (SMOOTH_V_PRED), the first group of secondary transform kernels having N number of secondary transform kernels (for the non-directional prediction modes, determined at intra-predictor 222, at Col.7 Lin.52-56, Col.12 Lin.35-40, Col.18 Lin.4-13, used for the NSST transform set of k transform
kernels, at Col.18 Lin.1-18); determining a second group of secondary transform kernels when the intra prediction mode is not one of Vertical mode (V_PRED), Horizontal mode (H_PRED), Smooth horizontal mode (SMOOTH_H_PRED) and Smooth Vertical mode (SMOOTH_V_PRED), the second group of secondary transform kernels having K number of secondary transform kernels (determining a second group of secondary transforms for the directional modes at Col.7 Lin.37-64, i.e., not the non-directional modes, by determining a second group of secondary transform kernels per Col.18 Lin.1-8 and Lin.21-57 of secondary transforms group k, per Col.19 Lin.1-13); selecting a secondary transform kernel from the first or second group of secondary transform kernels based on at least the intra prediction mode (selecting the secondary transform of either the non-directional or directional group, i.e., the first or second group of secondary transform kernels k, based on the intra prediction mode, Col.17 Lin.45-56, or based on the intra prediction mode applied to the prediction of the target block, being selected from among the transform kernels as taught at Col.15 Lin.26-62); and performing an inverse secondary transform of the set of secondary transform coefficients to generate primary transform coefficients of the video block based on the selected secondary transform kernel (performing an inverse secondary transform to generate the primary transform, per Fig.4 at S450-S460, of the (2nd) transform coefficients, and Col.19 Lin.30-66…).
The art to Kerofsky expressly teaches grouping the secondary transform LFNST kernels based on the intra-prediction mode: determining a first group of secondary transform kernels when the intra prediction mode is one of Vertical mode (V_PRED), Horizontal mode (H_PRED), Smooth horizontal mode (SMOOTH_H_PRED) and Smooth Vertical mode (SMOOTH_V_PRED), the first group of secondary transform kernels having N number of secondary transform kernels (by having more than two TU classes, generating a plurality of kernels, where the TUs are grouped in classes and LFNST kernels may be designated for each class, Par.[0088-0090], comprising non-directional intra prediction modes, Par.[0136, 0161]); determining a second group of secondary transform kernels when the intra prediction mode is not one of Vertical mode (V_PRED), Horizontal mode (H_PRED), Smooth horizontal mode (SMOOTH_H_PRED) and Smooth Vertical mode (SMOOTH_V_PRED), the second group of secondary transform kernels having K number of secondary transform kernels (from among the two or more TU classes, generating a plurality of kernels, where the TUs are grouped in classes and LFNST kernels may be designated for each class, Par.[0088-0090], comprising directional intra prediction modes, Par.[0135, 0161]); selecting a secondary transform kernel from the first or second group of secondary transform kernels based on at least the intra prediction mode (performing the inverse LFNST kernel at decoder 300 based on the determined class, i.e., kernel group, Par.[0175-0176], according to the plurality of intra prediction modes, Par.[0177-0178]); and performing an inverse secondary transform of the set of secondary transform coefficients to generate primary transform coefficients of the video block based on the selected secondary transform kernel (regenerating the inverse primary transform from the selected secondary LFNST inverse transform kernels, Par.[0175]).
However, Koo and Kerofsky do not expressly teach the amended limitation, which is taught by Kim, according to: and when a size of the video block meets a predetermined condition, the first group of secondary transform kernels having N number of secondary transform kernels (where the first group, i.e., set, of secondary transforms is determined according to the predetermined size of the block, e.g., 8x8, having a set of 35 NSSTs, and for sizes other than 8x8 or 4x4, n sets may be configured and k transform kernels may be included in each set, at Par.[0091], according to an index indicating a first NSST kernel from a set at Par.[0097], which may be selected from mixed NSSTs of different sizes, with a fixed or variable number of NSST kernels, Par.[0102-0103], as mapped at Tables 3 or 4 and 5, Par.[0106-0111], etc.); determining a second group of secondary transform kernels [[when]] otherwise (and determining a second NSST kernel index, which may be selected from mixed NSSTs of different sizes, with a fixed or variable number of NSST kernels, Par.[0102-0103], as mapped at Tables 3 or 4 and 5, Par.[0106-0111], and where the number of NSST kernels included in each transform set may be different, Par.[0117], etc.).

One of ordinary skill in the art would have considered, before the effective filing date of the invention, the art to Koo teaching the decoding method of reconstructing the intra-predicted coded blocks based on a specific prediction mode, differentiating between the non-directional and directional modes by signaling mode kernels, and applying a reduced primary transform by processing a secondary transform, thereby reducing the size of the block and hence improving the transmission rate and memory usage, at Col.22 Lin.36-61, and signaling the transform kernel metric information, depending on the intra-prediction mode, to the decoder for reconstruction. Similarly, Kerofsky teaches the kernel grouping and its dependency on the plurality of intra prediction modes being applied.
Furthermore, Kim teaches the dependency of the number of NSST kernel sets on various block sizes, along with the intra-prediction modes applied for prediction, thereby enhancing compression efficiency at Par.[0073, 0116], such that the combination is predictable in view of the claimed scope.

Re Claim 2. (Original) Koo, Kerofsky and Kim disclose the method of claim 1, wherein selecting a secondary transform kernel from the first or second group of secondary transform kernels based on the intra prediction mode comprises: Kerofsky teaches selecting a group of secondary transform kernels from the first or second group of secondary transform kernels based on the intra prediction mode (grouping LFNST kernels, Par.[0090]); and selecting the secondary transform kernel from the selected group of secondary transform kernels based on a kernel index received from the video block in the video bitstream (and indexing the LFNST secondary transform kernels, Par.[0081, 0113-0114, 0118, 0121], by using the TU class).

Re Claim 3. (Original) Koo, Kerofsky and Kim disclose the method of claim 2. Koo teaches wherein a bit size of the kernel index depends on the intra prediction mode (the intra directional and non-directional modes, i.e., the first and the second intra modes, are represented by a mode index value, i.e., the bit-size, at Col.18 Lin.21-41). Kerofsky also teaches this limitation (Par.[0114-0119]).

Re Claim 4. (Original) Koo, Kerofsky and Kim disclose the method of claim 2. Koo teaches wherein the kernel index is entropy-coded in the video bitstream using different context models depending on which of the first group and second group of secondary transform kernels is used (entropy coding the RST and the corresponding kernel matrices, Col.89 Lin.56-67 to Col.90 Lin.1-16). Kerofsky teaches the entropy encoding (at least at Par.[0149]).

Re Claim 5.
(Original) Koo, Kerofsky and Kim disclose the method of claim 2. Koo teaches wherein, when numbers of kernels in the selected group of secondary transform kernels used for different video blocks in the video bitstream are different, entropy coding of binarized codewords of the kernel indexes of the different video blocks shares context modeling for at least one bin of the binarized codewords (encoding bins of a syntax element bin string based on context information, that is, a context model on a bin string of the transform index, Col.90 Lin.8-16).

Re Claim 6. (Original) Koo, Kerofsky and Kim disclose the method of claim 1. Koo teaches wherein N and K are different non-negative integers between 0 and 6 (e.g., to select a mode-based transform kernel, three non-separable secondary transform kernels may be configured per transform, i.e., three is a positive integer between 0-6, Col.17 Lin.45-46). Kerofsky teaches this limitation (for the secondary transform sets, the number of kernels is based on the transform dimension in classification, where the size of the set can be smaller or larger than the seven transform shape classes (i.e., the 0 to 6 range) created for the 4x4 LFNST case, Par.[0103]).

Re Claim 7. (Currently amended) This claim represents the video encoder comprising a memory for storing computer code and at least one processor (Koo: a memory and processor at Col.90 Lin.40-54, and the entropy encoder 240, Col.90 Lin.8-9), implementing at the encoder prediction loop each and every limitation of the method claim 1; hence it is rejected on the same mapped probe, mutatis mutandis.

Re Claim 8.
(Original) Koo, Kerofsky and Kim disclose the video encoder of claim 7. Koo teaches wherein the at least one processor is configured to execute the computer code to further determine a range of the kernel index based on the intra prediction mode (transform kernels are used according to a transform index range, being a range per Col.28 Lin.17-32 and Tables 5 and 6).

Re Claim 9. Koo, Kerofsky and Kim disclose the video encoder of claim 8. Kerofsky teaches wherein the at least one processor is configured to execute the computer code to further determine a bit size of the kernel index for encoding based on the range (the bit size, e.g., 15-bit, Par.[0151]).

Re Claim 10. (Original) This claim represents the video encoder comprising a memory for storing computer code and at least one processor (Koo: a memory and processor at Col.90 Lin.40-54, and the entropy encoder 240, Col.90 Lin.8-9), implementing at the encoder prediction loop each and every limitation of the method claim 6; hence it is rejected on the same mapped probe, mutatis mutandis.

Re Claim 11. (Original) This claim represents the video encoder comprising a memory for storing computer code and at least one processor (Koo: a memory and processor at Col.90 Lin.40-54, and the entropy encoder 240, Col.90 Lin.8-9), implementing at the encoder prediction loop each and every limitation of the method claim 4; hence it is rejected on the same mapped probe, mutatis mutandis.

Re Claim 12. (Original) This claim represents the video encoder comprising a memory for storing computer code and at least one processor (Koo: a memory and processor at Col.90 Lin.40-54, and the entropy encoder 240, Col.90 Lin.8-9), implementing at the encoder prediction loop each and every limitation of the method claim 5; hence it is rejected on the same mapped probe, mutatis mutandis.

Re Claim 13.
(Currently amended) This claim represents the method for processing a video block, performing each and every limitation at the encoder (Koo: generating a bitstream at the encoder, Col.3 Lin.4-8, Col.6 Lin.7-12) of the apparatus claim 7; hence it is rejected on the same mapped probe, mutatis mutandis.

Re Claim 14. (Original) Koo, Kerofsky and Kim disclose the method of claim 13. Koo teaches wherein the encoded syntax element for indicating the intra prediction mode associated with the video block enables a video decoder to select the group of secondary transform kernels from the first and second group of secondary transform kernels (the video/image information signaled by the encoder to decoder 300, via APS, PPS, SPS, VPS at Col.11 Lin.7-21, includes the intra prediction mode selection of non-directional or directional modes per Col.12 Lin.32-40).

Re Claim 15. (Original) Koo, Kerofsky and Kim disclose the method of claim 13. Koo teaches wherein the encoded kernel index enables a video decoder to select the secondary transform kernel from the group of secondary transform kernels (selecting the secondary transform of either the non-directional or directional group, i.e., the first or second group of secondary transform kernels k, based on the intra prediction mode, Col.17 Lin.45-56, or based on the intra prediction mode applied to the prediction of the target block, being selected from among the transform kernels as taught at Col.15 Lin.26-62, as determined from the kernel index, Col.15 Lin.37-40, of the respective transform kernel, or based on the intra prediction mode index value, Col.18 Lin.23-57).

Re Claim 16. (Original) Koo, Kerofsky and Kim disclose the method of claim 13. Koo teaches wherein a bit size of the kernel index is determined based on the intra prediction mode (the intra directional and non-directional modes, i.e., the first and the second intra modes, are represented by a mode index value, i.e., the bit-size, at Col.18 Lin.21-41).
Kerofsky also teaches this limitation (Par.[0114-0119]).

Re Claim 17. (Original) This claim represents the method for processing a video block, performing each and every limitation at the encoder (Koo: generating a bitstream at the encoder, Col.3 Lin.4-8, Col.6 Lin.7-12) of the apparatus claim 10; hence it is rejected on the same mapped probe, mutatis mutandis.

Re Claim 18. This claim represents the method for processing a video block, performing each and every limitation at the encoder (Koo: generating a bitstream at the encoder, Col.3 Lin.4-8, Col.6 Lin.7-12) of the apparatus claim 11; hence it is rejected on the same mapped probe, mutatis mutandis.

Re Claim 19. This claim represents the method for processing a video block, performing each and every limitation at the encoder (Koo: generating a bitstream at the encoder, Col.3 Lin.4-8, Col.6 Lin.7-12) of the apparatus claim 12; hence it is rejected on the same mapped probe, mutatis mutandis.

Re Claim 20. This claim represents the video decoder comprising a memory for storing computer code and at least one processor (Koo: a memory and processor at Col.90 Lin.40-54, and the entropy encoder 240, Col.90 Lin.8-9), implementing each and every limitation of the method claim 1; hence it is rejected on the same mapped evidentiary probe, mutatis mutandis.

Conclusion
7. Applicant's submission of an information disclosure statement under 37 CFR 1.97(c) with the fee set forth in 37 CFR 1.17(p) on 12/17/2025 prompted the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 609.04(b). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVE J CZEKAJ. The examiner can normally be reached on 8-6:00 Monday-Thursday and every other Friday.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DRAMOS KALAPODAS/
Primary Examiner, Art Unit 2487
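The group-selection logic recited in claim 1 (first kernel group for the four listed intra modes, second group otherwise, then an inverse secondary transform producing primary transform coefficients) can be sketched in code. This is a minimal illustration only: the mode names match the claim language, but the group sizes, kernel construction, and `select_kernel_group` / `inverse_secondary_transform` helpers are hypothetical and are not drawn from Koo, Kerofsky, Kim, or any codec specification.

```python
import numpy as np

# The four modes the claim assigns to the first kernel group.
FIRST_GROUP_MODES = {"V_PRED", "H_PRED", "SMOOTH_H_PRED", "SMOOTH_V_PRED"}

def select_kernel_group(intra_mode, first_group, second_group):
    """Pick the first group (N kernels) for the listed modes, else the second (K kernels)."""
    return first_group if intra_mode in FIRST_GROUP_MODES else second_group

def inverse_secondary_transform(coeffs, kernel):
    """Map secondary transform coefficients back to primary transform
    coefficients; with an orthonormal kernel, the inverse is the transpose."""
    return kernel.T @ coeffs

# Toy setup: N=2 and K=3 random orthonormal 16x16 kernels stand in for
# trained LFNST/NSST matrices.
rng = np.random.default_rng(0)

def random_orthonormal(n):
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

first = [random_orthonormal(16) for _ in range(2)]
second = [random_orthonormal(16) for _ in range(3)]

group = select_kernel_group("V_PRED", first, second)
kernel = group[0]  # a real decoder would parse the kernel index from the bitstream
primary = inverse_secondary_transform(rng.standard_normal(16), kernel)
```

The orthonormal-kernel assumption is what makes the inverse a simple transpose; deployed secondary transforms use fixed trained matrices, but the selection flow is the same.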

Prosecution Timeline

May 20, 2024
Application Filed
Aug 05, 2025
Non-Final Rejection — §103
Nov 05, 2025
Response Filed
Jan 29, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604039: SIGN PREDICTION FOR BLOCK-BASED VIDEO CODING (2y 5m to grant; granted Apr 14, 2026)
Patent 12598327: RESIDUAL CODING CONSTRAINT FLAG SIGNALING (2y 5m to grant; granted Apr 07, 2026)
Patent 12598301: BDPCM-BASED IMAGE CODING METHOD AND DEVICE THEREFOR (2y 5m to grant; granted Apr 07, 2026)
Patent 12593044: DEEP CONTEXTUAL VIDEO IMAGE COMPRESSION (2y 5m to grant; granted Mar 31, 2026)
Patent 12593022: STEREOSCOPIC DISPLAY SYSTEM AND LIQUID CRYSTAL SHUTTER DEVICE (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 99% (+28.2%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 713 resolved cases by this examiner. Grant probability derived from career allow rate.
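The headline grant figure follows directly from the career data shown above (562 granted of 713 resolved). The with-interview adjustment below is an assumption on my part, a multiplicative lift capped at 99%, since the page does not disclose its exact formula:

```python
# Reproduce the 79% career allow rate from the examiner's resolved cases.
granted, resolved = 562, 713
career_allow_rate = granted / resolved            # about 0.788, shown as 79%

# Assumed model for the with-interview figure: apply the +28.2% lift
# multiplicatively and cap the result at 99%.
interview_lift = 0.282
with_interview = min(career_allow_rate * (1 + interview_lift), 0.99)

print(round(career_allow_rate * 100))             # prints 79
print(round(with_interview * 100))                # prints 99
```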
