Prosecution Insights
Last updated: April 19, 2026
Application No. 17/651,956

COUNTER-BASED INTRA PREDICTION MODE

Status: Final Rejection (§103)
Filed: Feb 22, 2022
Examiner: XU, XIAOLAN
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: Bytedance (Hk) Limited
OA Round: 8 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 9-10
Time to Grant: 2y 11m
Grant Probability With Interview: 87%

Examiner Intelligence

Career Allow Rate: 74% (above average; 247 granted / 334 resolved; +16.0% vs TC avg)
Interview Lift: +13.3% (moderate lift, measured on resolved cases with an interview)
Typical Timeline: 2y 11m average prosecution; 37 applications currently pending
Career History: 371 total applications across all art units
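The headline figures above can be reproduced from the career counts shown. A minimal sketch follows; the formulas are the obvious ratio and additive lift in percentage points, not a documented vendor methodology, so treat them as an assumption about how the dashboard computes its numbers:

```python
# Reproduce the headline examiner statistics from the career counts above.
granted, resolved = 247, 334

# Career allow rate: granted applications over all resolved applications.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # -> 74%

# Interview lift, assumed to be additive in percentage points.
base_prob = 74.0       # grant probability, percent
interview_lift = 13.3  # lift from an examiner interview, percentage points
print(f"With interview: {base_prob + interview_lift:.0f}%")  # -> 87%
```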

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 334 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 03/17/2026 have been fully considered but they are not persuasive.

AKIYUKI discloses a frequency table (page 2 paragraph 3, a frequency information table storing mode information indicating each of the one or more prediction modes) that records frequencies of one or more intra prediction modes, used for coding video blocks processed before the current video block and without spatial positional constraint relative to the current video block, in order of frequency (page 5 paragraph 4, the frequency information table generation unit 202 tabulates the frequency of the prediction information 209 of the pixel block coded up to now (the pixel block coded up to now has no spatial positional constraint relative to the current video block); page 5 next to last paragraph, the prediction mode with high frequency among the prediction modes used for encoding exists in the higher level in the table; page 11 paragraph 2, by updating the frequency information table, the most frequent information always comes to the top of the table; page 11 paragraph 3, select the prediction mode existing in the upper part of the table with high frequency).

KO discloses that each entry in a frequency table indicates a number of times a corresponding intra prediction mode was used for coding video blocks processed before a current video block ([0200] An order for padding a candidate mode corresponding from idx1 to idxN to the MPM list may be adaptively determined on the basis of a number of frequencies of an intra mode of neighbor blocks; [0227] a number of occurrence frequencies of each intra mode of neighbor blocks may be checked. Inherently, a frequency table is established; [0067] a neighbor block may mean a block adjacent to a current block. The block adjacent to the current block may mean a block that comes into contact with a boundary of the current block, or a block positioned within a predetermined distance from the current block. The neighbor block may mean a block adjacent to a vertex of the current block. Herein, the block adjacent to the vertex of the current block may mean a block vertically adjacent to a neighbor block that is horizontally adjacent to the current block, or a block horizontally adjacent to a neighbor block that is vertically adjacent to the current block).

Therefore, AKIYUKI in view of KO discloses a frequency table that records frequencies of one or more intra prediction modes in order of frequency, where each entry in the frequency table indicates a number of times a corresponding intra prediction mode was used for coding video blocks processed before a current video block and without spatial positional constraint relative to the current video block.

JUN discloses that the frequency table and the IPM table are in one-to-one correspondence through indexes and that sorting of the IPM table is updated when the frequency table is updated ([0198] the intra-prediction mode of the block with the highest occurrence frequency among the neighboring intra-prediction blocks based on a size of the current block may be obtained; [0240] The secondary IPM candidate modes may be constructed including a mode having a high occurrence frequency of being selected as the intra-prediction mode; [0248] The secondary IPM candidate modes may include up to N candidate modes according to intra-prediction modes that have high occurrence frequency by deriving statistics of actual intra prediction modes encoded in a previously encoded picture or slice; [0250] L and A may be obtained from intra-prediction modes that have the highest occurrence frequency in a left block and an upper block based on a current block and are already encoded (inherently, the IPM candidate modes/intra-prediction modes that have high occurrence frequency are updated for each picture or slice or block, i.e., the frequency table and the IPM table are in one-to-one correspondence through indexes)).

JUN discloses the frequency table because JUN discloses that the secondary IPM candidate modes may include up to N candidate modes according to intra-prediction modes that have high occurrence frequency. Therefore, the secondary IPM candidate modes include the frequency table, and the two are in one-to-one correspondence through indexes; otherwise, the secondary IPM candidate modes could not be generated.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3-6, 10-14, 16, 19-20, 23-29 are rejected under 35 U.S.C. 103 as being unpatentable over AKIYUKI et al. (WO2008123254A1) in view of KO et al. (US 20200275124 A1) and JUN et al. (US 20180316913 A1).

Regarding claims 1, 14, 19-20: AKIYUKI discloses a method of processing video data (page 1 paragraph 1, an image coding and decoding method and apparatus for moving images), comprising: maintaining, for a conversion between a video region comprising one or more video blocks and a bitstream of the video region (figure 1, figure 5A, figure 5B, page 3 paragraphs 2-4), a frequency table (page 2 paragraph 3, a frequency information table storing mode information indicating each of the one or more prediction modes) that records frequencies of one or more intra prediction modes in order of frequency (page 5 paragraph 4, the frequency information table generation unit 202 tabulates the frequency of the prediction information 209 of the pixel block coded up to now); determining, for the current video block of the one or more video blocks, that a first intra prediction mode is applied to derive prediction samples of the current video block based on the frequency table (page 7 next to last paragraph, as a prediction method of the predictor 204, an example of using intraframe prediction of H.264 is shown; page 7 paragraph 4, predictor 204, the control unit 210 sets the prediction mode corresponding to the prediction mode selected by the frequency information table extraction unit 201 by the prediction mode setting unit 203); and performing the conversion based on the frequency table (page 7 paragraphs 4-5); wherein the one or more intra prediction modes comprise the first intra prediction mode (page 7 next to last paragraph, as a prediction method of the predictor 204, an example of using intraframe prediction of H.264 is shown); wherein the frequency table is updated based on the first intra prediction mode (page 5 paragraph 4, when coding the pixel block, the frequency information table of the frequency information table generation unit 202 is updated (rearranged) according to the prediction information 209 given from the control unit 210; page 5 paragraph 5, the frequency information table is updated every time the mode determination of one pixel block is completed according to the number of the selected prediction mode); and wherein the updated frequency table is used to derive an intra prediction mode for a video block subsequent to the current video block (figure 7, page 5 paragraph 5, updating of the frequency information table: the frequency information table is updated every time the mode determination of one pixel block is completed according to the number of the selected prediction mode; page 5 next to last paragraph, L = 1 << N (3): in the frequency information table, since the prediction mode with high frequency among the prediction modes used for encoding exists in the higher level in the table, that prediction mode is more likely to be predicted, and a predicted image is generated only in that prediction mode; prediction using this method is hereinafter referred to as flexible mode prediction (inherently, the table is used to predict the prediction mode for subsequent blocks)). 
KO discloses that each entry in a frequency table indicates a number of times a corresponding intra prediction mode was used for coding video blocks processed before a current video block ([0200] An order for padding a candidate mode corresponding from idx1 to idxN to the MPM list may be adaptively determined on the basis of a number of frequencies of an intra mode of neighbor blocks; [0227] a number of occurrence frequencies of each intra mode of neighbor blocks may be checked. Inherently, a frequency table is established).

Also, AKIYUKI discloses that the frequency table includes the frequency information for coding video blocks processed before the current video block and without spatial positional constraint relative to the current video block (page 5 paragraph 4, the frequency information table generation unit 202 tabulates the frequency of the prediction information 209 of the pixel block coded up to now (the pixel block coded up to now has no spatial positional constraint relative to the current video block)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of AKIYUKI according to the invention of KO, to establish a frequency table wherein each entry in the frequency table indicates a number of times a corresponding intra prediction mode was used for coding video blocks processed before the current video block and without spatial positional constraint relative to the current video block, in order to implement the flexible mode prediction more conveniently. 
JUN discloses an intra prediction mode (IPM) table is constructed for the current video block based on the frequency table, the IPM table is updated after the frequency table is updated, the IPM table comprises X entries, X is an integer which is not smaller than two, and the X entries represents X modes with highest frequency in the frequency table ([0198] the intra-prediction mode of the block with the highest occurrence frequency among the neighboring intra-prediction blocks based on a size of the current block may be obtained; [0240] The secondary IPM candidate modes may be constructed including a mode having a high occurrence frequency of being selected as the intra-prediction mode; [0248] The secondary IPM candidate modes may include up to N candidate modes according to intra-prediction modes that have high occurrence frequency by deriving statistics of actual intra prediction modes encoded in a previously encoded picture or slice; [0250] L and A may be obtained from intra-prediction modes that have the highest occurrence frequency in a left block and an upper block based on a current block and are already encoded). 
JUN discloses wherein a prediction mode list comprising intra prediction modes is constructed for the current video block based on the IPM table when a frequency-based intra mode is enabled, and wherein the first intra prediction mode is determined based on the prediction mode list ([0198] the intra-prediction mode of the block with the highest occurrence frequency among the neighboring intra-prediction blocks based on a size of the current block may be obtained; [0240] The secondary IPM candidate modes may be constructed including a mode having a high occurrence frequency of being selected as the intra-prediction mode; [0248]; [0250] (it is obvious to combine the concept of two tables into the concept of one table, wherein the modes with the highest frequency are included)); wherein a size of the prediction mode list is N, and N intra prediction modes in the prediction mode list are derived from N modes with higher frequency in the IPM table, wherein N is an integer number ([0198]; [0240]; [0248]; [0250]); and wherein the frequency table and the IPM table are in one-to-one correspondence through indexes and sorting of the IPM table is updated when the frequency table is updated ([0198]; [0240]; [0248] The secondary IPM candidate modes may include up to N candidate modes according to intra-prediction modes that have high occurrence frequency by deriving statistics of actual intra prediction modes encoded in a previously encoded picture or slice; [0250] L and A may be obtained from intra-prediction modes that have the highest occurrence frequency in a left block and an upper block based on a current block and are already encoded (inherently, the IPM candidate modes/intra-prediction modes that have high occurrence frequency are updated for each picture or slice or block)).

AKIYUKI also discloses that rearrangement (sorting) is performed on prediction modes of pixel blocks adjacent above and to the left of the encoding target pixel block (page 5 paragraph 5). Since the prediction mode with high frequency among the prediction modes used for encoding exists in the higher level in the table, that prediction mode is more likely to be predicted (page 5 next to last paragraph).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of AKIYUKI, KO and JUN, to construct an intra prediction mode (IPM) table/prediction mode list based on the frequency table, in order to implement the flexible mode prediction more conveniently.

Regarding claims 3, 23: AKIYUKI discloses the method of claim 1, wherein the current video block comprises a luma coding block (page 1 paragraph 3, three kinds of intraframe prediction methods are specified for the luminance signal).

Regarding claims 4, 16, 28: AKIYUKI discloses the method of claim 1, wherein the frequency table is associated with M entries, wherein M is an integer number, and wherein each entry is associated with a frequency of one intra prediction mode among M intra prediction modes (page 5 paragraph 4, the frequency information table generation unit 202 tabulates the frequency of the prediction information 209 of the pixel block coded up to now).

Regarding claims 5, 24: 
AKIYUKI discloses the method of claim 4, wherein the M intra prediction modes exclude a wide-angular intra prediction mode (figure 9, page 2 paragraph 4, FIG. 9 is a table showing the name of the intra prediction method, wherein a wide-angular mode is not included).

Regarding claims 6, 25: AKIYUKI discloses the method of claim 4, wherein the M intra prediction modes include at least one of a direct current (DC) mode, a horizontal mode, a vertical mode, or a bilinear intra prediction mode (figure 9, page 2 paragraph 4, FIG. 9 is a table showing the name of the intra prediction method; page 13 paragraph 6, generate a new predicted image signal from the two predicted image signals).

Regarding claim 12: AKIYUKI discloses the method of claim 1, wherein the conversion includes encoding the current video block into the bitstream (page 1 paragraph 1, an image coding and decoding method and apparatus for moving images).

Regarding claim 13: AKIYUKI discloses the method of claim 1, wherein the conversion includes decoding the current video block from the bitstream (page 1 paragraph 1, an image coding and decoding method and apparatus for moving images).

Regarding claims 10, 26, 29: JUN discloses the method of claim 1, wherein N and X are both equal to 2 ([0218] construct M (for example, 3, 4, 5, 6, etc.) MPM candidate modes by obtaining neighboring IPM values). The same motivation has been stated for claim 1.

Regarding claims 11, 27: 
AKIYUKI discloses the method of claim 1, wherein indications of whether the frequency-based intra mode is enabled are included in a sequence parameter set and a picture header for the current video block (page 9 next to last paragraph – page 10 first paragraph, seq_flexble_mode_prediction_flag, pic_flexble_mode_prediction_flag; page 5 next to last paragraph, L = 1 << N (3): in the frequency information table, since the prediction mode with high frequency among the prediction modes used for encoding exists in the higher level in the table, that prediction mode is more likely to be predicted, and a predicted image is generated only in that prediction mode; prediction using this method is hereinafter referred to as flexible mode prediction).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU, whose telephone number is (571) 270-7580. The examiner can normally be reached Mon. to Fri., 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SATH V. PERUNGAVOOR, can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAOLAN XU/
Primary Examiner, Art Unit 2488
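The mechanism at the center of the rejection (a frequency table counting how often each intra prediction mode was used for previously coded blocks, with an IPM table of the X most frequent modes re-derived whenever the frequency table updates) can be sketched as follows. This is an illustrative sketch only; the class name, the mode names, and the value of X are hypothetical and are not taken from the claims or the cited references:

```python
from collections import Counter

class FrequencyModePredictor:
    """Minimal sketch of a counter-based intra prediction scheme:
    a frequency table counts how often each intra mode was used for
    blocks coded so far (with no spatial constraint on which blocks
    count), and an IPM table of the X most frequent modes is kept in
    one-to-one index correspondence with the sorted frequencies."""

    def __init__(self, x=2):
        self.freq = Counter()  # mode -> times used for previous blocks
        self.x = x             # number of entries in the derived IPM table

    def record(self, mode):
        """Update the frequency table after a block is coded with `mode`;
        the IPM table ordering is re-derived on the next query."""
        self.freq[mode] += 1

    def ipm_table(self):
        """The X modes with the highest frequency, most frequent first."""
        return [m for m, _ in self.freq.most_common(self.x)]

# Illustrative mode names (DC/VER/HOR), not the claimed mode set.
pred = FrequencyModePredictor(x=2)
for mode in ["DC", "VER", "DC", "HOR", "DC", "VER"]:
    pred.record(mode)
print(pred.ipm_table())  # -> ['DC', 'VER']
```

The sketch also shows why the one-to-one correspondence argument matters: the IPM table here is nothing but an index-ordered view of the sorted frequency table, so updating one necessarily re-sorts the other.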

Prosecution Timeline

Feb 22, 2022: Application Filed
Sep 28, 2023: Non-Final Rejection — §103
Jan 03, 2024: Response Filed
Mar 21, 2024: Final Rejection — §103
May 28, 2024: Response after Non-Final Action
Jul 26, 2024: Request for Continued Examination
Jul 30, 2024: Response after Non-Final Action
Jul 31, 2024: Response after Non-Final Action
Aug 10, 2024: Non-Final Rejection — §103
Nov 15, 2024: Response Filed
Dec 14, 2024: Final Rejection — §103
Feb 19, 2025: Response after Non-Final Action
Mar 18, 2025: Request for Continued Examination
Mar 21, 2025: Response after Non-Final Action
Mar 26, 2025: Non-Final Rejection — §103
Jul 01, 2025: Response Filed
Jul 26, 2025: Final Rejection — §103
Oct 10, 2025: Examiner Interview Summary
Oct 10, 2025: Applicant Interview (Telephonic)
Oct 17, 2025: Response after Non-Final Action
Nov 11, 2025: Request for Continued Examination
Nov 16, 2025: Response after Non-Final Action
Dec 13, 2025: Non-Final Rejection — §103
Mar 16, 2026: Response Filed
Apr 03, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598315: IMAGE ENCODING/DECODING METHOD AND DEVICE FOR DETERMINING SUB-LAYERS ON BASIS OF REQUIRED NUMBER OF SUB-LAYERS, AND BIT-STREAM TRANSMISSION METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586255: CONFIGURABLE POSITIONS FOR AUXILIARY INFORMATION INPUT INTO A PICTURE DATA PROCESSING NEURAL NETWORK (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587652: IMAGE CODING DEVICE AND METHOD (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581120: Method and Apparatus for Signaling Tile and Slice Partition Information in Image and Video Coding (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581092: TEMPORAL INITIALIZATION POINTS FOR CONTEXT-BASED ARITHMETIC CODING (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 9-10
Grant Probability: 74%
With Interview: 87% (+13.3%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 334 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month