Prosecution Insights
Last updated: April 19, 2026
Application No. 19/019,035

METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING

Non-Final OA (§102, §112)
Filed: Jan 13, 2025
Examiner: FEREJA, SAMUEL D
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Bytedance Inc.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 75% (above average): 458 granted / 614 resolved (+16.6% vs TC avg)
Interview Lift: +11.8% (moderate) among resolved cases with interview
Typical Timeline: 2y 8m avg prosecution; 66 currently pending
Career History: 680 total applications across all art units

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 64.1% (+24.1% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)

Deltas are relative to the Tech Center average estimate • Based on career data from 614 resolved cases

Office Action

Rejections: §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) were submitted on 01/13/2025 and 12/17/2025. The submissions are in compliance with the provisions of 37 CFR § 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claims 5 and 13 are objected to because of the following informalities: Claim 5 recites the "TIMD" and "DIMD" acronyms without prior definition. The claim also recites "a TIMD based prediction derivation, or a DIMD based mode derivation" twice. Claim 13 recites the "MPM" acronym without definition. Appropriate correction is required for the purpose of clarity.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 13 is rejected under 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention. Specifically, claim 13 recites the limitation "the MPM list," for which there is insufficient antecedent basis in the claims. The MPM acronym also needs to be defined.

Claim 20 is rejected under 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.
The claim is directed to "storing instructions" and/or "storing bitstreams" but does not recite any steps related to "storing instructions" or "storing bitstreams"; therefore, the scope of the claim is vague and indefinite.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by LI et al. (US 20260019569, hereinafter LI).

Regarding Claim 1, LI discloses a method of video processing, comprising: generating, for a conversion between a video unit of a video and a bitstream of the video, an intra mode for the video unit based on coding information associated with the video unit ([0085], FIG. 5, intra coded block uses predictive information from previously reconstructed parts of the current picture provided by an intra picture prediction unit (552); [0007]), wherein the video unit is an intra template matching (TM) coded block ([0022], application of a template matching based block vector refinement for coding a current block in a current picture referencing (CPR) mode) or the video unit is an intra block copy (IBC) coded block ([0023], the current picture referencing (CPR) mode is an intra block copy (IBC) mode); and performing the conversion based on the generated intra mode ([0080], information used to manage operation of the video decoder (510), and potentially information to control a rendering device such as a render device (512)).
Regarding Claim 2, LI discloses the method of claim 1, wherein generating the intra mode comprises: generating the intra mode based on coding information of the intra TM coded block ([0022], processing circuitry extracts, from a bitstream, a signal indicative of an application of a template matching based block vector refinement for coding a current block in a current picture referencing (CPR) mode).

Regarding Claim 3, LI discloses the method of claim 2, wherein the generated intra mode is derived based on a gradient of the intra TM coded block, and/or wherein the generated intra mode is derived based on a block vector of the intra TM coded block ([0022], the first refined block vector includes a first block vector refinement offset applied on a first block vector associated with the current block).

Regarding Claim 4, LI discloses the method of claim 3, wherein based on the gradient, a converted or mapped intra mode is generated and stored as the intra mode of the intra TM coded block, and/or wherein a direction of the block vector is utilized, and/or wherein based on the direction or an angle of the block vector, a converted or mapped intra mode is generated and stored as the intra mode of the intra TM coded block ([0022], the first refined block vector includes a first block vector refinement offset applied on a first block vector associated with the current block. The processing circuitry reconstructs the current block according to a first reference block in a same picture as the current block, the first reference block being indicated by the first refined block vector).
Regarding Claim 5, LI discloses the method of claim 1, wherein the generated intra mode of the intra TM coded block is stored in a buffer, and/or wherein an intra TM based prediction block is combined with a second prediction block, and/or wherein a block vector of the intra TM coded block is used for coding of a latter block, wherein the coding of the latter block comprises at least one of the following processes: an IBC prediction list generation, a TIMD based mode derivation, a TIMD based prediction derivation, a DIMD based mode derivation, or a DIMD based prediction derivation ([0023], the CPR mode is an intra block copy (IBC) mode. The processing circuitry decodes, from the bitstream, a block vector difference associated with the current block with a first precision indicated by an adaptive motion vector resolution (AMVR) syntax.).

Regarding Claim 6, LI discloses the method of claim 5, wherein a prediction block of the intra TM coded block is not generated based on the generated intra mode, and/or wherein the stored intra mode of the intra TM coded block is used for coding of a latter block, and/or wherein the intra TM based prediction block is combined with a decoder side intra mode derivation (DIMD) prediction block, and/or wherein the intra TM based prediction block is combined with a template-based intra mode derivation (TIMD) prediction block, and/or wherein the intra TM based prediction block is combined with a multi-hypothesis prediction (MHP) prediction block, and/or ([0204] A template matching (TM) technique can be used in video/image coding. To further improve the compression efficiency of VVC standard, for example, TM can be used to refine an MV.
In an example, the TM is used at a decoder side) wherein the intra TM based prediction block is combined with an intra block copy (IBC) prediction block, and/or wherein the IBC prediction list generation comprises at least one of: an advanced motion vector prediction (AMVP) list for regular IBC mode, a merge list for regular IBC mode, an AMVP list for IBC-TM mode, a merge list for IBC-TM mode, an AMVP list for IBC-merge mode with motion vector difference (MMVD) mode, a merge list for IBC-MMVD mode, an AMVP list for reconstruction reordered IBC (RRIBC), or a merge list for RRIBC ([0023], the CPR mode is an intra block copy (IBC) mode. The processing circuitry decodes, from the bitstream, a block vector difference associated with the current block with a first precision indicated by an adaptive motion vector resolution (AMVR) syntax. The first block vector refinement offset is finer than or equal to the first precision).

Regarding Claim 7, LI discloses the method of claim 1, wherein an intra TM is used for a chroma block in at least one of: a single tree or a dual tree ([0167], when the current coding tree type is SINGLE_TREE, a chroma block always has a corresponding luma block. In the IBC mode, the BV of the chroma block can be derived from the BV of the corresponding luma block, with proper scaling according to the chroma sampling format (e.g., 4:2:0, 4:2:2) and chroma BV precision).

Regarding Claim 8, LI discloses the method of claim 7, wherein in the dual tree, a block vector of a chroma intra TM is derived based on a block vector of a corresponding luma intra TM, and/or wherein in the dual tree, an intra TM flag of the chroma block is inherited from a corresponding luma block ([0144] Chroma sample interpolation can be performed in the IBC mode. In some examples, chroma sample interpolation is only necessary when a chroma BV is a non-integer when the chroma BV is derived from a corresponding luma BV.
In some examples, luma sample interpolation and chroma sample interpolation can be performed in the regular inter prediction mode).

Regarding Claim 9, LI discloses the method of claim 1, wherein generating the intra mode comprises: generating the intra mode based on coding information of a reference block of the intra TM coded block ([0204] A template matching (TM) technique can be used in video/image coding. To further improve the compression efficiency of VVC standard, for example, TM can be used to refine an MV. In an example, the TM is used at a decoder side. With the TM mode, an MV can be refined by constructing a template (e.g., a current template) of a block (e.g., a current block) in a current picture and determining the closest matching between the template of the block in the current picture and a plurality of possible templates (e.g., a plurality of possible reference templates) in a reference picture. In an embodiment, the template of the block in the current picture can include left neighboring reconstructed samples of the block and above neighboring reconstructed samples of the block. The TM can be used in video/image coding beyond VVC).
Regarding Claim 10, LI discloses the method of claim 9, wherein the reference block is derived based on a block vector of the intra TM coded block, and/or wherein the generated intra mode of the intra TM coded block is derived based on at least one of the following: an intra mode of the reference block, a gradient of the reference block, a prediction mode of the reference block, or coding information of a coding unit that covers the reference block, and/or wherein if a coding unit (CU) that covers the reference block comprises a plurality of candidate blocks, one intra mode of a plurality of intra modes is selected for the generated intra mode, and/or wherein if at least one of: the reference block or the CU which covers the reference block is not coded by at least one of: an intra mode, an IBC mode, or an intra TM mode, a planar mode is used as the generated intra mode ([0204] A template matching (TM) technique can be used in video/image coding. To further improve the compression efficiency of VVC standard, for example, TM can be used to refine an MV. In an example, the TM is used at a decoder side. With the TM mode, an MV can be refined by constructing a template (e.g., a current template) of a block (e.g., a current block) in a current picture and determining the closest matching between the template of the block in the current picture and a plurality of possible templates (e.g., a plurality of possible reference templates) in a reference picture. In an embodiment, the template of the block in the current picture can include left neighboring reconstructed samples of the block and above neighboring reconstructed samples of the block. The TM can be used in video/image coding beyond VVC).
Regarding Claim 11, LI discloses the method of claim 10, wherein a rule of the selection is based on prediction modes of the plurality of candidate blocks, and/or wherein a rule of the selection is based on a pre-defined candidate blocks check order, and/or wherein a rule of the selection is based on a location of the plurality of candidate blocks relative to the intra TM block, and/or wherein a rule of the selection is based on template costs ([0204] A template matching (TM) technique can be used in video/image coding. To further improve the compression efficiency of VVC standard, for example, TM can be used to refine an MV. In an example, the TM is used at a decoder side).

Regarding Claim 12, LI discloses the method of claim 1, wherein the generated intra mode of the intra TM coded block is used for coding of a latter block, wherein the coding of the latter block comprises at least one of the following processes: a most probable mode (MPM) list generation, a TIMD mode derivation, a TIMD prediction derivation, a DIMD based mode derivation, a DIMD based prediction derivation, a fusion based mode derivation, a fusion based prediction derivation, or a deblocking filter, and/or wherein an IBC based prediction block is combined with a second prediction block ([0204] A template matching (TM) technique can be used in video/image coding. To further improve the compression efficiency of VVC standard, for example, TM can be used to refine an MV and is used at a decoder side. With the TM mode, an MV can be refined by constructing a template (e.g., a current template) of a block (e.g., a current block) in a current picture and determining the closest matching between the template of the block in the current picture and a plurality of possible templates (e.g., a plurality of possible reference templates) in a reference picture.
In an embodiment, the template of the block in the current picture can include left neighboring reconstructed samples of the block and above neighboring reconstructed samples of the block).

Regarding Claim 13, LI discloses the method of claim 12, wherein the MPM list generation comprises at least one of: a MPM list for regular intra mode, a MPM list for geometric partitioning mode (GPM)-intra-inter mode, a MPM list for spatial GPM intra mode, a MPM list for multi linear regression intra prediction (MIP) mode, a MPM list for TIMD mode, a first MPM list used for video coding, or a second MPM list used for video coding, and/or wherein the deblocking filter comprises a deblocking strength ([0017] mapping of intra prediction direction bits that represent the direction in the coded video bitstream can be different from video coding technology to video coding technology. Such mapping can range, for example, from simple direct mappings, to codewords, to complex adaptive schemes involving most probable modes, and similar techniques), or wherein the deblocking filter comprises a filter strength of a deblocking process ([0142], special handling of the IBC mode may be necessary for implementation and performance reasons, and the IBC mode and the inter prediction mode (e.g., the regular inter prediction mode) can differ, such as described below. In an example, reference samples used in the IBC mode are unfiltered (e.g., reconstructed samples before in-loop filtering processes, such as a DBF and a sample adaptive offset (SAO) filter are applied). Other inter prediction modes (e.g., the regular inter prediction mode) of HEVC can use filtered samples, for example, reference samples that are filtered by the in-loop filtering processes).
Regarding Claim 14, LI discloses the method of claim 12, wherein the IBC based prediction block is combined with a DIMD prediction block, and/or wherein the IBC based prediction block is combined with a TIMD prediction block, and/or wherein the IBC based prediction block is combined with a MHP prediction block, and/or wherein the IBC based prediction block is combined with an intra TM prediction block ([0204] A template matching (TM) technique can be used in video/image coding. To further improve the compression efficiency of VVC standard, for example, TM can be used to refine an MV. In an example, the TM is used at a decoder side. With the TM mode, an MV can be refined by constructing a template (e.g., a current template) of a block (e.g., a current block) in a current picture and determining the closest matching between the template of the block in the current picture and a plurality of possible templates (e.g., a plurality of possible reference templates) in a reference picture. In an embodiment, the template of the block in the current picture can include left neighboring reconstructed samples of the block and above neighboring reconstructed samples of the block. The TM can be used in video/image coding beyond VVC).

Regarding Claim 15, LI discloses the method of claim 1, further comprising: determining at least one coding tool to be disabled for a screen content coding of the video unit ([0135] Some IBC coding tools are used in the HEVC Screen Content Coding (SCC) extensions as current picture referencing (CPR)).
Regarding Claim 16, LI discloses the method of claim 15, wherein the at least one coding tool is from the following: a decoder side intra mode derivation (DIMD), a variant of DIMD, a template-based intra mode derivation (TIMD), a variant of TIMD, an overlapped block motion compensation (OBMC), a variant of OBMC, a local illumination compensation (LIC), a variant of LIC, a multi-hypothesis prediction (MHP), a variant of MHP, a combined intra/inter prediction mode (CIIP), a variant of CIIP, a fusion process of two predictions, a deblocking, a variant of deblocking, an adaptive loop filter (ALF), a variant of ALF, a screen content coding (SCC), a variant of SCC, a bilateral filter, or a variant of bilateral filter, and/or wherein a high-level syntax is indicated at one of the following levels to indicate whether the at least one coding tool is disabled for the screen content coding: a sequence parameter set (SPS), a picture parameter set (PPS), a picture header (PH), or a sequence header (SH) ([0135] Some IBC coding tools are used in the HEVC Screen Content Coding (SCC) extensions as current picture referencing (CPR). The IBC mode can use coding technologies that are used for inter prediction where a current picture is used as a reference picture in the IBC mode. A benefit of using the IBC mode is a referencing structure of the IBC mode where a two-dimensional (2D) spatial vector can be used as the representation of an addressing mechanism to reference samples. A benefit of an architecture of the IBC mode is that the integration of IBC requires relatively minor changes to the specification and can ease the implementation burden if manufacturers have already implemented certain inter prediction technologies, such as the HEVC version 1).
Regarding Claim 17, LI discloses the method of claim 1, wherein the conversion includes encoding the video unit into the bitstream, or wherein the conversion includes decoding the video unit from the bitstream (Abstract, provide methods and apparatuses for video encoding/decoding).

Regarding Claim 18, apparatus claim 18 recites the use of the corresponding method claimed in claim 1, and the rejections of claim 1 are incorporated herein for the same reasons as used above.

Regarding Claim 19, computer-readable medium claim 19 recites the use of the corresponding method claimed in claim 1, and the rejections of claim 1 are incorporated herein for the same reasons as used above.

Regarding Claim 20, computer-readable medium claim 20 recites the use of the corresponding method claimed in claim 1, and the rejections of claim 1 are incorporated herein for the same reasons as used above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samuel D Fereja, whose telephone number is (469) 295-9243. The examiner can normally be reached 8AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, DAVID CZEKAJ, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAMUEL D FEREJA/
Primary Examiner, Art Unit 2487
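For readers unfamiliar with the prior-art technique the rejections lean on: the decoder-side template matching that LI's [0204] describes refines a vector by building an L-shaped template from the reconstructed samples above and to the left of the current block, then picking the candidate offset whose reference template matches best. The sketch below illustrates that idea only; it is not code from the application or from LI, and the function name, parameters, and grayscale NumPy frames are all assumptions for illustration.

```python
import numpy as np

def tm_refine_mv(cur_pic, ref_pic, x, y, bw, bh, mv, search_range=4, t=2):
    """Illustrative template-matching vector refinement (hypothetical API).

    The template is the t-sample-thick L-shape of reconstructed samples
    above and to the left of the bw x bh block at (x, y). Assumes
    x, y >= t + search_range so every slice stays in bounds.
    """
    # Current template: strip above the block (including the corner)
    # and strip to its left, taken from the current picture.
    top = cur_pic[y - t:y, x - t:x + bw]
    left = cur_pic[y:y + bh, x - t:x]

    def template_cost(dx, dy):
        # SAD between the current template and the reference template
        # at the candidate position mv + (dx, dy).
        rx, ry = x + mv[0] + dx, y + mv[1] + dy
        r_top = ref_pic[ry - t:ry, rx - t:rx + bw]
        r_left = ref_pic[ry:ry + bh, rx - t:rx]
        return int(np.abs(top - r_top).sum() + np.abs(left - r_left).sum())

    # Exhaustive search over the offset window; keep the lowest-cost offset.
    best = min(((template_cost(dx, dy), (mv[0] + dx, mv[1] + dy))
                for dy in range(-search_range, search_range + 1)
                for dx in range(-search_range, search_range + 1)),
               key=lambda c: c[0])
    return best[1]  # refined vector with the lowest template cost
```

Real codecs constrain the search pattern and reuse partial costs, but the cost function and L-shaped template are the essence of the scheme quoted above.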

Prosecution Timeline

Jan 13, 2025
Application Filed
Mar 05, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597264
Method for Calibrating an Assistance System of a Civil Motor Vehicle
2y 5m to grant • Granted Apr 07, 2026
Patent 12598318
METHOD AND SYSTEM-ON-CHIP FOR PERFORMING MEMORY ACCESS CONTROL WITH LIMITED SEARCH RANGE SIZE DURING VIDEO ENCODING
2y 5m to grant • Granted Apr 07, 2026
Patent 12593018
SYSTEM AND METHOD FOR CONTROLLING PERCEPTUAL THREE-DIMENSIONAL ELEMENTS FOR DISPLAY
2y 5m to grant • Granted Mar 31, 2026
Patent 12593036
METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL
2y 5m to grant • Granted Mar 31, 2026
Patent 12591123
METHOD FOR DETERMINING SLOPE OF SLIDE IN SLIDE SCANNING DEVICE, METHOD FOR CONTROLLING SLIDE SCANNING DEVICE AND SLIDE SCANNING DEVICE USING THE SAME
2y 5m to grant • Granted Mar 31, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 86% (+11.8%)
Median Time to Grant: 2y 8m
PTA Risk: Low

Based on 614 resolved cases by this examiner. Grant probability derived from career allow rate.
