Prosecution Insights
Last updated: April 19, 2026
Application No. 19/009,881

METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING

Non-Final OA — §102, §103
Filed: Jan 03, 2025
Examiner: NAWAZ, TALHA M
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: Bytedance Inc.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 89% (538 granted / 604 resolved; +31.1% vs TC avg) — grants above average
Interview Lift: -0.8% (minimal; based on resolved cases with interview)
Typical Timeline: 2y 3m avg prosecution; 29 cases currently pending
Career History: 633 total applications across all art units
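The headline figures above follow from the raw counts by simple arithmetic. A minimal sketch, using the counts shown on this page (538 granted of 604 resolved, 88% with an interview); the helper names and the 88.8% "without interview" figure are illustrative assumptions, not values from the underlying dataset:

```python
# Sketch: reproduce the headline examiner statistics from raw counts.
# Counts are the ones shown above; function names are illustrative.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point change in allowance rate when an interview is held."""
    return rate_with - rate_without

career = allow_rate(538, 604)
print(f"Career allow rate: {career:.1f}%")  # -> 89.1%, displayed as 89%

# The page reports 88% with an interview and a -0.8 point lift, implying
# roughly 88.8% without (rounded figures, so this is approximate).
print(f"Interview lift: {interview_lift(88.0, 88.8):+.1f} pts")
```

The page's 89% is the rounded display value; the lift is a percentage-point difference, not a ratio.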

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 48.1% (+8.1% vs TC avg)
§102: 24.9% (-15.1% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 604 resolved cases.
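The implied Tech Center baseline can be recovered from each pair of figures above by subtracting the "vs TC avg" delta from the examiner's rate. A minimal sketch with the numbers shown on this page (the dict layout is illustrative, not the tool's data format):

```python
# Sketch: recover the implied Tech Center baseline per statute
# (examiner rate minus "vs TC avg" delta = TC average).
stats = {
    "§101": (7.2, -32.8),
    "§103": (48.1, +8.1),
    "§102": (24.9, -15.1),
    "§112": (11.0, -29.0),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_avg}%")
# For the figures shown, each implied baseline comes out to 40.0%.
```

That every statute implies the same 40.0% baseline suggests the page applies a single TC-wide estimate rather than per-statute averages.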

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application discloses and claims only subject matter disclosed in the prior application, and names the inventor or at least one joint inventor named in the prior application. Accordingly, this application may constitute a continuation or divisional. Should applicant desire to claim the benefit of the filing date of the prior application, attention is directed to 35 U.S.C. 120, 37 CFR 1.78, and MPEP § 211 et seq. The presentation of a benefit claim may result in an additional fee under 37 CFR 1.17(w)(1) or (2) being required, if the earliest filing date for which benefit is claimed under 35 U.S.C. 120, 121, 365(c), or 386(c) and 1.78(d) in the application is more than six years before the actual filing date of the application.

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/19/2025 and 01/03/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 4-7, and 11-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zheng et al. (US20120219064) (hereinafter Zheng).

Regarding claim 1, Zheng discloses a method for video processing, comprising: determining, for a conversion between a current video block of a video and a bitstream of the video, a plurality of co-located frames of the current video block, the current video block being in a current frame co-located with the plurality of co-located frames [0036, 0051-0057, 0063; coding information from a bitstream of video data with a plurality of frames including neighboring frames]; and performing the conversion based on the plurality of co-located frames [0068-0075, 0096; performing coding based on video data including reference lists including frame data].

Regarding claim 2, Zheng discloses wherein performing the conversion comprises: determining at least one motion vector (MV) associated with at least one of the plurality of co-located frames; and performing the conversion based on the at least one MV [0033-0036, 0047-0054; neighboring video data including motion vectors].

Regarding claim 4, Zheng discloses wherein the plurality of co-located frames comprises reconstructed frames in a reference list [Figs. 4-7, 0049-0057, 0069-0075; neighboring video data including reference lists].

Regarding claim 5, Zheng discloses wherein the plurality of co-located frames is selected from at least one reference list associated with the current frame [0069-0075; neighboring and current video block data including reference lists].
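The reference-list selection of co-located frames mapped to claims 4 and 5 above (extended to picking same-index frames from two lists, as in claim 6) can be sketched as follows. This is a minimal illustration, not the application's or Zheng's actual implementation; the `Frame` structure, default index, and POC-based deduplication are assumptions:

```python
# Sketch: pick the frame at the same index from each of two reference
# lists; keep both as co-located frames only if they are distinct.
from dataclasses import dataclass

@dataclass(frozen=True)
class Frame:
    poc: int  # picture order count

def colocated_frames(list0: list[Frame], list1: list[Frame], index: int = 0) -> list[Frame]:
    """Pick the index-th frame from each reference list, deduplicated by POC."""
    picked: list[Frame] = []
    for ref_list in (list0, list1):
        if index < len(ref_list):
            frame = ref_list[index]
            if all(frame.poc != f.poc for f in picked):
                picked.append(frame)
    return picked

# Two lists whose first entries differ (different POC values, cf. claim 7):
l0 = [Frame(poc=8), Frame(poc=4)]
l1 = [Frame(poc=16), Frame(poc=4)]
print([f.poc for f in colocated_frames(l0, l1)])  # -> [8, 16]
```

With identical lists the POC check collapses the result to a single frame, mirroring the requirement that the two same-index reference frames be different for both to count.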
Regarding claim 6, Zheng discloses wherein the plurality of co-located frames comprises a first reference frame with a first index in a first reference list and a second reference frame with the first index in a second reference list, the first and second reference frames are different [0049-0057, 0069-0075; neighboring video data including reference lists].

Regarding claim 7, Zheng discloses wherein a first picture order count (POC) value of the first reference frame is different from a second POC value of the second reference frame [Figs. 4-7, 0065-0068; POC of plurality of frames as part of coded data].

Regarding claim 11, Zheng discloses wherein contents in a first reference list of the current frame are the same as contents in a second reference list of the current frame, and the plurality of co-located frames is selected from one of the first or second reference list [Figs. 4-7, 0069-0075; frame data and respective reference lists utilized in coding process].

Regarding claim 12, Zheng discloses further comprising: determining a set of temporal motion vector predictions (TMVPs) based on at least a part of the plurality of co-located frames; adding the set of TMVPs in a joint candidate group, the joint candidate group further comprising a candidate of a further candidate type; and determining a merge candidate list from the joint candidate group based on a metric [0032-0036, 0047, 0055, 0067-0075; candidate lists with a variety of metrics including specified temporal or spatial distance of candidates].
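The joint-candidate-group idea in claim 12 (pool TMVP candidates with candidates of other types, then build a merge list by ranking on a metric) can be sketched as below. The metric used here (a generic ascending score such as temporal distance) and the dict layout are illustrative assumptions, not the claimed or cited metric:

```python
# Sketch: pool TMVP and other-type candidates into a joint group,
# rank by a metric (ascending = better), and truncate to the list size.
def build_merge_list(tmvp_candidates, other_candidates, max_size=6):
    """Rank the joint candidate group by metric and truncate."""
    joint = tmvp_candidates + other_candidates  # joint candidate group
    joint.sort(key=lambda cand: cand["metric"])
    return joint[:max_size]

tmvps = [{"mv": (3, -1), "metric": 2}, {"mv": (0, 5), "metric": 1}]
others = [{"mv": (4, 4), "metric": 3}]
merge_list = build_merge_list(tmvps, others, max_size=2)
print([c["mv"] for c in merge_list])  # -> [(0, 5), (3, -1)]
```

A real codec would also prune duplicate motion vectors before truncating; that step is omitted here for brevity.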
Regarding claim 13, Zheng discloses further comprising: determining a first temporal motion vector prediction (TMVP) from a first co-located frame of the plurality of co-located frames; determining a second TMVP from a second co-located frame of the plurality of co-located frames; and determining a prediction of the current video block based on the first and second TMVPs [0032-0036, 0047, 0055, 0067-0075; candidate lists with a variety of metrics including specified temporal or spatial distance of candidates].

Regarding claim 14, Zheng discloses wherein determining the prediction comprises: determining the prediction based on an average or a weighted average of the first and second TMVPs [0032-0036, 0047, 0055, 0067-0075; candidate lists with a variety of weighted metrics including specified temporal or spatial distance of candidates].

Regarding claim 15, Zheng discloses further comprising: determining a motion vector (MV) or a motion vector prediction (MVP) of the current video block based on an average or a weighted average of the first and second TMVPs [0032-0036, 0047, 0055, 0067-0075; candidate lists with a variety of weighted metrics including specified temporal or spatial distance of candidates].

Regarding claim 16, Zheng discloses wherein the conversion includes encoding the current video block into the bitstream [0032-0036, 0047, 0055, 0067-0075; coding video data from bitstream].

Regarding claim 17, Zheng discloses wherein the conversion includes decoding the current video block from the bitstream [0032-0036, 0047, 0055, 0067-0075; coding video data from bitstream].
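The averaging step recited in claims 13-15 (a plain or weighted average of two TMVPs from two different co-located frames) can be sketched as follows. Integer MV components, the equal-weight default, and rounding behavior are illustrative assumptions, not details from the application or the cited art:

```python
# Sketch: plain or weighted average of two TMVP motion vectors,
# rounded back to integer components.
def weighted_tmvp(mv1, mv2, w1=0.5, w2=0.5):
    """Weighted average of two motion vectors (x, y) tuples."""
    total = w1 + w2
    return (
        round((w1 * mv1[0] + w2 * mv2[0]) / total),
        round((w1 * mv1[1] + w2 * mv2[1]) / total),
    )

print(weighted_tmvp((4, -2), (8, 6)))              # plain average -> (6, 2)
print(weighted_tmvp((4, -2), (8, 6), w1=3, w2=1))  # 3:1 weighting -> (5, 0)
```

Real codecs typically work in fractional-pel units and use fixed-point shifts instead of floating-point division; the float form here is only for readability.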
Regarding claim 18, Zheng discloses an apparatus for video processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to perform: determine, for a conversion between a current video block of a video and a bitstream of the video, a plurality of co-located frames of the current video block, the current video block being in a current frame co-located with the plurality of co-located frames [0036, 0051-0057, 0063; CRM with memory and processor for coding information from a bitstream of video data with a plurality of frames including neighboring frames]; and perform the conversion based on the plurality of co-located frames [0068-0075, 0096; performing coding based on video data including reference lists including frame data].

Regarding claim 19, Zheng discloses a non-transitory computer-readable storage medium storing instructions that cause a processor to perform: determine, for a conversion between a current video block of a video and a bitstream of the video, a plurality of co-located frames of the current video block, the current video block being in a current frame co-located with the plurality of co-located frames [0036, 0051-0057, 0063; CRM with memory and processor for coding information from a bitstream of video data with a plurality of frames including neighboring frames]; and perform the conversion based on the plurality of co-located frames [0068-0075, 0096; performing coding based on video data including reference lists including frame data].
Regarding claim 20, Zheng discloses a non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises: determining a plurality of co-located frames of a current video block of the video, the current video block being in a current frame co-located with the plurality of co-located frames [0036, 0051-0057, 0063; CRM with memory and processor for coding information from a bitstream of video data with a plurality of frames including neighboring frames]; and generating the bitstream based on the plurality of co-located frames [0032-0036, 0047, 0055, 0067-0075; coding video data from bitstream].

Additionally, regarding claim 20, claim 20 recites a product-by-process claim limitation where the product is the bitstream and the process is the method steps to generate the bitstream. MPEP §2113 recites "Product-by-Process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps". Thus, the scope of the claim is the storage medium storing the bitstream (with the structure implied by the method steps). The structure includes the information and samples manipulated by the steps. "To be given patentable weight, the printed matter and associated product must be in a functional relationship. A functional relationship can be found where the printed matter performs some function with respect to the product to which it is associated". MPEP §2111.05(I)(A). When a claimed computer-readable medium "merely serves as a support for information or data, no functional relationship exists." MPEP §2111.05(III). The memory storing the claimed bitstream in claim 20 merely serves as a support for the storage of the bitstream and provides no functional relationship between the stored bitstream and the storage medium.
Therefore, the bitstream, whose scope is implied by the method steps, is non-functional descriptive material and is given no patentable weight. MPEP §2111.05(III). Thus, the claim scope is just a storage medium storing data and is anticipated by Zheng, which discloses a storage medium storing a bitstream. Zheng discloses a bitstream of compressed video data, including a non-transitory computer readable storage medium storing the compressed video data [0036, 0053-0054; encoder including memory for storing bitstream data].

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US20120219064) in view of Kang et al. (US20190364298) (hereinafter Kang).

Regarding claim 3, Zheng discloses the limitations of the claim. However, Zheng does not explicitly disclose wherein the plurality of co-located frames comprises reconstructed frames in a decoding picture buffer (DPB). Kang more explicitly discloses wherein the plurality of co-located frames comprises reconstructed frames in a decoding picture buffer (DPB) [0143; storing decoded video data in buffer]. It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Zheng with the teachings of Kang as stated above. By incorporating the teachings as such, improved coding efficiency is achieved (see Kang 0006-0013).

Regarding claim 8, Zheng discloses the limitations of the claim. However, Zheng does not explicitly disclose wherein determining the plurality of co-located frames comprises: for a reference list in a plurality of candidate reference lists, selecting at least one candidate reference frame from the reference list based on indexes of reference frames in the reference list; and adding the at least one candidate reference frame into the plurality of co-located frames based on a comparison between the at least one candidate reference frame and a further co-located frame in the plurality of co-located frames.
Kang discloses wherein determining the plurality of co-located frames comprises: for a reference list in a plurality of candidate reference lists, selecting at least one candidate reference frame from the reference list based on indexes of reference frames in the reference list; and adding the at least one candidate reference frame into the plurality of co-located frames based on a comparison between the at least one candidate reference frame and a further co-located frame in the plurality of co-located frames [0122-0127, 0154, 0168-0175; an adder generating a reconstructed block by adding a residual block]. It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Zheng with the teachings of Kang as stated above. By incorporating the teachings as such, improved coding efficiency is achieved (see Kang 0006-0013).

Regarding claim 9, Zheng discloses the limitations of the claim. However, Zheng does not explicitly disclose wherein determining the plurality of co-located frames comprises: adding a reference frame in a reference list into the plurality of co-located frames. Kang discloses wherein determining the plurality of co-located frames comprises: adding a reference frame in a reference list into the plurality of co-located frames [0122-0127, 0154, 0168-0175, 0180, 0220; an adder generating a reconstructed block by adding a residual block]. It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Zheng with the teachings of Kang as stated above. By incorporating the teachings as such, improved coding efficiency is achieved (see Kang 0006-0013).

Regarding claim 10, Zheng discloses the limitations of the claim.
However, Zheng does not explicitly disclose wherein determining the plurality of co-located frames comprises: selecting a reference list from at least one reference list of the current frame; and adding a reference frame in the selected reference list into the plurality of co-located frames. Kang discloses wherein determining the plurality of co-located frames comprises: selecting a reference list from at least one reference list of the current frame; and adding a reference frame in the selected reference list into the plurality of co-located frames [0122-0127, 0154, 0168-0175, 0180, 0220; an adder generating a reconstructed block by adding a residual block]. It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Zheng with the teachings of Kang as stated above. By incorporating the teachings as such, improved coding efficiency is achieved (see Kang 0006-0013).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TALHA M NAWAZ, whose telephone number is (571) 270-5439. The examiner can normally be reached on a flex schedule, M-R 6:30am-3:30pm; F 8:30am-12:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joe G Ustaris, can be reached at 571-272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TALHA M NAWAZ/
Primary Examiner, Art Unit 2483

Prosecution Timeline

Jan 03, 2025
Application Filed
Feb 06, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593023 — Electronic Device with Reliable Passthrough Video Fallback Capability and Hierarchical Failure Detection Scheme — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587631 — Motion Dependent Display — Granted Mar 24, 2026 (2y 5m to grant)
Patent 12587673 — METHOD FOR DECODER-SIDE MOTION VECTOR DERIVATION USING SPATIAL CORRELATION — Granted Mar 24, 2026 (2y 5m to grant)
Patent 12581024 — IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD — Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573203 — MEDICAL OBSERVATION SYSTEM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD — Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 88% (-0.8%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 604 resolved cases by this examiner. Grant probability derived from career allow rate.
