Prosecution Insights
Last updated: April 19, 2026
Application No. 19/043,168

METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING

Non-Final OA §102
Filed
Jan 31, 2025
Examiner
SENFI, BEHROOZ M
Art Unit
2482
Tech Center
2400 — Computer Networks
Assignee
Bytedance Inc.
OA Round
1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 83% (858 granted / 1039 resolved), above average, +24.6% vs TC avg
Interview Lift: +10.1% among resolved cases with interview (moderate lift)
Typical Timeline: 2y 10m average prosecution; 20 applications currently pending
Career History: 1059 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 42.6% (+2.6% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 1039 resolved cases
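The headline figures on this page are simple ratios over the examiner's career record. A minimal sketch of the arithmetic, assuming (as the footnotes state) that grant probability is the career allow rate and that the with-interview figure adds the interview lift:

```python
# Examiner career totals taken from the dashboard figures above
granted = 858
resolved = 1039
total_applications = 1059

# Career allow rate: granted / resolved -> displayed rounded as 83%
allow_rate = granted / resolved

# Currently pending: total filed minus resolved -> 20
pending = total_applications - resolved

# With-interview probability: allow rate plus the +10.1% interview lift,
# displayed rounded as 93%
interview_lift = 0.101
with_interview = allow_rate + interview_lift

print(f"allow rate: {allow_rate:.1%}, pending: {pending}, "
      f"with interview: {with_interview:.1%}")
```

This is only the arithmetic implied by the dashboard's own footnotes; the vendor's actual model may weight cases differently.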

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim 20 is directed to a non-transitory computer-readable recording medium storing a bitstream. The body of the claim appears to indicate how the bitstream is processed. These elements or steps are not performed by an intended computer, and the bitstream is not a form of programming that causes functions to be performed by an intended computer. The computer-readable medium therefore merely serves to support and/or store the bitstream/data, and provides no functional relationship between the elements of the claim and an intended computer system. Those claim elements are accordingly not given patentable weight, and the claim as a whole is considered a storage medium, such as memory for storing a bitstream, and can therefore be rejected under anticipation; see below.

Claim Rejections - 35 USC § 102

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

5. Claims 1-2, 6, 10-11, and 13-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wang et al. (US 2025/0287017).

As for claim 20, in view of the above 112 rejection, it is noted that the claim as a whole is considered a storage medium for storing a bitstream; Wang therefore discloses a recording medium to store a bitstream (e.g., paragraphs 0009-0012).

Regarding claim 1, Wang discloses a method for video processing, comprising: determining, for a conversion between a current video block of a video and a bitstream of the video, a first intra prediction sample of the current video block (e.g., figs. 1-4, abstract, paragraphs 0005-0006; intra prediction units are also disclosed throughout the disclosure); determining an adjustment value associated with a further video block in the video (e.g., figs. 44a-44b; also residual generation unit 207), the further video block being coded before the current video block (e.g., paragraphs 0092, 0094); and performing the conversion based on the first intra prediction sample and the adjustment value (e.g., figs. 2-4 and 49-50, paragraphs 0540+).

Regarding claim 2, Wang discloses the method of claim 1, wherein the conversion includes decoding the current video block from the bitstream (e.g., figs. 2-4), and wherein performing the conversion comprises: determining an updated residue value by applying at least one of an inverse transform or a dequantization to a residue value in the bitstream (e.g., figs. 2-4, paragraphs 0081, 0085, 0095, 0097, 0099, etc.); determining a second intra prediction sample based on the first intra prediction sample and the adjustment value (e.g., figs. 2-4, paragraphs 0094-0095, 0112, 0371, 0558, etc.); determining a reconstructed sample of the current video block based on the second intra prediction sample and the updated residue value (e.g., figs. 2-4, paragraphs 0081, 0099, 0104, 0114, 0371, 0539); and decoding the current video block based on the reconstructed sample (e.g., the decoding operation disclosed throughout the disclosure; also figs. 2-4).

Regarding claim 6, Wang discloses the method of claim 1, wherein the conversion includes encoding the current video block into the bitstream (e.g., fig. 2), and wherein performing the conversion comprises: determining a second intra prediction sample based on the first intra prediction sample and the adjustment value (see claim 2 above); determining a residue value based on a sample in the bitstream and the second intra prediction sample (e.g., the residual generation unit disclosed throughout the disclosure); determining an updated residue value by applying at least one of a quantization or a transformation to the residue value (e.g., figs. 2 and 4); and including the updated residue value in the bitstream (e.g., figs. 2 and 4).

Regarding claim 10, Wang discloses the method of claim 1, wherein determining the adjustment value based on the further video block comprises: determining a prediction of the further video block (e.g., figs. 2 and 4); determining a sample value of the further video block (e.g., figs. 44a-46); and determining the adjustment value based on the prediction and the sample value of the further video block, wherein the further video block comprises a pseudo block, and/or wherein the prediction of the further video block comprises an intra prediction of the further video block (e.g., the limitation is in an alternative format; fig. 28, paragraphs 0131-0132, 0277-0286).
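The decode path recited in claim 2 and the adjustment derivation of claim 10 reduce to simple sample arithmetic. The sketch below is a hypothetical illustration of the claim language only, not the cited reference's implementation; all function names are invented:

```python
def adjustment_value(further_prediction, further_sample):
    # One reading of claim 10: the adjustment is the prediction error
    # observed on the previously coded ("further") block.
    return further_sample - further_prediction

def decode_block(pred1, adjustment, coded_residue, dequantize, inverse_transform):
    # Claim 2: updated residue via dequantization and/or inverse transform
    updated_residue = inverse_transform(dequantize(coded_residue))
    # Second intra prediction sample: first prediction refined by the adjustment
    pred2 = pred1 + adjustment
    # Reconstructed sample: refined prediction plus updated residue
    return pred2 + updated_residue

# Toy usage: a scalar "dequantizer" and an identity "inverse transform"
adj = adjustment_value(further_prediction=98, further_sample=100)   # 2
recon = decode_block(pred1=100, adjustment=adj, coded_residue=8,
                     dequantize=lambda r: r * 4,
                     inverse_transform=lambda r: r)                 # 134
```

The encode side of claim 6 is the mirror image: subtract the refined prediction from the source sample, then quantize/transform the residue into the bitstream.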
Regarding claim 11, Wang discloses the method of claim 10, wherein the intra prediction of the further video block is based on a reference sample of the further video block, the reference sample being adjacent or non-adjacent to the further video block, and/or wherein the intra prediction of the further video block is of at least one of: a Planar intra prediction mode, a DC intra prediction mode, an angular intra prediction mode, a wide angle intra prediction mode, a position dependent intra prediction combination (PDPC) mode, a multiple reference line (MRL) mode, an intra sub-partition (ISP) mode, a matrix weighted intra prediction (MIP) mode, a cross-component linear model (CCLM) mode, or an intra fusion mode, and/or wherein determining the intra prediction of the further video block comprises: applying an intra prediction mode to the further video block, the intra prediction mode being applied to the current video block (e.g., figs. 2 and 4, paragraphs 0371, 0374, 0559).

Regarding claim 13, Wang discloses the method of claim 1, wherein the further video block and the current video block are in a same video region or different video regions, wherein a video region comprises at least one of: a picture, a slice, or a tile (e.g., figs. 2-4, 9, 18-19; also paragraphs 0076, 0082, 0085-0087, etc.), and/or wherein the further video block is in a reference picture of the current video block, and/or wherein the further video block and the current video block have a same width and a same height (e.g., the above limitations are in an alternative format, and figs. 9, 19, 21, 25 meet one of the above alternatives).
Regarding claim 14, Wang discloses the method of claim 1, further comprising at least one of: determining the further video block based on a block vector (BV) associated with the current video block; or determining location information of the further video block, wherein determining the location information of the further video block comprises: determining the location information based on template matching (e.g., figs. 18b-19, 21, 25, 33, 44a-45, paragraph 0224; also section 2.23).

Regarding claim 15, Wang discloses the method of claim 1, wherein a reconstructed sample associated with the further video block is in at least one of: the further video block, or a reference sample of the further video block, the reference sample being adjacent or non-adjacent to the further video block, and/or wherein a process is previously applied to the reference sample of the further video block, and/or wherein the method further comprises: determining the reference sample of the further video block; and applying the process to the reference sample of the further video block, wherein the process comprises at least one of: a filtering process, or a value mapping process, wherein the value mapping process comprises a luma mapping with chroma scaling process (e.g., the above limitations are in an alternative format, and figs. 2-4, paragraphs 0085, 0099-0100, 0109, 0120, 0135, 0371 meet one of the above alternatives).
Regarding claim 16, Wang discloses the method of claim 1, wherein at least one of the further video block or a reference sample of the further video block is in a video region, the reference sample being adjacent or non-adjacent to the further video block, wherein the video region is not overlapped with the current video block, and/or wherein the video region is inside at least one of: a picture, a slice, or a tile containing the current video block, and/or wherein the video region is in at least one predefined coding tree unit (CTU) or at least one CTU line, and/or wherein the video region is a valid region of a prediction block for intra block copy (IBC) (e.g., the above limitations are in an alternative format, and figs. 42-43, paragraphs 0057, 0108, 0355, 0356, 0525, etc. meet one of the above alternatives).

Regarding claim 17, Wang discloses the method of claim 1, wherein location information of the further video block is included in the bitstream, wherein the location information comprises at least one of: a block vector (BV) associated with the current video block, a prediction of the BV, or a BV difference (BVD) associated with the current video block, and/or wherein the method further comprises: determining the BV from a plurality of candidate BVs of the current video block, the plurality of candidate BVs being in a merge mode, wherein a coding tool for including the BV in the bitstream is used for including a further BV for an intra block copy (IBC) mode in the bitstream, and/or wherein a first arithmetic coding context model is used for the further video block, and a second arithmetic coding context model is used for the IBC mode, the first arithmetic coding context model and the second arithmetic coding context model being different (e.g., the above limitations are in an alternative format, and figs. 18b-19, 21, 42, 44a-44b, abstract, paragraphs 0005, 0082, 0088, 0089, 0092, 0130, 0357-0358, etc. meet one of the above alternatives).
Claims 18-20 are substantially similar to claim 1 above, and therefore the claims are rejected for the same reasons as set forth for claim 1.

Allowable Subject Matter

6. Claims 3-5, 7-9, and 12 are objected to as being dependent upon a rejected base claim, but appear to be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and after overcoming the above 112 rejection.

Conclusion

7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Li et al. (US 2020/0304826); Li et al. (US 2022/0007024).

Contact Information

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Behrooz Senfi, whose telephone number is (571) 272-7339. The examiner can normally be reached Monday-Friday, 10:00-6:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEHROOZ M SENFI/
Primary Examiner, Art Unit 2482

Prosecution Timeline

Jan 31, 2025
Application Filed
Mar 30, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581050
OPTICAL ASSEMBLIES FOR MACHINE VISION CALIBRATION
2y 5m to grant Granted Mar 17, 2026
Patent 12574493
DISPLAY DEVICE
2y 5m to grant Granted Mar 10, 2026
Patent 12568287
GENERATING THREE-DIMENSIONAL VIDEOS BASED ON TEXT USING MACHINE LEARNING MODELS
2y 5m to grant Granted Mar 03, 2026
Patent 12563170
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND DISPLAY DEVICE
2y 5m to grant Granted Feb 24, 2026
Patent 12556676
IMAGE SENSOR, CAMERA AND IMAGING SYSTEM WITH TWO OR MORE FOCUS PLANES
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 93% (+10.1%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 1039 resolved cases by this examiner. Grant probability derived from career allow rate.
