Prosecution Insights
Last updated: April 19, 2026
Application No. 18/913,882

METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING

Final Rejection — §102, §103
Filed: Oct 11, 2024
Examiner: WALKER, JARED T
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: Bytedance Inc.
OA Round: 2 (Final)
Grant Probability: 84% (Favorable)
OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 94%

Examiner Intelligence

Grants 84% — above average

Career Allow Rate: 84% (414 granted / 490 resolved; +26.5% vs TC avg)
Interview Lift: +10.0% (moderate lift, measured across resolved cases with interview)
Typical Timeline: 2y 4m avg prosecution; 18 applications currently pending
Career History: 508 total applications across all art units
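The headline figures in this section reduce to simple arithmetic on the examiner's career record. A minimal sketch follows; the additive treatment of the interview lift and the rounding to whole percentages are assumptions about how the dashboard combines its figures:

```python
# Career allow rate from the examiner's resolved cases.
granted, resolved = 414, 490
allow_rate = granted / resolved              # ~0.845, displayed as 84%

# Assumption: the +10.0% interview lift is added as percentage points.
interview_lift = 0.10
with_interview = min(allow_rate + interview_lift, 1.0)

print(f"career allow rate: {allow_rate:.0%}")      # 84%
print(f"with interview:    {with_interview:.0%}")  # 94%
```

Under this additive reading, the 94% "With Interview" figure shown in the projections is simply the career allow rate plus ten percentage points.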

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 58.1% (+18.1% vs TC avg)
§102: 19.3% (-20.7% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 490 resolved cases
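The per-statute deltas can be sanity-checked with a short sketch, assuming each "vs TC avg" figure is simply the examiner's rejection rate minus the Tech Center baseline. Under that reading, every row implies the same 40.0% baseline, which is consistent with the single black-line estimate described in the caption:

```python
# (examiner rejection rate %, stated delta vs TC average %) per statute
rows = {
    "101": (5.1, -34.9),
    "103": (58.1, +18.1),
    "102": (19.3, -20.7),
    "112": (11.1, -28.9),
}

for statute, (rate, delta) in rows.items():
    implied_baseline = round(rate - delta, 1)  # baseline implied by the delta
    print(f"section {statute}: implied TC average = {implied_baseline}%")
```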

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 19 February 2026 have been fully considered but they are not persuasive. Regarding the arguments on pages 6-7, it is stated that "From the above disclosure of Li, it can be clearly seen that the triangle flag is used to indicate whether the current coding unit uses the triangle mode. That means, the triangle_flag of Li is an enablement flag for the triangle mode. However, the triangle mode (i.e., the triangle prediction unit mode) disclosed in Li is used to predict CU by splitting the CU into two triangular prediction units. This predicting process has nothing to do with 'adjustment of samples of the current video block' as recited in claim 1. Therefore, the triangle flag used to indicate whether the current coding unit uses triangle mode, as disclosed by Li, is totally different from 'a syntax element associated with adjustment of samples of the current video block' as recited in claim 1."

However, the examiner disagrees and asserts that Li's teaching of a triangle mode teaches the adjustment of samples of the current block. If a unit is coded using triangle mode, the current block is divided (adjustment of samples) into triangles [88, 94, 113, 125-126]. In contrast, if the block is not coded in triangle mode, the current coding units would remain as square units. Therefore, the rejection, as set forth in the previous office action, stands.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 2, 4, 5, 6, 7, 8, 16, 17, 18, 19, and 20 is/are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Li (US 2020/0154101).

Regarding claim 1, Li meets the claim limitations, as follows: A method for video processing, comprising: performing a conversion between a current video block of a video and a bitstream of the video (i.e. encoder converts video into a bitstream) [38, 49], wherein a syntax element associated with adjustment of samples of the current video block is coded with a bypass mode or a context model (i.e. triangle flag used to signal context model) [88, 94, 113, 125-126].

Regarding claim 2, Li meets the claim limitations, as follows: The method of claim 1, wherein the adjustment comprises at least one of the following: reordering the samples, flipping the samples, shifting the samples, rotating the samples, or transforming the samples (i.e. coding order can be signaled and samples can be shifted) [116, 140-142].
Regarding claim 4, Li meets the claim limitations, as follows: The method of claim 1, wherein the context model is determined based on at least one neighboring video block of the current video block (i.e. context model can be chosen based on statistics of neighboring coding information) [113].

Regarding claim 5, Li meets the claim limitations, as follows: The method of claim 4, wherein the context model is determined based on coding information of the at least one neighboring video block (i.e. context model can be chosen based on statistics of neighboring coding information) [113].

Regarding claim 6, Li meets the claim limitations, as follows: The method of claim 4, wherein the at least one neighboring video block comprises at least one of the following: an above neighboring video block of the current video block, a left neighboring video block of the current video block, an above-right neighboring video block of the current video block, or a left-bottom neighboring video block of the current video block (i.e. context model can be chosen based on statistics of above and left neighboring coding information) [113].

Regarding claim 7, Li meets the claim limitations, as follows: The method of claim 1, wherein an index of the context model is determined based on coding information of an above neighboring video block of the current video block and coding information of a left neighboring video block of the current video block (i.e. context model can be chosen based on statistics of above and left neighboring coding information and this determines the index) [113].

Regarding claim 8, Li meets the claim limitations, as follows: The method of claim 5, wherein the coding information comprises at least one of the following: an availability, a prediction mode, a prediction scheme, or a scheme for adjusting samples (i.e. context model of neighboring blocks would be part of the prediction scheme) [113].
Regarding claim 16, Li meets the claim limitations, as follows: The method of claim 1, wherein the conversion includes encoding the current video block into the bitstream (i.e. encoder encodes the video into a bitstream) [58; fig. 4].

Regarding claim 17, Li meets the claim limitations, as follows: The method of claim 1, wherein the conversion includes decoding the current video block from the bitstream (i.e. decoder decodes the bitstream into a video) [86; fig. 6].

Claims 18, 19, and 20 are rejected using similar rationale as claim 1.

Claim(s) 20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lee et al. (US 2021/0227222) (hereinafter Lee).

In regard to claim 20, claim 20 is directed to a non-transitory computer-readable medium having stored therein a bitstream generated by acts. Significantly, the claimed non-transitory computer-readable medium is NOT implementing any actual method; no instructions/steps are being executed. Instead, the claimed storage medium merely stores the data output from and/or generated by a series of acts. In other words, these claims are directed to a mere machine-readable medium storing data content (a bitstream generated by a method). Applicant therefore seeks to patent the storage of a bitstream in the abstract. In other words, the claim seeks to patent the content of the information (bitstream comprising video information) and not the process itself. Moreover, this stored bitstream does not impose any definitive physical organization on the data as there is no functional relationship between the bitstream and the storage medium.

In conclusion, claim 20 and any claims depending therefrom are directed to mere data content (a bitstream generated by a series of acts) stored as a bitstream on a computer-readable storage medium. Under MPEP 2111.05(III), such claims are merely machine-readable media.
Furthermore, the Examiner found and continues to find that there is no disclosed or claimed functional relationship between the stored data and medium. Instead, the medium is merely a support or carrier for the data being stored. Therefore, the data stored and the way such data is generated should not be given patentable weight. See MPEP 2111.05 applying In re Lowry, 32 F.3d 1579, 1583-84, 32 USPQ2d 1031, 1035 (Fed. Cir. 1994) and In re Ngai, 367 F.3d 1336, 70 USPQ2d 1862 (Fed. Cir. 2004). As such, this claim is subject to a prior art rejection based on any non-transitory computer readable medium known before the earliest effective filing date of the present application.

Therefore, claim 20 is anticipated by Lee, as Lee discloses a computer readable medium storing a coded bitstream. Lee discloses: a non-transitory computer readable storage medium having stored therein a bitstream comprising video information generated by acts [¶0024; computer-readable recording medium storing a bitstream generated by a video coding method] comprising:

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Huang (US 2021/0314598).
Regarding claim 3, Li does not explicitly disclose the following claim limitations: wherein the syntax element indicates at least one of the following: whether the samples of the current video block are adjusted, or how to adjust the samples of the current video block.

However, in the same field of endeavor Huang discloses the deficient claim limitations, as follows: wherein the syntax element indicates at least one of the following: whether the samples of the current video block are adjusted, or how to adjust the samples of the current video block (i.e. syntax elements specify max number of subblocks) [135; fig. 7].

It would have been obvious to one with ordinary skill in the art at the time of filing to modify the teachings of Li with Huang to have the syntax element indicate at least one of the following: whether the samples of the current video block are adjusted, or how to adjust the samples of the current video block. It would be advantageous because "it may be desirable to limit a number of candidates included in the list of subblock based merge candidates" [5]. Therefore, it would have been obvious to one with ordinary skill in the art at the time of filing to modify the teachings of Li with Huang to obtain the invention as specified in claim 3.

Claim(s) 9-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Kang (US 2021/0266531).
Regarding claim 9, Li does not explicitly disclose the following claim limitations: wherein if samples of a further video block of the video different from the current video block are disallowed to be adjusted, a further syntax element associated with adjustment of the samples of the further video block is not indicated in the bitstream for the further video block, or if the samples of the further video block are allowed to be adjusted, the further syntax element is indicated in the bitstream for the further video block.

However, in the same field of endeavor Kang discloses the deficient claim limitations, as follows: wherein if samples of a further video block of the video different from the current video block are disallowed to be adjusted, a further syntax element associated with adjustment of the samples of the further video block is not indicated in the bitstream for the further video block, or if the samples of the further video block are allowed to be adjusted, the further syntax element is indicated in the bitstream for the further video block (i.e. Another aspect of the present disclosure relates to a technique for signaling of high-level syntaxes for controlling on/off of various tools described above. The above-described affine motion prediction, sample-by-sample adjustment for affine motion prediction samples, adaptive motion vector resolution, and illumination compensation are coding tools used to improve the video encoding efficiency. However, for specific content such as, for example, screen content, the aforementioned various coding tools may not contribute to improving compression performance. Accordingly, a coding unit based signaling of whether to apply each coding tool or a coding unit based decision of whether to apply each coding tool may rather degrade coding efficiency or increase computational complexity. The present disclosure provides a signaling technique for efficiently controlling the above-described coding tools.) [199, 204].

It would have been obvious to one with ordinary skill in the art at the time of filing to modify the teachings of Li with Kang to have the following: if samples of a further video block of the video different from the current video block are disallowed to be adjusted, a further syntax element associated with adjustment of the samples of the further video block is not indicated in the bitstream for the further video block, or if the samples of the further video block are allowed to be adjusted, the further syntax element is indicated in the bitstream for the further video block. It would be advantageous because "The present disclosure provides a signaling technique for efficiently controlling the above-described coding tools" [199]. Therefore, it would have been obvious to one with ordinary skill in the art at the time of filing to modify the teachings of Li with Kang to obtain the invention as specified in claim 9.

Regarding claim 10, Kang meets the claim limitations, as follows: The method of claim 1, wherein coding information of the current video block is stored for coding a target video block of the video different from the current video block (i.e. coding information from a neighboring block is used to decode a current block) [47].

Regarding claim 11, Kang meets the claim limitations, as follows: The method of claim 10, wherein the current video block is neighboring to the target video block, or wherein the current video block is adjacent to the target video block, or wherein the current video block is at a left side of the target video block, or the current video block is at a top side of the target video block, or the current video block is at a top-right side of the target video block, or the current video block is at a left-bottom side of the target video block, or wherein the current video block is non-adjacent to the target video block (i.e. context model can be chosen based on statistics of above and left neighboring coding information. This would be done for the current block as well and would be used for blocks that are coded later. If the blocks are processed in a normal order, either the top or left block would be the previously coded block to the current block. Fig. 8 shows an example where the neighboring blocks are to the left or top of the current block.) [113; fig. 8].

Regarding claim 12, Kang meets the claim limitations, as follows: The method of claim 10, wherein the target video block is coded with samples of the target video block being adjusted, or wherein the target video block is coded without samples of the target video block being adjusted, or wherein the current video block is coded with the samples of the current video block being adjusted, or wherein the current video block is coded without the samples of the current video block being adjusted (i.e. Another aspect of the present disclosure relates to a technique for signaling of high-level syntaxes for controlling on/off of various tools described above. The above-described affine motion prediction, sample-by-sample adjustment for affine motion prediction samples, adaptive motion vector resolution, and illumination compensation are coding tools used to improve the video encoding efficiency. However, for specific content such as, for example, screen content, the aforementioned various coding tools may not contribute to improving compression performance. Accordingly, a coding unit based signaling of whether to apply each coding tool or a coding unit based decision of whether to apply each coding tool may rather degrade coding efficiency or increase computational complexity. The present disclosure provides a signaling technique for efficiently controlling the above-described coding tools.) [199, 204].
Regarding claim 13, Kang meets the claim limitations, as follows: The method of claim 10, wherein the coding information of the current video block comprises at least one of the following: a dimension of the current video block, a width of the current video block, a height of the current video block, a coordinate of a top-left position of the current video block, a coordinate of a center position of the current video block, information regarding whether the samples of the current video block are adjusted, information regarding how to adjust the samples of the current video block, or motion information for the current video block (i.e. Another aspect of the present disclosure relates to a technique for signaling of high-level syntaxes for controlling on/off of various tools described above. The above-described affine motion prediction, sample-by-sample adjustment for affine motion prediction samples, adaptive motion vector resolution, and illumination compensation are coding tools used to improve the video encoding efficiency. However, for specific content such as, for example, screen content, the aforementioned various coding tools may not contribute to improving compression performance. Accordingly, a coding unit based signaling of whether to apply each coding tool or a coding unit based decision of whether to apply each coding tool may rather degrade coding efficiency or increase computational complexity. The present disclosure provides a signaling technique for efficiently controlling the above-described coding tools.) [199, 204].

Regarding claim 14, Kang meets the claim limitations, as follows: The method of claim 13, wherein the motion information comprises at least one of the following: a motion vector for the current video block, a block vector for the current video block, or a reference index for the current video block (i.e. reference index used for coding and decoding) [69].
Regarding claim 15, Kang meets the claim limitations, as follows: The method of claim 10, wherein the coding information of the current video block is stored in at least one buffer, or wherein the coding information of the current video block is stored in a history motion vector table (i.e. buffer stores the video sequence which would contain coding information) [66].

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JARED T WALKER whose telephone number is (571) 272-1839. The examiner can normally be reached M-F: 7:00 - 3:00 Mountain.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nasser Goodarzi, can be reached on 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jared Walker/
Primary Examiner, Art Unit 2426

Prosecution Timeline

Oct 11, 2024 — Application Filed
Nov 13, 2025 — Non-Final Rejection — §102, §103
Feb 19, 2026 — Response Filed
Mar 24, 2026 — Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586380
IMAGE ANALYSIS DEVICE, IMAGE ANALYSIS METHOD FOR TELECOMMUTING WORK SECURITY AND TERMINAL DEVICE INCLUDING THE SAME
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12581178
Camera Assembly Arrangement for Vehicle Rear View Cover and Rear View Device Therewith
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12563304
MEASUREMENT DEVICE, MEASUREMENT METHOD, PROGRAM
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12555383
VIDEO SURVEILLANCE SYSTEM FOR CAMERA-RICH AREAS
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12556718
ELECTRONIC DEVICE AND METHOD WITH IMAGE ENCODING AND DECODING
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 94% (+10.0%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate

Based on 490 resolved cases by this examiner. Grant probability derived from career allow rate.
