Prosecution Insights
Last updated: April 19, 2026
Application No. 18/286,209

METHOD, DEVICE, AND MEDIUM FOR VIDEO PROCESSING

Status: Non-Final OA (§103)
Filed: Oct 09, 2023
Examiner: MIKESKA, NEIL R
Art Unit: 2485
Tech Center: 2400 (Computer Networks)
Assignee: Bytedance Inc.
OA Round: 3 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 74% (363 granted / 491 resolved), +15.9% vs TC average (above average)
Interview Lift: +7.0% (moderate), comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 7m (typical timeline); 7 applications currently pending
Career History: 498 total applications across all art units
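As an illustrative sketch only (the analytics tool's actual methodology is not published here), the headline figures above are consistent with simple ratios over the examiner's resolved cases:

```python
# Illustrative reconstruction of the dashboard figures above.
# Assumes "Career Allow Rate" = granted / resolved, and "Interview Lift"
# = (with-interview grant rate) - (baseline grant rate). These formulas
# are assumptions, not the tool's documented method.
granted, resolved = 363, 491
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 73.9%, displayed as 74%

baseline, with_interview = 0.74, 0.81
lift = with_interview - baseline
print(f"Interview lift: {lift:+.1%}")          # +7.0%
```

The same arithmetic underlies the "81% With Interview" projection: the 74% career baseline plus the +7.0% interview lift.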

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§102: 28.1% (-11.9% vs TC avg)
§112: 4.6% (-35.4% vs TC avg)
Based on career data from 491 resolved cases; deltas are relative to the Tech Center average estimate.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Applicant's response filed 21 Nov 2025 leaves claims 72-92 pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 72-78 and 89-92 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2023/0015638) in view of Chang (US 2021/0314623).

For claims 72, 90, and 91, Lee discloses a method of processing video data, comprising: during a conversion between a current video block of a video and a bitstream of the video ([0056] Encoder: means an apparatus performing encoding. That is, means an encoding apparatus. [0057] Decoder: means an apparatus performing decoding. That is, means a decoding apparatus.), obtaining a geometric partitioning mode (GPM) block associated with the current video block ([0060] The unit may have various sizes and forms, and particularly, the form of the unit may be a two-dimensional geometrical figure such as a square shape, a rectangular shape, a trapezoid shape, a triangular shape, a pentagonal shape, etc.). Lee does not expressly disclose performing the conversion based on a motion-compensated prediction sample refinement process applied to the GPM block.

Chang teaches performing the conversion based on a motion-compensated prediction sample refinement process applied to the GPM block ([0130] e.g. sps_gpm_enabled_flag specifies whether geometric partition based motion compensation can be used for inter prediction). It would have been obvious to a person of ordinary skill in the art to combine the geometric partition based motion compensation teachings of Chang with the teachings of Lee for the predictable improvement of providing an encoded representation of the video data conforming to a video coding standard.

For claim 73, Lee discloses wherein performing the conversion comprises: applying the motion-compensated prediction sample refinement process for at least one prediction sample of the GPM block by at least one technique comprising: an overlapped block-based motion compensation, a multi-hypothesis prediction, a local illumination compensation, a combined inter-intra prediction, or a bi-directional optical-flow based motion refinement ([0209] The derivation method of the motion information may be different depending on the prediction mode of the current block. For example, a prediction mode applied for inter prediction includes an AMVP mode, a merge mode, a skip mode, a merge mode with a motion vector difference, a subblock merge mode, a geometric partitioning mode, a combined inter intra prediction mode, an affine mode, and the like. Herein, the merge mode may be referred to as a motion merge mode.).
For claim 74, Lee discloses wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the overlapped block-based motion compensation comprises: refining the at least one prediction sample by using a neighboring block's motion information with a weighted prediction;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the multi-hypothesis prediction comprises: weighting the at least one prediction sample by accumulating more than one prediction signal from multiple hypothetical motion data;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the local illumination compensation comprises: compensating illumination change for the at least one prediction sample by using a linear model;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the combined inter-intra prediction comprises: refining the at least one prediction sample by an intra-prediction;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the bi-directional optical-flow based motion refinement comprises: in accordance with a determination that a bi-prediction is used, performing a pixel-wise motion refinement on top of a block-wise motion compensation;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the bi-directional optical-flow based motion refinement comprises: in accordance with a determination that two motion vectors of two parts of the GPM block are from two different directions, performing the bi-directional optical-flow based motion refinement;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the overlapped block-based motion compensation comprises: performing the overlapped block-based motion compensation for all subblocks of the GPM block;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the overlapped block-based motion compensation comprises: performing the overlapped block-based motion compensation for a portion of subblocks of the GPM block or the at least one sample of the GPM block;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the overlapped block-based motion compensation comprises: performing the overlapped block-based motion compensation for at least one subblock of the GPM block at block boundaries of the GPM block;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the overlapped block-based motion compensation comprises: performing the overlapped block-based motion compensation for the at least one prediction sample at block boundaries of the GPM block;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the overlapped block-based motion compensation comprises: applying the overlapped block-based motion compensation based on a reference subblock based motion data of the GPM block and a neighboring GPM block;

or wherein applying the motion-compensated prediction sample refinement process for the at least one prediction sample of the GPM block by the overlapped block-based motion compensation comprises: applying the overlapped block-based motion compensation based on motion data derived from GPM merge candidates ([0295] Whether the current block is a geometric partitioning mode may be determined depending on one or more inter prediction modes. [0296] For example, when at least one or all of a subblock merge mode, a regular merge mode or a combined intra inter mode is not performed with respect to the current block, the inter prediction mode of the current block may be determined as a geometric partitioning mode. That is, when all of a subblock merge flag, a regular merge flag and a combined intra inter prediction flag indicate not to perform (merge_subblock_flag=0 && regular_merge_flag=0 && ciip_flag=0), the inter prediction mode of the current block may be determined as a geometric partitioning mode, and at least one of a partitioning index, a merge list index or a weight may be signaled.).

For claim 75, Lee discloses wherein applying the overlapped block-based motion compensation based on the reference subblock based motion data comprises: determining blending weights of the overlapped block-based motion compensation based on motion similarities between the reference subblock based motion of a GPM subblock of the GPM block and motion of neighbor subblocks of the neighboring GPM block ([0134] In general, mode selection unit 202 also controls the components thereof (e.g., motion estimation unit 222, motion compensation unit 224, and intra-prediction unit 226) to generate a prediction block for a current block (e.g., a current CU, or in HEVC, the overlapping portion of a PU and a TU).).

For claim 76, Lee discloses the method of claim 72, wherein the method further comprises: determining whether a feature or tool is to be applied on top of the GPM block based on a temporal layer identifier (ID) of a current picture among a structure of a group of pictures (GOP) ([0347] The above embodiments of the present invention may be applied depending on a temporal layer. In order to identify a temporal layer to which the above embodiments may be applied, a corresponding identifier may be signaled, and the above embodiments may be applied to a specified temporal layer identified by the corresponding identifier); and in accordance with a determination that a current picture locates at pre-defined layer identifiers, applying the feature or tool to the GPM block without an additional signalling;

or wherein the method further comprises: determining whether a feature or tool is to be applied on top of the GPM block based on a temporal layer identifier (ID) of a current picture among a structure of a group of pictures (GOP); and in accordance with a determination that a signalling indicating layer identifiers of pictures associated with the GPM block to be applied with the feature or tool is obtained, applying the feature or tool on the GPM block ([0347] Herein, the identifier may be defined as the lowest layer or the highest layer or both to which the above embodiment may be applied, or may be defined to indicate a specific layer to which the embodiment is applied. In addition, a fixed temporal layer to which the embodiment is applied may be defined.).

For claim 77, Lee discloses the method of claim 76, wherein the feature or tool is applied based on one technique comprising: a merge mode with motion vector differences, an overlapped block-based motion compensation, a multi-hypothesis prediction, a local illumination compensation, a combined inter-intra prediction, a non-adjacent spatial merge candidate, or a decoder side motion refinement or derivation ([0296] For example, when at least one or all of a subblock merge mode, a regular merge mode or a combined intra inter mode is not performed with respect to the current block, the inter prediction mode of the current block may be determined as a geometric partitioning mode).
For claim 78, Lee discloses the method of claim 72, further comprising: applying a motion vector difference (MVD) to at least one portion of merge candidates of the GPM block if the MVD is allowed to be used for the GPM block (GMVD) ([0223] The decoding apparatus 200 may correct the derived motion information by itself. The decoding apparatus 200 may search the predetermined region on the basis of the reference block indicated by the derived motion information and derive the motion information having the minimum SAD as the corrected motion information. [0293] The geometric partitioning mode may mean a mode in which each motion information is derived by partitioning a current block, each prediction sample is derived using the derived motion information and a prediction sample of the current block is derived by weighted-summing the derived prediction samples. Here, the geometric partitioning mode may mean a mode in which prediction is performed by partitioning a current block into asymmetric subblocks.).

For claim 89, Lee discloses the method of claim 72, wherein the conversion comprises decoding the current video block from the bitstream of the video, or wherein the conversion comprises encoding the current video block into the bitstream of the video ([0307] FIG. 13 is a view illustrating an image decoding method according to an embodiment of the present invention.).

For claim 92, Lee discloses the method of claim 72, further comprising: storing the bitstream in a non-transitory computer-readable recording medium ([0307] FIG. 13 is a view illustrating an image decoding method according to an embodiment of the present invention.).

Allowable Subject Matter

Claims 79-88 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments

Applicant's arguments filed 22 Oct 2025 have been fully considered but they are not persuasive. Applicant argues that Chang fails to disclose or suggest the claimed limitations of a "motion-compensated prediction sample refinement process" and "performing the conversion based on a motion-compensated prediction sample refinement process applied to the GPM block" because Chang ". . . is completely silent about refining a result of the geometric partition based motion compensation." However, the broadly claimed "performing the conversion based on a motion-compensated prediction sample refinement process applied to the GPM block" does not exclude Chang's teachings of performing the conversion based on a motion-compensated prediction sample refinement process ([0130]: e.g. "inter prediction" describes a motion-compensated prediction sample refinement process) applied to the GPM block ([0130] e.g. sps_gpm_enabled_flag specifies whether geometric partition based blocks are used). Accordingly, Chang teaches the claimed limitation.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Deshpande; Sachin G., US 20210329303 A1, SYSTEMS AND METHODS FOR SIGNALING DECODING CAPABILITY INFORMATION IN VIDEO CODING
SAMUELSSON; Jonatan et al., US 20210321138 A1, SYSTEMS AND METHODS FOR SIGNALING SCALING WINDOW INFORMATION IN VIDEO CODING
LIAO; Ruling et al., US 20210051335 A1, BLOCK PARTITIONING METHODS FOR VIDEO CODING
BORDES; Philippe et al., US 20220060701 A1, METHOD AND APPARATUS FOR DEBLOCKING AN IMAGE
LEE; Ha Hyun et al., US 20220174286 A1, IMAGE ENCODING/DECODING METHOD AND APPARATUS, AND RECORDING MEDIUM FOR STORING BITSTREAM
Huang; Han et al., US 20200344486 A1, SIZE CONSTRAINT FOR TRIANGULAR PREDICTION UNIT MODE
Chen; Chun-Chia et al., US 20210160528 A1, Selective Switch For Parallel Processing
Chang; Yao-Jen et al., US 20210314623 A1, GENERAL CONSTRAINT INFORMATION SYNTAX IN VIDEO CODING

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NEIL MIKESKA, whose telephone number is (571) 272-3917. The examiner can normally be reached M-F, 6a-2p. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at (571) 272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NEIL R MIKESKA/
Primary Examiner, Art Unit 2485

Prosecution Timeline

Oct 09, 2023: Application Filed
Mar 04, 2025: Non-Final Rejection (§103)
Jun 09, 2025: Response Filed
Aug 20, 2025: Final Rejection (§103)
Oct 22, 2025: Response after Non-Final Action
Nov 21, 2025: Request for Continued Examination
Dec 06, 2025: Response after Non-Final Action
Jan 23, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604017: ENCODING METHOD, ENCAPSULATION METHOD, DISPLAY METHOD, APPARATUS, AND ELECTRONIC DEVICE (2y 5m to grant; granted Apr 14, 2026)
Patent 12604003: METHODS AND APPARATUS OF ENCODING/DECODING VIDEO PICTURE PARTITIONED IN CTU GRIDS (2y 5m to grant; granted Apr 14, 2026)
Patent 12587687: HIGH-LEVEL SYNTAX DESIGN FOR GEOMETRY-BASED POINT CLOUD COMPRESSION (2y 5m to grant; granted Mar 24, 2026)
Patent 12581071: INTERACTION OF MULTIPLE PARTITIONS (2y 5m to grant; granted Mar 17, 2026)
Patent 12563192: CONSTRAINTS ON PARTITIONING OF VIDEO BLOCKS (2y 5m to grant; granted Feb 24, 2026)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 81% (+7.0%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 491 resolved cases by this examiner; grant probability derived from career allow rate.
