Prosecution Insights
Last updated: April 19, 2026
Application No. 18/961,050

SCHEMES FOR ADJUSTING ADAPTIVE RESOLUTION FOR MOTION VECTOR DIFFERENCE

Non-Final OA (§103)
Filed: Nov 26, 2024
Examiner: HABIB, IRFAN
Art Unit: 2485
Tech Center: 2400 (Computer Networks)
Assignee: Tencent America LLC
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 88% (637 granted / 721 resolved), +30.3% vs TC avg; grants above average
Interview Lift: +7.8% (moderate), measured across resolved cases with interview
Typical Timeline: 2y 2m average prosecution; 36 applications currently pending
Career History: 757 total applications across all art units
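
The headline figures above follow directly from the raw counts reported on this page; a quick arithmetic check:

```python
# Sanity-check the dashboard's headline examiner statistics from the raw
# counts reported above. All inputs come straight from this page.
granted = 637
resolved = 721
total_applications = 757

allow_rate = granted / resolved            # career allow rate
pending = total_applications - resolved    # applications not yet resolved

print(f"Career allow rate: {allow_rate:.1%}")  # 88.3%, displayed as 88%
print(f"Currently pending: {pending}")         # 36, matching the card above
```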

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 70.0% (+30.0% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 3.6% (-36.4% vs TC avg)
Tech Center average shown as the comparison baseline. Based on career data from 721 resolved cases.

Office Action (§103)
DETAILED ACTION

1. This office action is in response to U.S. Patent Application No. 18/961,050, filed on 11/26/2024 with effective filing date 10/21/2021. Claims 1-20 are pending.

Claim Rejections - 35 USC § 103

2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

4. Claims 1-3 and 6-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2021/0352314 A1) in view of Chen et al. (US 2018/0098089, cited in IDS).

Per claims 1, 11 & 17: Zhang et al. discloses a method of video decoding, the method comprising: receiving a video bitstream that includes a current frame composed of a plurality of video blocks, the plurality of video blocks including a current block; determining that the current block is inter-coded based on a prediction block and a motion vector (MV) (para. 83, e.g., if the inter prediction is uni-directional, the signaled distance offset is applied on the offset direction for each control point predictor; the results will be the MV value of each control point), wherein the MV is derived from a reference motion vector (RMV) and a motion vector difference (MVD) for the current block (paras. 73 & 441, e.g., making (1002) a decision regarding applying a merge with motion vector difference (MMVD) mode to a current block of video based on a set of MMVD side information, wherein the current block is split into at least two partitions; and performing (1004) a conversion between the current block of the video and a bitstream representation of the video using the MMVD mode); and when the MVD is coded with an adaptive MVD mode: identifying a maximum allowed MVD value for the current block (para. 110, e.g., for a CU that has at least one non-zero MVD component, a first flag is signaled to indicate whether quarter luma sample MV precision is used in the CU; when the first flag (equal to 1) indicates that quarter luma sample MV precision is not used, another flag is signaled to indicate whether integer luma sample MV precision or four luma sample MV precision is used).

Zhang et al. fails to explicitly disclose the remaining claim limitations. Chen et al., however, in the same field of endeavor, teaches determining an adaptive MVD value for the current block based on the maximum allowed MVD value (paras. 72-74, e.g., resolution selection unit 48 may be configured to compare an error difference (e.g., the difference between a reconstructed block and the original block) between using a one-quarter-pixel precision motion vector to encode a block and using a one-eighth-pixel precision motion vector to encode the block; when the difference exceeds a threshold, resolution selection unit 48 may select the one-eighth-pixel precision motion vector for encoding the block) and deriving the MVD for the current block with a first precision according to at least one MVD parameter signaled in the video bitstream for the current block and the adaptive MVD value (paras. 61-62, e.g., video decoder 30 may use to determine the selected motion vector precision… video encoder 20 and/or video decoder 30 may be configured to determine a motion vector difference (MVD) for a current block of video data using one of a plurality of MVD precisions, including a larger than one integer pixel MVD precision (e.g., 2, 3, 4, or greater pixel precision), and code the MVD for the current block of video data).

Therefore, in view of the disclosures by Chen et al., it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Zhang et al. and Chen et al. in order to receive an encoded block of video data that is encoded using an inter-prediction mode, receive syntax elements indicating a motion vector difference (MVD) associated with the encoded block of video data, and determine the current MVD precision from among the MVD precisions.

Per claims 2, 12 & 18: Chen et al. further teaches the method of claim 1, wherein, when the MVD is coded with the adaptive MVD, a ⅛-pel MVD value is disallowed (para. 74, e.g., when the difference exceeds a threshold, resolution selection unit 48 may select the one-eighth-pixel precision motion vector for encoding the block).

Per claims 3, 13 & 19: Chen et al. further teaches the method of claim 1, further comprising determining an MV class for the MV from a predefined set of MV classes, wherein the maximum allowed MVD value is based on the MV class (para. 74, e.g., when the difference exceeds a threshold, resolution selection unit 48 may select the one-eighth-pixel precision motion vector for encoding the block).

Per claims 6 & 14: Chen et al. further teaches the method of claim 1, wherein the at least one MVD parameter is signaled at a frame level (para. 171, e.g., video encoder 20 may signal such syntax elements in, e.g., the PPS or slice header, to indicate that only a portion of all allowed MVD precisions are used for the blocks in a picture or slice).

Per claims 7, 15 & 20: Chen et al. further teaches the method of claim 1, further comprising determining a magnitude of the MVD, wherein the maximum allowed MVD value for the current block depends on the magnitude of the MVD (para. 172, e.g., information of the set of allowed MVD precisions that are used for a picture, and/or a slice, and/or a tile, and/or a sequence may be implicitly derived without signaling; the derivation of the set of MVD precisions may depend on the sequence resolution, quantization parameters, coding modes, and temporal level of a picture).

Per claims 8 & 16: Chen et al. further teaches the method of claim 1, wherein the maximum allowed MVD value is predefined (para. 74, e.g., when the difference exceeds a threshold, resolution selection unit 48 may select the one-eighth-pixel precision motion vector for encoding the block).

Per claim 9: Chen et al. further teaches the method of claim 8, wherein the maximum allowed MVD value is ¼ pel (para. 74, e.g., when the difference exceeds a threshold, resolution selection unit 48 may select the one-eighth-pixel precision motion vector for encoding the block).

Per claim 10: Chen et al. further teaches the method of claim 1, further comprising determining whether the MVD is coded with the adaptive MVD mode based on a flag signaled in the video bitstream (para. 147, e.g., a flag/value may be used to indicate the motion vector precision, such as integer precision, half-pixel precision, quarter-pixel precision, or other precisions; when motion vector precision is signaled for one block or one region/slice, all smaller blocks within this block/region/slice may share the same motion vector precision).

Allowable Subject Matter

4. Claims 4-5 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Furht et al., US 11,477,469 B2: a method includes receiving a reference frame, determining, for a current block, a scaling constant, determining a scaled reference block using the reference frame and the scaling constant, determining a scaled prediction block using the scaled reference block, and reconstructing pixel data of the current block using the rescaled prediction block. Related apparatus, systems, techniques, and articles are also described.
Deng et al., US 11,418,794: restrictions on motion vector difference are disclosed. One example method of video processing includes: determining, for a conversion between a first block of video and a bitstream representation of the first block, a range of the motion vector difference (MVD) component associated with the first block, wherein the range of the MVD component is [-2^M, 2^M - 1], where M = 17; constraining the value of the MVD component to be in that range; and performing the conversion based on the constrained MVD component.
Liu et al., US 11,330,289: a method for video processing is provided. The method includes determining that a conversion between a current video block of a video and a coded representation of the current video block is based on a non-affine inter AMVR mode, and performing the conversion based on the determining, wherein the coded representation of the current video block is based on context-based coding, and wherein a context used for coding the current video block is modeled without using affine AMVR mode information of a neighboring block during the conversion.

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRFAN HABIB, whose telephone number is (571) 270-7325. The examiner can normally be reached Mon-Thu, 9 AM-7 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at (571) 272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Irfan Habib/
Examiner, Art Unit 2485
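
For readers less familiar with the claimed subject matter, the adaptive-MVD steps the rejection maps onto the references (identify a maximum allowed MVD value for the block, then derive the MVD at a signaled precision) can be sketched as a toy model. Everything below is a hypothetical illustration with invented names and values; it is not the claimed algorithm or either cited reference's actual implementation:

```python
# Illustrative sketch only: a toy model of the adaptive-MVD flow recited in
# claim 1. A signaled integer magnitude is scaled by the signaled precision
# (in pel units), then clamped to the maximum allowed MVD value. All names
# and numbers are hypothetical.

def derive_mvd(signaled_magnitude: int,
               precision_pel: float,
               max_allowed_pel: float) -> float:
    """Scale the signaled magnitude by the precision, then clamp it
    to the maximum allowed MVD value for the current block."""
    mvd = signaled_magnitude * precision_pel
    return min(mvd, max_allowed_pel)

# A magnitude of 5 signaled at quarter-pel precision, capped at 1 pel:
print(derive_mvd(5, 0.25, 1.0))  # 1.0 (5 * 0.25 = 1.25, clamped to 1.0)
```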

Prosecution Timeline

Nov 26, 2024: Application Filed
Feb 07, 2026: Non-Final Rejection (§103)
Apr 07, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593047: METHOD AND APPARATUS FOR IMAGE ENCODING AND DECODING USING TEMPORAL MOTION INFORMATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12569313: HANDS-FREE CONTROLLER FOR SURGICAL MICROSCOPE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12568241: IMPROVEMENT OF BI-PREDICTION WITH CU LEVEL WEIGHT (BCW) (granted Mar 03, 2026; 2y 5m to grant)
Patent 12568198: 3D DISPLAY METHOD AND 3D DISPLAY DEVICE (granted Mar 03, 2026; 2y 5m to grant)
Patent 12563216: METHODS AND DEVICES FOR ENHANCING BLOCK ADAPTIVE WEIGHTED PREDICTION WITH BLOCK VECTOR (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 96% (+7.8%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 721 resolved cases by this examiner; grant probability derived from career allow rate.
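
The projection figures are internally consistent if the interview lift is treated as additive in percentage points, which is an assumption on our part (the dashboard does not state its model), though its own numbers support it:

```python
# Check that the "With Interview" figure matches the base grant probability
# plus the reported interview lift, assuming the lift is additive in
# percentage points (an assumption; the page does not state its model).
base_probability = 88.0   # grant probability, %
interview_lift = 7.8      # reported interview lift, percentage points

with_interview = base_probability + interview_lift
print(f"With interview: {round(with_interview)}%")  # 96%
```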
