Prosecution Insights
Last updated: April 19, 2026
Application No. 19/026,115

AFFINE MOTION VECTOR PREDICTOR AND AFFINE MERGED MOTION VECTOR BY USING LOOKAHEAD/LOOKBEHIND MOTION VECTOR

Non-Final OA §103
Filed
Jan 16, 2025
Examiner
NAVAS JR, EDEMIO
Art Unit
2483
Tech Center
2400 — Computer Networks
Assignee
Tencent America LLC
OA Round
1 (Non-Final)
71%
Grant Probability
Favorable
1-2
OA Rounds
2y 9m
To Grant
96%
With Interview

Examiner Intelligence

Grants 71% — above average
71%
Career Allow Rate
384 granted / 540 resolved
+13.1% vs TC avg
Strong +25% interview lift
+24.7%
Interview Lift
resolved cases with interview vs. without
Typical timeline
2y 9m
Avg Prosecution
31 currently pending
Career history
571
Total Applications
across all art units

Statute-Specific Performance

§101
3.2%
-36.8% vs TC avg
§103
60.1%
+20.1% vs TC avg
§102
23.5%
-16.5% vs TC avg
§112
8.2%
-31.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 540 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9, 11-17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. ("Zhang") (U.S. PG Publication No. 2022/0224897) in view of Kadono et al. ("Kadono") (U.S. PG Publication No. 2015/0110196).

In regards to claim 1, Zhang teaches a method of video decoding, the method comprising: receiving a video bitstream including coded information of a current block in a current picture (See FIG. 36 and 38) and of a plurality of reference pictures of the current block in a reference list (See ¶0065-0066 and 1158 with regards to reference picture lists and their corresponding indices), the coded information indicating that the current block is coded in an affine mode (See ¶0143, 0149-0151 and 0346); determining a first control point motion vector (CPMV) of a first control point of the current block (See ¶0149-0151 and FIG. 10 in view of ¶0121), the first CPMV being associated with a sum of a plurality of intermediate vectors, the plurality of intermediate vectors including an initial CPMV and an intermediate motion vector (MV), the initial CPMV being between the current picture and an initial reference picture of the plurality of reference pictures, and the intermediate MV being between two respective reference pictures of the plurality of reference pictures; and reconstructing the current block based on the first CPMV of the first control point of the current block (See ¶1152 in view of ¶0149-0150).

Zhang, however, fails to teach the first CPMV being associated with a sum of a plurality of intermediate vectors, the plurality of intermediate vectors including an initial CPMV and an intermediate motion vector (MV), the initial CPMV being between the current picture and an initial reference picture of the plurality of reference pictures, and the intermediate MV being between two respective reference pictures of the plurality of reference pictures. In a similar endeavor Kadono teaches the first CPMV being associated with a sum of a plurality of intermediate vectors, the plurality of intermediate vectors including an initial CPMV and an intermediate motion vector (MV), the initial CPMV being between the current picture and an initial reference picture of the plurality of reference pictures, and the intermediate MV being between two respective reference pictures of the plurality of reference pictures (See ¶0072-0073 in view of FIG. 7 wherein an initial motion vector of the current block MB1 of picture 1501 leads to MB2 in reference picture 1500, then an "intermediate" motion vector MV1 is determined from MB2 of reference picture 1500 to reference picture 1503 [and thus being between two respective reference pictures of the plurality of reference pictures], thus adding the two motion vectors together gives the final motion vector MVb of the current block MB1; additional examples of such a technique may be seen in FIG. 1 and 10). It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Kadono into Zhang because it allows for derivation of motion vectors even from a reference block which itself refers to another reference picture, thus allowing for proper continuation of motion vector data across a plurality of reference pictures and consistency via use of the current picture with those reference pictures as seen in at least FIG. 7.

In regards to claim 2, Zhang fails to teach the method of claim 1, wherein the determining comprises: determining the initial CPMV from the first control point in the current picture to a sample in the initial reference picture of the plurality of reference pictures, and determining the intermediate MV from the sample in the initial reference picture of the plurality of reference pictures to a first sample in a second reference picture of the plurality of reference pictures. In a similar endeavor Kadono teaches determining the initial CPMV from the first control point in the current picture to a sample in the initial reference picture of the plurality of reference pictures (See ¶0072-0073 in view of FIG. 7), and determining the intermediate MV from the sample in the initial reference picture of the plurality of reference pictures to a first sample in a second reference picture of the plurality of reference pictures (See ¶0072-0073 in view of FIG. 7).
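The Kadono technique the rejection relies on amounts to vector addition across a chain of reference pictures: the block's final predictor is the initial MV into the first reference picture plus the referenced block's own MV into a further reference picture. A minimal sketch of that chaining, using made-up vector values rather than anything from Kadono or Zhang:

```python
def chain_motion_vectors(vectors):
    """Sum a chain of (dx, dy) motion vectors into a single predictor."""
    dx = sum(v[0] for v in vectors)
    dy = sum(v[1] for v in vectors)
    return (dx, dy)

# Initial CPMV: current block -> a block MB2 in the initial reference picture.
initial_cpmv = (4, -2)
# Intermediate MV: MB2's own motion vector into a second reference picture.
intermediate_mv = (3, 1)

# The final predictor is the vector sum of the two.
final_mv = chain_motion_vectors([initial_cpmv, intermediate_mv])  # (7, -1)
```

The same function extends to deeper chains (more than one intermediate MV) by simply passing a longer list.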
It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Kadono into Zhang because it allows for derivation of motion vectors even from a reference block which itself refers to another reference picture, thus allowing for proper continuation of motion vector data across a plurality of reference pictures and consistency via use of the current picture with those reference pictures as seen in at least FIG. 7.

In regards to claim 3, Zhang teaches the method of claim 2, further comprising: determining a second CPMV of a second control point of the current block, the second CPMV being from the second control point of the current block to a second sample in the second reference picture of the plurality of reference pictures (See ¶0149-0151 in view of FIG. 5 and 10 wherein a second CPMV may be used), and determining a third CPMV of a third control point of the current block, the third CPMV being from the third control point of the current block to a third sample in the second reference picture of the plurality of reference pictures (See ¶0149-0151 in view of FIG. 5 and 10 wherein a third CPMV may be used, additionally various reference pictures may be used).

In regards to claim 4, Zhang teaches the method of claim 1, wherein the plurality of intermediate vectors comprises: a block vector (BV) from the first sample to a second sample in the initial reference picture of the plurality of reference pictures (See ¶0587-0592). Zhang, however, fails to teach the initial CPMV from the first control point in the current picture to a first sample in the initial reference picture of the plurality of reference pictures, and the intermediate MV from the second sample in the initial reference picture of the plurality of reference pictures to a first sample in a second reference picture of the plurality of reference pictures.
In a similar endeavor Kadono teaches the initial CPMV from the first control point in the current picture to a first sample in the initial reference picture of the plurality of reference pictures (See ¶0072-0073 in view of FIG. 7), and the intermediate MV from the second sample in the initial reference picture of the plurality of reference pictures to a first sample in a second reference picture of the plurality of reference pictures (See ¶0072-0073 in view of FIG. 7). It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Kadono into Zhang because it allows for derivation of motion vectors even from a reference block which itself refers to another reference picture, thus allowing for proper continuation of motion vector data across a plurality of reference pictures and consistency via use of the current picture with those reference pictures as seen in at least FIG. 7.

In regards to claim 5, Zhang teaches the method of claim 4, wherein: whether the BV is included in the plurality of intermediate vectors is based on a syntax element included in the coded information (See ¶0560 wherein an IBC flag may set the prediction mode for the current block; ¶0557 then shows that when IBC is applied, a motion vector may be renamed as the block vector [BV], with a list of BV predictors being included as seen in ¶0587-0592), the syntax element being positioned in one of a sequence parameter set (SPS), a picture parameter set (PPS), an adaptation parameter set (APS), a picture header, and a slice header (See the Table of ¶0193 wherein, for example, the ibc enabling flag is part of the sequence parameter set [SPS]).

In regards to claim 6, Zhang teaches the method of claim 1, wherein the first CPMV is one of an affine motion vector predictor (MVP) candidate and an affine merge candidate of the first control point (See ¶0149-0151).
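For background on how control-point MVs drive affine prediction (the CPMVs at issue in claims 3 and 6): in the standard 4-parameter affine model used in modern codecs such as VVC, the MV at any sample position is interpolated from the top-left and top-right CPMVs. This is general background, not code from Zhang:

```python
def affine_mv_4param(v0, v1, w, x, y):
    """Per-sample motion vector from two control-point MVs.

    4-parameter affine model: v0 is the CPMV at the block's top-left
    corner, v1 the CPMV at its top-right corner, w the block width,
    and (x, y) the sample position relative to the top-left corner.
    """
    a = (v1[0] - v0[0]) / w  # combined scale/rotation terms derived
    b = (v1[1] - v0[1]) / w  # from the difference of the two CPMVs
    mv_x = a * x - b * y + v0[0]
    mv_y = b * x + a * y + v0[1]
    return (mv_x, mv_y)

# Sanity check: the model reproduces each CPMV at its own corner.
v0, v1, w = (4.0, -2.0), (6.0, 0.0), 16
assert affine_mv_4param(v0, v1, w, 0, 0) == v0  # top-left corner
assert affine_mv_4param(v0, v1, w, w, 0) == v1  # top-right corner
```

A 6-parameter variant adds a third CPMV at the bottom-left corner, which is what makes the second and third CPMVs of claim 3 meaningful as independent quantities.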
In regards to claim 7, Zhang teaches the method of claim 3, wherein: the method further comprises constructing an affine motion vector predictor (MVP) candidate list that includes a plurality of affine MVP candidates (See ¶0354 wherein the candidate list includes those of ATMVP candidates as well as affine merge candidates, with ¶0137-0138 specifying that the candidates are shared for both modes, thus candidates from one mode may be used with the other), and the plurality of affine MVP candidates includes: one or more affine MVP candidates based on un-scale MVs from spatial coded blocks (See ¶0061 and 0067-0075 wherein merge mode [of which affine is a part] uses spatial motion vector data [from neighbors], temporal motion vector data, as well as other candidates; see ¶0077 for more information specifically on spatial candidate derivation), one or more affine MVP candidates based on un-scale MVs from temporal coded blocks (See ¶0061 and 0067-0075 wherein merge mode [of which affine is a part] uses spatial motion vector data [from neighbors], temporal motion vector data, as well as other candidates; see ¶0078 for more information specifically on temporal candidate derivation), and the first CPMV, the second CPMV, and the third CPMV (See ¶0149-0151 wherein the CPMVs may also be part of the overall candidates, of which it is specifically described that they may be signaled or derived on-the-fly, with there being at least three potential CPMVs which may also be directly seen in FIG. 10).

In regards to claim 9, Zhang teaches the method of claim 1, wherein the reference list is one of a forward reference list and a backward reference list with respect to the current picture (See ¶0213 wherein list0 and list1 reference picture lists are used, one indicating backward reference pictures and the other indicating future reference pictures as per coding standards, with bi-prediction having both available as described in ¶0062-0063).
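The claim 7 mapping concerns an affine MVP candidate list built from spatial candidates, temporal candidates, and constructed CPMV candidates. A hypothetical sketch of that kind of list assembly (the ordering, pruning, and cap are illustrative choices, not taken from Zhang):

```python
def build_affine_mvp_list(spatial, temporal, constructed, max_len=5):
    """Assemble an affine MVP candidate list.

    Spatial candidates first, then temporal, then constructed CPMV
    candidates; duplicates are pruned and the list is capped at max_len.
    """
    out = []
    for cand in list(spatial) + list(temporal) + list(constructed):
        if cand not in out:
            out.append(cand)
        if len(out) == max_len:
            break
    return out

# Candidates shown as plain tuples standing in for CPMV sets.
cands = build_affine_mvp_list(
    spatial=[("A0",), ("B0",)],
    temporal=[("A0",), ("T0",)],  # duplicate of a spatial candidate
    constructed=[("CPMV0", "CPMV1", "CPMV2")],
)
# cands == [("A0",), ("B0",), ("T0",), ("CPMV0", "CPMV1", "CPMV2")]
```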
In regards to claim 11, the claim is rejected under the same basis as claim 1 by Zhang in view of Kadono, wherein the encoder version may be taught as seen in ¶0009, wherein the methods are implemented by a video encoder apparatus, as a decoder acts as the decoding inverse of the encoder as seen in FIG. 36; additionally the lookahead and/or lookbehind MV is taught by Kadono as seen in FIG. 7 and 10.

In regards to claims 12-17 and 19, the claims are rejected under the same basis as claims 1-7 and 9, respectively, by Zhang in view of Kadono.

In regards to claim 20, the claim is rejected under the same basis as claim 1 by Zhang wherein the decoder is described as seen in ¶0008.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. ("Zhang") (U.S. PG Publication No. 2022/0224897) in view of Kadono et al. ("Kadono") (U.S. PG Publication No. 2015/0110196) and Zhang et al. ("Zhang2") (U.S. PG Publication No. 2024/0146908).

In regards to claim 8, Zhang fails to teach the method of claim 7, further comprising: reordering the plurality of affine MVP candidates based on a subblock-level template-matching reordering in which the plurality of affine MVP candidates is reordered based on template costs of subblocks of each of the plurality of affine MVP candidates. In a similar endeavor Zhang2 teaches reordering the plurality of affine MVP candidates based on a subblock-level template-matching reordering in which the plurality of affine MVP candidates is reordered based on template costs of subblocks of each of the plurality of affine MVP candidates (See ¶0443-0445 in view of 0450 and 0457-0459). It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Zhang2 into Zhang because it allows for reorganization according to cost values, thus refining candidates to affine motion vectors as described in at least ¶0474, improving accuracy.
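Claim 8's subblock-level template-matching reordering sorts candidates by the summed template costs of their subblocks, so the cheapest candidate moves to the front of the list. An illustrative sketch (the candidate names and cost values are invented for illustration, not taken from Zhang2):

```python
def reorder_by_template_cost(candidates, subblock_costs):
    """Reorder candidates by ascending total template cost.

    subblock_costs maps each candidate to the per-subblock template-
    matching costs measured for that candidate's affine motion field;
    a candidate's total cost is the sum over its subblocks.
    """
    return sorted(candidates, key=lambda c: sum(subblock_costs[c]))

subblock_costs = {
    "cand_a": [40, 35, 50, 45],  # total 170
    "cand_b": [20, 25, 30, 15],  # total 90 -> best, moves to front
    "cand_c": [60, 10, 20, 30],  # total 120
}
order = reorder_by_template_cost(["cand_a", "cand_b", "cand_c"], subblock_costs)
# order == ["cand_b", "cand_c", "cand_a"]
```

Reordering this way lets the bitstream signal the (usually cheapest) candidate with a shorter index, which is the coding-efficiency rationale behind such schemes.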
In regards to claim 18, the claim is rejected under the same basis as claim 8 by Zhang in view of Kadono and Zhang2.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. ("Zhang") (U.S. PG Publication No. 2022/0224897) in view of Kadono et al. ("Kadono") (U.S. PG Publication No. 2015/0110196) and Lee et al. ("Lee") (U.S. PG Publication No. 2022/0256189).

In regards to claim 10, Zhang fails to teach the method of claim 1, wherein a total number of the plurality of intermediate vectors is defined according to a maximum trace depth. In a similar endeavor Lee teaches wherein a total number of the plurality of intermediate vectors is defined according to a maximum trace depth (Given the broadest reasonable interpretation consistent with applicant's specification, the maximum trace depth is taught as the total/maximum number of vectors; see ¶0011, 0014, 0016 and 0394 wherein a maximum number of candidates for a candidate list is taught). It would have been obvious to a person of ordinary skill in the art, and before the effective filing date of the claimed invention, to incorporate the teaching of Lee into Zhang because it allows for proper processing efficiency by preventing overpopulation of candidate values through a set max amount as seen in at least ¶0011, thus improving overall performance.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDEMIO NAVAS JR whose telephone number is (571) 270-1067. The examiner can normally be reached M-F, ~9 AM-6 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at 571-272-7383.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

EDEMIO NAVAS JR
Primary Examiner
Art Unit 2483

/EDEMIO NAVAS JR/
Primary Examiner, Art Unit 2483

Prosecution Timeline

Jan 16, 2025
Application Filed
Feb 03, 2026
Non-Final Rejection — §103
Apr 14, 2026
Applicant Interview (Telephonic)
Apr 14, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598398
Terminal Detection Platform
2y 5m to grant Granted Apr 07, 2026
Patent 12598283
METHOD AND DISPLAY APPARATUS FOR CORRECTING DISTORTION CAUSED BY LENTICULAR LENS
2y 5m to grant Granted Apr 07, 2026
Patent 12593141
INFORMATION MANAGEMENT DEVICE, INFORMATION MANAGEMENT METHOD, AND STORAGE MEDIUM FOR MANAGING INFORMATION PROVIDED TO A MOBILE OBJECT AND DEVICE USED BY A USER IN LOCATION DIFFERENT FROM THE MOBILE OBJECT
2y 5m to grant Granted Mar 31, 2026
Patent 12587686
SIGNALING FOR GENERAL CONSTRAINT INFORMATION IN VIDEO CODING
2y 5m to grant Granted Mar 24, 2026
Patent 12587643
IMAGE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM IN WHICH BITSTREAM IS STORED FOR BLOCK DIVISION AT PICTURE BOUNDARY
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
71%
Grant Probability
96%
With Interview (+24.7%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 540 resolved cases by this examiner. Grant probability derived from career allow rate.
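The headline figures are simple arithmetic over the examiner's career data. Assuming the "with interview" probability is simply the career allow rate plus the stated interview lift (an assumption about how this dashboard computes it), the displayed numbers check out:

```python
granted, resolved = 384, 540
career_allow_rate = granted / resolved  # ~0.711 -> the 71% shown

interview_lift = 0.247                  # the +24.7% lift
with_interview = career_allow_rate + interview_lift

print(round(career_allow_rate * 100))   # 71
print(round(with_interview * 100))      # 96
```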
