Prosecution Insights
Last updated: April 19, 2026
Application No. 19/027,404

SUB-BLOCK MOTION DERIVATION AND DECODER-SIDE MOTION VECTOR REFINEMENT FOR MERGE MODE

Non-Final OA: §103, §DP
Filed
Jan 17, 2025
Examiner
HALLENBECK-HUBER, JEREMIAH CHARLES
Art Unit
2481
Tech Center
2400 — Computer Networks
Assignee
InterDigital VC Holdings, Inc.
OA Round
1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability With Interview: 82%

Examiner Intelligence

Career Allow Rate: 69%, above average (+11.2% vs TC avg); 456 granted / 659 resolved
Interview Lift: +13.1% (moderate), measured across resolved cases with interview
Avg Prosecution (typical timeline): 3y 5m; 34 applications currently pending
Career History: 693 total applications across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 18.7% (-21.3% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 659 resolved cases

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 23-30 and 32-43 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,212,773. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of the claims of the instant application are entirely encompassed by those of the '773 patent, as mapped below.

Claim mapping (U.S. Patent No. 12,212,773 → instant application):

'773 claims 1-2 → instant claim 23.
'773 claim 1: A device for video decoding, comprising: a processor configured to: obtain a collocated picture associated with a current video block; determine a constrained region in the collocated picture based on a location of the current video block, wherein a size of the constrained region is greater than a size of the current video block; determine a first location associated with a collocated block based on the location of the current video block and a temporal motion vector (MV); determine whether the first location associated with the collocated block is within the constrained region in the collocated picture, wherein, based on the first location associated with the collocated block being outside the constrained region, the processor is configured to determine a second location associated with the collocated block using a clipping operation associated with the constrained region in the collocated picture, wherein, the clipping operation changes the first location associated with the collocated block based on a boundary of the constrained region; obtain, in the collocated picture, the collocated block associated with the current video block at the second location associated with the collocated block; obtain an MV associated with the collocated block; and decode the current video block based on the MV associated with the collocated block.
'773 claim 2: The device of claim 1, wherein the current video block is an advanced temporal motion vector prediction (ATMVP) subblock, and the collocated block is a collocated subblock associated with the ATMVP subblock, and wherein the processor is further configured to: obtain an MV associated with the collocated subblock; and predict the ATMVP subblock based on the MV associated with the collocated subblock, wherein the current video block is decoded based on the prediction of the ATMVP subblock.
Instant claim 23: A device for video decoding, comprising: a processor configured to: obtain a collocated picture associated with a current video block; determine a location associated with a collocated block in the collocated picture based on a location of the current video block and a temporal motion vector (MV), wherein a clipping operation constrains the location within a constrained region in the collocated picture; determine a MV of a collocated subblock in the collocated block; predict a subblock of the current video block based on the MV of the collocated subblock; and decode the current video block based on the predicted subblock.

'773 claim 1 (excerpt) → instant claim 24.
'773 claim 1: "… determine whether the first location associated with the collocated block is within the constrained region in the collocated picture, wherein, based on the first location associated with the collocated block being outside the constrained region, the processor is configured to determine a second location associated with the collocated block using a clipping operation associated with the constrained region in the collocated picture, wherein, the clipping operation changes the first location associated with the collocated block based on a boundary of the constrained region"
Instant claim 24: The device of claim 23, wherein the clipping operation is applied to ensure that the location is within the constrained region.

'773 claim 3 → instant claim 25.
'773 claim 3: The device of claim 1, wherein the processor is further configured: determine a picture order count (POC) difference between the collocated picture and a reference picture of a neighboring block of the current video block; and determine the temporal MV based on the POC difference, wherein the constrained region in the collocated picture is determined further based on the temporal MV.
Instant claim 25: The device of claim 23, wherein the temporal MV is obtained based on a MV of a video block that is adjacent to the current video block.

'773 claim 1 (excerpt) → instant claim 26.
'773 claim 1: "… wherein, the clipping operation changes the first location associated with the collocated block based on a boundary of the constrained region;"
Instant claim 26: The device of claim 23, wherein the clipping operation constrains the location based on a boundary of the constrained region.

'773 claim 1 (excerpt) → instant claim 27.
'773 claim 1: "… wherein a size of the constrained region is greater than a size of the current video block;"
Instant claim 27: The device of claim 23, wherein a size of the constrained region is greater than a size of the current video block.

'773 claim 3 (text reproduced above) → instant claim 28.
Instant claim 28: The device of claim 23, wherein the processor is further configured: determine a picture order count (POC) difference between the collocated picture and a reference picture of a neighboring block of the current video block; and determine the temporal MV based on the POC difference.

'773 claim 4 → instant claim 29.
'773 claim 4: The device of claim 1, wherein a coding tree block (CTU) comprises the current video block, and the constrained region is further determined based on a location of the CTU in the collocated picture.
Instant claim 29: The device of claim 23, wherein a coding tree block (CTU) comprises the current video block, and the constrained region is further determined based on a location of the CTU in the collocated picture.

'773 claim 1 (size excerpt, reproduced above) and claim 4 (text reproduced above) → instant claim 30.
Instant claim 30: The device of claim 23, wherein a coding tree block (CTU) comprises the current video block, and wherein an area of the constrained region is determined to be equal to or greater than an area of the CTU.

'773 claim 1 (clipping excerpt, reproduced above) and claim 2 (text reproduced above) → instant claim 32.
Instant claim 32: The device of claim 23, wherein the processor is further configured to: determine a location of the collocated subblock based on the constrained region; and obtain the collocated subblock at the determined location of the collocated subblock.

'773 claim 1 (excerpt) → instant claim 33.
'773 claim 1: "… determine a first location associated with a collocated block based on the location of the current video block and a temporal motion vector (MV); determine whether the first location associated with the collocated block is within the constrained region in the collocated picture, wherein, based on the first location associated with the collocated block being outside the constrained region, the processor is configured to determine a second location associated with the collocated block using a clipping operation associated with the constrained region in the collocated picture, wherein, the clipping operation changes the first location associated with the collocated block based on a boundary of the constrained region;"
Instant claim 33: The device of claim 23, wherein the location associated with the collocated block is a second location, and the processor is further configured to: determine a first location associated with the collocated block based on the location of the current video block and the temporal MV, wherein the second location is determined based on the first location being outside the constrained region.

Claims 34-42 similarly correspond to claims 9-20 of the '773 patent.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 23-27 and 29-42 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (2013/0163668, hereafter Chen-668) in view of Chen et al. (2015/0085929, hereafter Chen-929).

In regard to claim 23, Chen-668 discloses a device for video decoding comprising: a processor (Chen-668 Fig. 1 and par. 21, note source and destination devices 12 and 14 may be various computing devices which include processors) configured to: obtain a collocated picture associated with a current video block (Chen-668 par. 66, note reference frame identified by calculated motion vectors); determine a location associated with a collocated block in the collocated picture based on a location of the current video block and a temporal motion vector (MV) (Chen-668 pars. 33-34, note motion vectors are used to obtain a collocated CU in a reference frame [collocated frame]; further note Fig. 6 and par. 106, motion vectors for a current block may be obtained from spatially or temporally neighboring blocks), wherein a clipping operation constrains the location within a constrained region in the collocated picture (Chen-668 par. 79, note clipping a motion vector to a horizontal and vertical range of -32768 to 32767 in quarter-pixel resolution, thus providing an area spanning -8192 to 8191 integer pixels).

It is noted that Chen-668 does not disclose details related to subblock prediction. However, Chen-929 discloses a method of predicting subblocks of a current video block including: determining an MV of a collocated subblock in the collocated block (Chen-929 Figs. 14A&B and par. 164, note advanced TMVP (ATMVP) mode that begins by identifying a corresponding block in a reference picture and then obtains motion information of the subblocks within the corresponding block); predicting a subblock of the current video block (Chen-929, note assigning motion information of the reference subblocks to subblocks of the current block); and decoding the current video block based on the MV of the collocated subblock (Chen-929 par. 164, note motion compensating each subblock of the current block; further note Fig. 16 and pars. 325-331, particularly par. 325, for motion compensation during decoding). It is therefore considered obvious that one of ordinary skill in the art before the effective filing date of the invention would recognize the advantage of incorporating an ATMVP prediction mode as taught by Chen-929 in the invention of Chen-668 in order to gain the advantage of separately motion compensating each subblock of a video block as suggested by Chen-929 (Chen-929 par. 164).

In regard to claim 24, refer to the statements made in the rejection of claim 23 above. Chen-668 further discloses that the clipping operation is applied to ensure that the location is within the constrained region (Chen-668 pars. 79-80, note clipping scaled motion vectors to ensure the locations they point to are within displacement limits).

In regard to claim 25, refer to the statements made in the rejection of claim 23 above. Chen-668 further discloses that the temporal MV is obtained based on an MV of a video block that is adjacent to the current video block (Chen-668 par. 106, note obtaining motion vectors from neighboring blocks).

In regard to claim 26, refer to the statements made in the rejection of claim 23 above. Chen-668 further discloses that the clipping operation constrains the location based on a boundary of the constrained region (Chen-668 pars. 79-80, note clipping scaled motion vectors to ensure they are within the boundary formed by displacement limits).

In regard to claim 27, refer to the statements made in the rejection of claim 23 above. Chen-668 and Chen-929 further disclose that a size of the constrained region is greater than a size of the current video block (Chen-668 par. 79, note the constrained region spans -8192 to 8191 integer pixels; further note Chen-929's maximum CU size is 64x64, which is smaller than the constrained region in Chen-668).

In regard to claim 29, refer to the statements made in the rejection of claim 23 above. Chen-668 further discloses that a coding tree block (CTU) comprises the current video block (Chen-668 par. 31, note CU that may be split into sub-CUs as a coding tree unit), and the constrained region is further determined based on a location of the CTU in the collocated picture (Chen-668 par. 34, note MV identifying a CU in a reference frame; further note par. 79, MVs are constrained to be within a range of the same location as the current CU in the reference frame).

In regard to claim 30, refer to the statements made in the rejection of claim 23 above. Chen-668 and Chen-929 further disclose that a CTU comprises the current video block (Chen-668 par. 31, note current block is a CU or a sub-CU), and wherein an area of the constrained region is determined to be equal to or greater than an area of the CTU (Chen-668 par. 79, note the constrained region spans -8192 to 8191 integer pixels; further note Chen-929's maximum CU size is 64x64, which is smaller than the constrained region in Chen-668).

In regard to claim 31, refer to the statements made in the rejection of claim 23 above. Chen-929 further discloses: obtaining motion information associated with the collocated subblock (Chen-929 par. 164, note obtaining motion information from the subblocks of a collocated block); and performing temporal motion vector scaling on the motion information to obtain a reference index and the MV of the subblock, wherein the subblock is predicted further based on the reference index (Chen-929 pars. 183-190, note temporally scaling the subblock MV information based on the references used by the subblock MVs).

In regard to claim 32, refer to the statements made in the rejection of claim 23 above. Chen-668 further discloses: determining a location of the collocated subblock based on the constrained region (Chen-668 pars. 79-80, note clipping the MV in the horizontal or vertical direction to be within the displacement limit); and obtaining the collocated subblock at the determined location of the collocated subblock (Chen-668, note if the MV is clipped, the collocated block will be determined based on the clipped MV value, which is within the displacement limit).

In regard to claim 33, refer to the statements made in the rejection of claim 23 above. Chen-668 further discloses that the location associated with the collocated block is a second location, and the processor is further configured to: determine a first location associated with the collocated block based on the location of the current video block and the temporal MV (Chen-668 pars. 33-34, note motion vectors are used to obtain a collocated CU in a reference frame), wherein the second location is determined based on the first location being outside the constrained region (Chen-668 par. 79, note motion vectors are clipped in the horizontal and vertical direction to point to new collocated blocks within the displacement limit).

Claims 37-39 describe a decoding method that substantially corresponds to the process steps performed by the decoding device of claims 23-27 and 29-33 above. Refer to the statements made in regard to claims 23-27 and 29-33 above for the rejection of claims 37-39, which will not be repeated here for brevity.

Claims 34-36 and 40-42 describe a device and method for video encoding that substantially correspond to the process steps performed by the decoding device of claims 23-27 and 29-33 above. Refer to the statements made in regard to claims 23-27 and 29-33 above for the rejection of claims 34-36 and 40-42, which will not be repeated here for brevity. Chen-668 further discloses encoding a video block based on a collocated block (Chen-668 Fig. 1 #20 and pars. 39-43 for encoding using collocated blocks).

Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Chen-668 in view of Chen-929 as applied to claim 23 above, and in further view of Seregin et al. (2013/0272412).

In regard to claim 28, refer to the statements made in the rejection of claim 23 above. Chen-668 further discloses scaling the temporal motion vector according to picture order count (POC) information including a POC of a reference picture of a neighboring block (Chen-668 pars. 72-73, note POCmvp_blk_ref) and a POC of the collocated picture (Chen-668 par. 73, note POCref). It is noted that neither Chen-668 nor Chen-929 discloses determining a difference between these particular POC values and determining the temporal MV based on the difference.
However, Seregin discloses determining a temporal motion vector without scaling when the reference picture of a neighboring block and the collocated reference picture are the same, thus indicating that the difference between the POCs of a reference picture of a neighboring block and a collocated reference picture is zero, and determining the temporal MV with scaling if the POC difference is non-zero (Seregin Fig. 7 and par. 126, note determining if reference pictures have a POC difference and determining whether or not to perform MV scaling based on the determination). It is therefore considered obvious that one of ordinary skill in the art would recognize the advantage of determining whether the POC difference between a reference picture for a neighboring block and a POC of the collocated block was zero or greater than zero, as suggested by Seregin, in the motion vector scaling of Chen-668 and Chen-929 in order to avoid unnecessary scaling computations when not required, as suggested by Seregin (Seregin par. 126, note skipping scaling for references with the same POC).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20200045307 A1 JANG; Hyeongmoon
US 20190313112 A1 Han; Jong-Ki et al.
US 20190222837 A1 LEE; Jin Ho et al.
US 20190182505 A1 CHUANG; Tzu-Der et al.
US 20180249176 A1 LIM; Jaehyun et al.
US 20180199052 A1 HE; Dake
US 20180098072 A1 Zhang; Li et al.
US 20180084260 A1 Chien; Wei-Jung et al.
US 20180070100 A1 Chen; Yi-Wen et al.
US 20180007395 A1 Ugur; Kemal et al.
US 20170332099 A1 Lee; Sungwon et al.
US 20170332075 A1 Karczewicz; Marta et al.
US 20170223350 A1 Xu; Yaowu et al.
US 20160330475 A1 ZHOU; Minhua
US 20160234492 A1 Li; Xiang et al.
US 20150139329 A1 Nakamura; Hiroya et al.
US 20140355688 A1 Lim; Sung Chang et al.
US 20140241436 A1 Laroche; Guillaume et al.
US 20140044171 A1 Takehara; Hideki et al.
US 20130083853 A1 Coban; Muhammed Zeyd et al.
US 20100316129 A1 ZHAO; XU GANG et al.
US 20050152452 A1 Suzuki, Yoshinori

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMIAH CHARLES HALLENBECK-HUBER, whose telephone number is (571) 272-5248. The examiner can normally be reached Monday to Friday from 9 A.M. to 5 P.M.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEREMIAH C HALLENBECK-HUBER/
Primary Examiner, Art Unit 2481
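For readers parsing the claim language, the "clipping operation" that both the '773 claims and the instant claims recite (deriving a first location from the current block's position plus a temporal MV, then constraining it to a constrained region) is, in effect, a coordinate clamp. The sketch below is illustrative only; all function and parameter names are hypothetical and not drawn from any cited reference.

```python
# Illustrative sketch of the claimed clipping operation: a candidate
# collocated-block location (current-block location displaced by a
# temporal MV) is clamped to the boundary of a constrained region in
# the collocated picture. Names are hypothetical.

def clip_collocated_location(cur_x, cur_y, mv_x, mv_y,
                             region_x0, region_y0, region_x1, region_y1):
    """Return the (possibly clipped) collocated-block location."""
    # First location: current-block location displaced by the temporal MV.
    loc_x = cur_x + mv_x
    loc_y = cur_y + mv_y
    # If the first location falls outside the constrained region, the
    # clipping operation snaps it to the region boundary, yielding the
    # "second location" of the claims; in-region locations pass through.
    clipped_x = min(max(loc_x, region_x0), region_x1)
    clipped_y = min(max(loc_y, region_y0), region_y1)
    return clipped_x, clipped_y

# The MV points well above the region, so only y is clipped.
print(clip_collocated_location(64, 64, 10, -200, 0, 0, 127, 127))  # → (74, 0)
```

A location already inside the region is returned unchanged, which is why claim 33's first/second-location distinction only matters when the first location is outside the constrained region.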
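Claim 23's overall flow (a temporal MV locates a collocated block, and each subblock of the current block inherits motion from the corresponding collocated subblock) resembles ATMVP-style derivation. A minimal sketch under assumed data structures: the `collocated_mv_field` dict, the 4-sample subblock size, and all names are hypothetical illustrations, not code from any cited reference.

```python
# Hypothetical sketch of ATMVP-style subblock motion derivation: a
# temporal MV locates a collocated block in the collocated picture, and
# each subblock of the current block takes the MV stored at the
# corresponding collocated subblock position.

SUBBLOCK = 4  # assumed subblock size in samples

def derive_subblock_mvs(block_x, block_y, block_w, block_h,
                        temporal_mv, collocated_mv_field):
    """Map each subblock (keyed by its offset within the current block)
    to an MV from the collocated picture's motion field (a dict keyed
    by (x, y) sample position here)."""
    tmx, tmy = temporal_mv
    mvs = {}
    for sy in range(0, block_h, SUBBLOCK):
        for sx in range(0, block_w, SUBBLOCK):
            # Collocated subblock position = current position + temporal MV.
            cx = block_x + sx + tmx
            cy = block_y + sy + tmy
            # Fall back to a zero MV when no motion is stored there.
            mvs[(sx, sy)] = collocated_mv_field.get((cx, cy), (0, 0))
    return mvs

field = {(20, 12): (3, -1), (24, 12): (5, 0)}
print(derive_subblock_mvs(16, 8, 8, 4, (4, 4), field))
# → {(0, 0): (3, -1), (4, 0): (5, 0)}
```

Each derived MV would then drive per-subblock motion compensation, which is the advantage the rejection attributes to Chen-929.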

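The Seregin point in the claim 28 rejection (skip MV scaling when the two reference pictures have the same POC, scale otherwise) reduces to a small arithmetic rule. A hedged sketch with hypothetical names; the ratio-of-POC-distances formula is the standard HEVC-style scaling idea, not text quoted from Seregin:

```python
# Illustrative sketch of POC-difference-based temporal MV scaling:
# scale a neighbor's MV by the ratio of POC distances, skipping the
# scaling entirely when the distances already match. Names and the
# plain integer division are hypothetical simplifications.

def scale_temporal_mv(mv, poc_cur, poc_target_ref, poc_nb, poc_nb_ref):
    """Return mv scaled by tb/td, or mv unchanged when tb == td."""
    tb = poc_cur - poc_target_ref  # distance: current pic -> target reference
    td = poc_nb - poc_nb_ref       # distance: neighbor pic -> its reference
    if tb == td:
        return mv                  # identical distances: no scaling needed
    return (mv[0] * tb) // td, (mv[1] * tb) // td

# Neighbor's reference is 2 pictures away, target is 4 away: MV doubles.
print(scale_temporal_mv((8, -4), 10, 6, 10, 8))  # → (16, -8)
```

Real codecs implement this division in fixed point with rounding offsets and clipping; the point of the sketch is only the zero-difference shortcut that the rejection credits to Seregin.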
Prosecution Timeline

Jan 17, 2025
Application Filed
Feb 26, 2025
Response after Non-Final Action
Jan 10, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604012
CODING METHOD, ENCODER, AND DECODER
2y 5m to grant; granted Apr 14, 2026
Patent 12604026
MOVING PICTURE CODING METHOD, MOVING PICTURE DECODING METHOD, MOVING PICTURE CODING APPARATUS, MOVING PICTURE DECODING APPARATUS, AND MOVING PICTURE CODING AND DECODING APPARATUS
2y 5m to grant; granted Apr 14, 2026
Patent 12593043
VIDEO COMPRESSION AT SCENE CHANGES FOR LOW LATENCY INTERACTIVE EXPERIENCE
2y 5m to grant; granted Mar 31, 2026
Patent 12593046
SUB-BLOCK DIVISION-BASED IMAGE ENCODING/DECODING METHOD AND DEVICE
2y 5m to grant; granted Mar 31, 2026
Patent 12587670
VIDEO CODING AND DECODING
2y 5m to grant; granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 82% (+13.1%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 659 resolved cases by this examiner. Grant probability derived from career allow rate.
