Prosecution Insights
Last updated: April 19, 2026
Application No. 18/963,170

MV RESTRICTION OF BI-PREDICTION FOR OUT-OF-FRAME BOUNDARY CONDITIONS

Status: Non-Final OA (§DP)
Filed: Nov 27, 2024
Examiner: HABIB, IRFAN
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: Tencent America LLC
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 88% (637 granted / 721 resolved) — above average, +30.3% vs TC avg
Interview Lift: +7.8% for resolved cases with interview (moderate lift)
Typical Timeline: 2y 2m average prosecution; 36 applications currently pending
Career History: 757 total applications across all art units

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 70.0% (+30.0% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 3.6% (-36.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 721 resolved cases.
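As a quick arithmetic check on the table above, the reported deltas are mutually consistent with a single Tech Center baseline. This is a minimal sketch, assuming each delta is simply the examiner's share minus the TC average estimate (the tool's actual methodology is not stated here):

```python
# Rejection share per statute for this examiner and the reported delta
# versus the Tech Center average (both in percent, from the table above).
examiner_rate = {"101": 3.5, "103": 70.0, "102": 4.4, "112": 3.6}
delta_vs_tc = {"101": -36.5, "103": 30.0, "102": -35.6, "112": -36.4}

# If delta = examiner_rate - tc_avg, back-solving recovers the implied
# Tech Center average estimate for each statute.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)
```

Every statute back-solves to the same 40.0% baseline, which suggests the "TC average estimate" is a uniform figure rather than a per-statute one.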

Office Action

DETAILED ACTION

1. This Office action is in response to U.S. Patent Application No. 18/963,170, filed on 2/7/2025, with an effective filing date of 1/12/2022. Claims 2-21 are pending.

Double Patenting

2. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

3. Claims 9-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,206,861. Although the claims at issue are not identical, they are not patentably distinct from each other. The claim chart compares the current application against U.S. Patent No. 12,206,861.
Current application, claim 2:

(New) A method for video decoding, the method comprising: determining motion information of a current block predicted with an inter prediction mode, the motion information indicating one or more reference blocks of the current block associated with respective one or more reference pictures, the inter prediction mode being a non-merge mode; when a motion information constraint indicates that the one or more reference blocks are within picture boundaries of the respective one or more reference pictures, reconstructing the current block based on the one or more reference blocks, the one or more reference blocks having a same size as the current block; and when a first motion vector (MV) indicated by the motion information points from a region in the current block to a first reference region that is outside a picture boundary of a first reference picture of the one or more reference pictures, the first reference region being a region of a first reference block in the one or more reference blocks, the picture boundaries including the picture boundary of the first reference picture, determining a first clipped MV pointing from the region in the current block to an updated first reference region by clipping the first MV such that the updated first reference region is in an updated first reference block that is within the picture boundary of the first reference picture; and reconstructing the region in the current block based on the updated first reference region.
U.S. Patent No. 12,206,861, claim 1:

A method for video decoding in a video decoder, the method comprising: determining motion information of a current block predicted with inter prediction, the motion information indicating one or more reference blocks of the current block associated with respective one or more reference pictures; when a motion information constraint indicates that the one or more reference blocks are within picture boundaries of the respective one or more reference pictures, reconstructing the current block based on the one or more reference blocks, wherein the one or more reference blocks have a same size as the current block; and when a first motion vector (MV) indicated by the motion information points from a region in the current block to a first reference region, the first reference region being a region of a first reference block in the one or more reference blocks and being outside a picture boundary of a first reference picture of the one or more reference pictures, the picture boundaries including the picture boundary of the first reference picture, determining a first clipped MV pointing from the region in the current block to an updated first reference region by clipping the first MV such that the updated first reference region is in an updated first reference block that is within the picture boundary of the first reference picture; and reconstructing the region in the current block based on the updated first reference region.
Current application, claim 9:

(New) A method for video encoding in a video encoder, the method comprising: determining motion information of a current block based on an inter prediction mode, the motion information indicating one or more reference blocks of the current block associated with respective one or more reference pictures, the inter prediction mode being a non-merge mode; when the motion information indicates that the one or more reference blocks are within picture boundaries of the respective one or more reference pictures, encoding the current block based on the one or more reference blocks, the one or more reference blocks having a same size as the current block; and when a first motion vector (MV) indicated by the motion information points from a region in the current block to a first reference region that is outside a picture boundary of a first reference picture of the one or more reference pictures, the first reference region being a region of a first reference block in the one or more reference blocks, the picture boundaries including the picture boundary of the first reference picture, determining a first clipped MV pointing from the region in the current block to an updated first reference region by clipping the first MV such that the updated first reference region is in an updated first reference block that is within the picture boundary of the first reference picture; and encoding the region in the current block based on the updated first reference region.
U.S. Patent No. 12,206,861, claim 10:

A method for video encoding in a video encoder, the method comprising: determining motion information of a current block based on inter prediction, the motion information indicating one or more reference blocks of the current block associated with respective one or more reference pictures; when a motion information constraint indicates that the one or more reference blocks are within picture boundaries of the respective one or more reference pictures, encoding the current block based on the one or more reference blocks, wherein the one or more reference blocks have a same size as the current block; and when a first motion vector (MV) indicated by the motion information points from a region in the current block to a first reference region, the first reference region being a region of a first reference block in the one or more reference blocks and being outside a picture boundary of a first reference picture of the one or more reference pictures, the picture boundaries including the picture boundary of the first reference picture, determining a first clipped MV pointing from the region in the current block to an updated first reference region by clipping the first MV such that the updated first reference region is in an updated first reference block that is within the picture boundary of the first reference picture; and encoding the region in the current block based on the updated first reference region.
Current application, claim 16:

(New) A method of processing visual media data, the method comprising: processing a bitstream that includes the visual media data according to a format rule, wherein the bitstream includes coded information of a current block; and the format rule specifies that: motion information of the current block predicted with an inter prediction mode is determined, the motion information indicating one or more reference blocks of the current block associated with respective one or more reference pictures, the inter prediction mode being a non-merge mode; when a motion information constraint indicates that the one or more reference blocks are within picture boundaries of the respective one or more reference pictures, the current block is reconstructed based on the one or more reference blocks, the one or more reference blocks having a same size as the current block; and when a first motion vector (MV) indicated by the motion information points from a region in the current block to a first reference region that is outside a picture boundary of a first reference picture of the one or more reference pictures, the first reference region being a region of a first reference block in the one or more reference blocks, the picture boundaries including the picture boundary of the first reference picture, a first clipped MV pointing from the region in the current block to an updated first reference region is determined by clipping the first MV such that the updated first reference region is in an updated first reference block that is within the picture boundary of the first reference picture; and the region in the current block is reconstructed based on the updated first reference region.
U.S. Patent No. 12,206,861, claim 19:

A method of processing visual media data, the method comprising: processing a bitstream that includes the visual media data according to a format rule, wherein the bitstream includes coded information of a current block; and the format rule specifies that: motion information of the current block predicted with inter prediction is determined, the motion information indicating one or more reference blocks of the current block associated with respective one or more reference pictures; when a motion information constraint indicates that the one or more reference blocks are within picture boundaries of the respective one or more reference pictures, the current block is processed based on the one or more reference blocks, wherein the one or more reference blocks have a same size as the current block; and when a first motion vector (MV) indicated by the motion information points from a region in the current block to a first reference region, the first reference region being a region of a first reference block in the one or more reference blocks and being outside a picture boundary of a first reference picture of the one or more reference pictures, the picture boundaries including the picture boundary of the first reference picture, a first clipped MV pointing from the region in the current block to an updated first reference region is determined by clipping the first MV such that the updated first reference region is in an updated first reference block that is within the picture boundary of the first reference picture; and the region in the current block is processed based on the updated first reference region.

Allowable Subject Matter

4. After analyzing the current application, the examiner concluded that the novelty of the current application involves determining motion information of a current block predicted with inter prediction, where the motion information indicates a set of reference blocks of the current block associated with a respective set of reference pictures.
The current block is reconstructed based on the set of reference blocks in response to a motion information constraint indicating that the set of reference blocks are within the picture boundaries of the respective set of reference pictures. A first clipped motion vector (MV), pointing from the region in the current block to an updated first reference region, is determined by clipping the first MV such that the updated first reference region is in an updated first reference block that is within the picture boundary of the first reference picture. The region in the current block is reconstructed based on the updated first reference region.

The prior art of record, in particular XU et al. (US 2021/0385464 A1) in view of Park et al. (US 2022/0248028 A1), does not disclose, with respect to claim 1: when a first motion vector (MV) indicated by the motion information points from a region in the current block to a first reference region that is outside a picture boundary of a first reference picture of the one or more reference pictures, the first reference region being a region of a first reference block in the one or more reference blocks, the picture boundaries including the picture boundary of the first reference picture, determining a first clipped MV pointing from the region in the current block to an updated first reference region by clipping the first MV such that the updated first reference region is in an updated first reference block that is within the picture boundary of the first reference picture; and reconstructing the region in the current block based on the updated first reference region, as claimed.

Rather, XU et al. discloses a method that involves decoding prediction information of a current block from a bitstream, where the prediction information is indicative of an intra block copy mode. A flag that is signaled to select one resolution from two resolutions is decoded. A block vector difference and a block vector predictor are determined for the current block based on the information and the selected resolution. The block vector is determined by combining the predictor and the vector difference for the block. A sample of the block is reconstructed (S1040) according to the block vector, which is in the resolution selected by decoding the flag.

Similarly, Park et al. discloses determining a center motion vector of the current block by using the basic motion vector of the current block when affine model-based inter prediction is performed on the current block. A reference range for the referenceable area of the current block is determined based on the size of the current block. When a reference region having the size of the reference range in the reference picture of the current block, centered on the point indicated by the center motion vector of the current block, is outside of or crosses the boundary of the reference picture, the reference region is changed by translating it inward. Prediction samples of sub-blocks of the current block are determined within the changed reference region of the reference picture, and the reconstructed samples of the current block are determined using those prediction samples. The same reasoning applies to claims 9 and 16 mutatis mutandis.

Conclusion

5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRFAN HABIB, whose telephone number is (571) 270-7325. The examiner can normally be reached Mon-Thu, 9 AM-7 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Irfan Habib/
Examiner, Art Unit 2485
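The clipping operation the claims turn on (pulling a motion vector back so the same-size reference block lies entirely inside the reference picture) can be sketched as follows. This is an illustrative integer-pel sketch only, not the application's actual method; the function name, parameters, and coordinate convention are all assumptions:

```python
def clip_mv(mv_x, mv_y, block_x, block_y, block_w, block_h, pic_w, pic_h):
    """Clip an MV so the reference block it points to lies within the
    reference-picture boundaries (integer-pel sketch).

    (block_x, block_y) is the top-left of the current block/region; the
    reference block has the same size as the current block."""
    # Allowed range for the reference block's top-left corner.
    min_x, max_x = 0, pic_w - block_w
    min_y, max_y = 0, pic_h - block_h
    # Reference position implied by the unclipped MV.
    ref_x = block_x + mv_x
    ref_y = block_y + mv_y
    # Pull the reference block back inside the picture boundaries.
    clipped_x = min(max(ref_x, min_x), max_x)
    clipped_y = min(max(ref_y, min_y), max_y)
    # The clipped MV is the clamped reference position minus the block position.
    return clipped_x - block_x, clipped_y - block_y


# Example: a 16x16 block at (8, 16) in a 176x144 picture, with an MV of
# (-20, 5) that points 12 samples past the left picture boundary.
print(clip_mv(-20, 5, 8, 16, 16, 16, 176, 144))  # -> (-8, 5)
```

Only the out-of-bounds component is changed; the in-bounds vertical component passes through unmodified, matching the claims' per-MV treatment in bi-prediction.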

Prosecution Timeline

Nov 27, 2024: Application Filed
Feb 07, 2025: Response after Non-Final Action
Feb 20, 2026: Non-Final Rejection (§DP)
Mar 23, 2026: Applicant Interview (Telephonic)
Mar 27, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593047: METHOD AND APPARATUS FOR IMAGE ENCODING AND DECODING USING TEMPORAL MOTION INFORMATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12569313: HANDS-FREE CONTROLLER FOR SURGICAL MICROSCOPE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12568241: IMPROVEMENT OF BI-PREDICTION WITH CU LEVEL WEIGHT (BCW) (granted Mar 03, 2026; 2y 5m to grant)
Patent 12568198: 3D DISPLAY METHOD AND 3D DISPLAY DEVICE (granted Mar 03, 2026; 2y 5m to grant)
Patent 12563216: METHODS AND DEVICES FOR ENHANCING BLOCK ADAPTIVE WEIGHTED PREDICTION WITH BLOCK VECTOR (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88% (96% with interview, +7.8%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 721 resolved cases by this examiner. Grant probability derived from career allow rate.
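How these headline figures relate can be checked directly from the career counts. A minimal sketch, assuming the grant probability is the raw career allow rate and the with-interview figure is that rate plus the interview lift (the tool's exact rounding convention is an assumption):

```python
# Career counts from the examiner profile above.
granted, resolved = 637, 721

career_allow_rate = granted / resolved      # ~0.883, displayed as 88%
interview_lift = 0.078                      # displayed as +7.8%
with_interview = career_allow_rate + interview_lift

print(f"{career_allow_rate:.1%} base -> {with_interview:.1%} with interview")
```

This reproduces the displayed 88% and, after rounding, the 96% with-interview figure.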
