DETAILED ACTION
1. This Office Action is sent in response to Applicant’s communication received on 12/20/2024 for Application No. 18/990,826. The Office hereby acknowledges receipt of the following items, which have been placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, and Claims.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
3. Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed.
Information Disclosure Statement
4. The information disclosure statements (IDS) submitted on 12/20/2024 and 08/18/2025 are in accordance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Preliminary Amendments
5. The preliminary amendments filed 12/20/2024 have been entered and made of record.
Double Patenting
6. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
7. A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
8. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
9. Claims 16-24 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-15 of U.S. Patent No. 12,219,131 B1. Although the claims at issue are not identical, they are not patentably distinct from each other. Table 1 below shows a comparison between the instant claims and the claims of U.S. Patent No. 12,219,131 B1.
Application No. 18/990,826
16. A method of decoding a video, the method comprising: deriving an affine merge candidate from a neighboring block adjacent to a current block; generating a merge candidate list including the affine merge candidate; selecting one of a plurality of affine merge candidates included in the merge candidate list; deriving corner motion vectors of the current block based on the selected affine merge candidate, a number of the corner motion vectors of the current block being 2 or 3; deriving a motion vector of a sub-block in the current block based on the corner motion vectors of the current block; and performing motion compensation for the sub-block based on the motion vector, wherein, based on whether the neighboring block adjoins the current block at a CTU (Coding Tree Unit) boundary, it is determined whether a top-left corner motion vector and a top-right corner motion vector of the neighboring block are used to derive the affine merge candidate.
U.S. Patent No. 12,219,131 B1
1. A method of decoding a video, the method comprising: deriving an affine merge candidate from a neighboring block adjacent to a current block; generating a merge candidate list including the affine merge candidate; determining one of a plurality of merge candidates in the merge candidate list based on index information signaled from a bitstream, the index information specifying the one of the plurality of merge candidates; deriving first motion vectors of the current block based on the determined merge candidate; deriving a second motion vector of the current block based on the first motion vectors of the current block, the second motion vector being derived in units of sub-blocks in the current block; generating prediction samples of the current block by performing inter prediction on the current block based on the second motion vector; obtaining residual samples of the current block; and reconstructing the current block by summing the prediction samples and the residual samples, wherein the affine merge candidate is derived based on corner motion vectors of the neighboring block, and wherein, based on whether the neighboring block is included in the same CTU (Coding Tree Unit) as the current block, positions of corners corresponding to the corner motion vectors used to derive the affine merge candidate are differently determined.
17. The method of claim 16, wherein the merge candidate list comprises a constructed affine merge candidate which is generated by combining translation motion vectors of a plurality of neighboring blocks.
3. The method of claim 2, wherein the merge candidate list further includes a constructed affine merge candidate which is generated by combining translation motion vectors of a plurality of neighboring blocks.
18. The method of claim 16, wherein when the neighboring block is included in the same CTU as the current block, the top-left corner motion vector and the top-right corner motion vector of the neighboring block are used to derive the affine merge candidate, and wherein when the neighboring block is not included in the same CTU as the current block, motion vectors of a bottom-left corner and a bottom-right corner of the neighboring block are used to derive the affine merge candidate.
2. The method of claim 1, wherein when the neighboring block is included in the same CTU as the current block, a top-left corner motion vector and a top-right corner motion vector of the neighboring block are used to derive the affine merge candidate, and wherein when the neighboring block is not included in the same CTU as the current block, a bottom-left corner motion vector and a bottom-right corner motion vector of the neighboring block are used to derive the affine merge candidate.
19. The method of claim 16, wherein the affine merge candidate is derived from one of top neighboring blocks adjacent to the current block, and wherein the affine merge candidate is derived from available one which is found firstly by searching the top neighboring blocks in a pre-defined order.
4. The method of claim 3, wherein the affine merge candidate is derived from one of top neighboring blocks adjacent to the current block, and wherein the affine merge candidate is derived from available one which is found firstly by searching the top neighboring blocks in a pre-defined order.
20. A method of encoding a video, the method comprising: deriving an affine merge candidate from a neighboring block adjacent to a current block; generating a merge candidate list including the affine merge candidate; specifying one of a plurality of affine merge candidates included in the merge candidate list; deriving corner motion vectors of the current block based on the specified affine merge candidate, a number of the corner motion vectors of the current block being 2 or 3; deriving a motion vector of a sub-block in the current block based on the corner motion vectors of the current block; and performing motion compensation for the sub-block based on the motion vector, wherein, based on whether the neighboring block adjoins the current block at a CTU (Coding Tree Unit) boundary, it is determined whether a top-left corner motion vector and a top-right corner motion vector of the neighboring block are used to derive the affine merge candidate.
5. A method of encoding a video, the method comprising: deriving an affine merge candidate from a neighboring block adjacent to a current block; generating a merge candidate list including the affine merge candidate; deriving first motion vectors of the current block based on one of a plurality of merge candidates in the merge candidate list; deriving a second motion vector of the current block based on the first motion vectors of the current block, the second motion vector being derived in units of sub-blocks in the current block; generating prediction samples of the current block by performing inter prediction on the current block based on the second motion vector; obtaining residual samples of the current block by subtracting the prediction samples from original samples; and encoding residual coefficients derived from the residual samples, wherein index information specifying the one of the plurality of merge candidates is encoded into a bitstream, wherein the affine merge candidate is derived based on corner motion vectors of the neighboring block, and wherein, based on whether the neighboring block is included in the same CTU (Coding Tree Unit) as the current block, positions of corners corresponding to the corner motion vectors used to derive the affine merge candidate are differently determined.
21. The method of claim 20, wherein the merge candidate list comprises a constructed affine merge candidate which is generated by combining translation motion vectors of a plurality of neighboring blocks.
7. The method of claim 6, wherein the merge candidate list further includes a constructed affine merge candidate which is generated by combining translation motion vectors of a plurality of neighboring blocks.
22. The method of claim 20, wherein when the neighboring block is included in the same CTU as the current block, the top-left corner motion vector and the top-right corner motion vector of the neighboring block are used to derive the affine merge candidate, and wherein when the neighboring block is not included in the same CTU as the current block, motion vectors of a bottom-left corner and a bottom-right corner of the neighboring block are used to derive the affine merge candidate.
6. The method of claim 5, wherein when the neighboring block is included in the same CTU as the current block, a top-left corner motion vector and a top-right corner motion vector of the neighboring block are used to derive the affine merge candidate, and wherein when the neighboring block is not included in the same CTU as the current block, a bottom-left corner motion vector and a bottom-right corner motion vector of the neighboring block are used to derive the affine merge candidate.
23. The method of claim 20, wherein the affine merge candidate is derived from one of top neighboring blocks adjacent to the current block, and wherein the affine merge candidate is derived from available one which is found firstly by searching the top neighboring blocks in a pre-defined order.
8. The method of claim 7, wherein the affine merge candidate is derived from one of top neighboring blocks adjacent to the current block, and wherein the affine merge candidate is derived from available one which is found firstly by searching the top neighboring blocks in a pre-defined order.
24. A method of transmitting data for a video signal, the method comprising: obtaining a bitstream for the video signal, wherein obtaining the bitstream comprises: deriving an affine merge candidate from a neighboring block adjacent to a current block; generating a merge candidate list including the affine merge candidate; specifying one of a plurality of affine merge candidates included in the merge candidate list; deriving corner motion vectors of the current block based on the specified affine merge candidate, a number of the corner motion vectors of the current block being 2 or 3; deriving a motion vector of a sub-block in the current block based on the corner motion vectors of the current block; and performing motion compensation for the sub-block based on the motion vector, transmitting the data including the bitstream, wherein, based on whether the neighboring block adjoins the current block at a CTU (Coding Tree Unit) boundary, it is determined whether a top-left corner motion vector and a top-right corner motion vector of the neighboring block are used to derive the affine merge candidate.
9. A method of transmitting data for a video signal, the method comprising: obtaining a bitstream for the video signal, wherein obtaining the bitstream comprises: deriving an affine merge candidate from a neighboring block adjacent to a current block; generating a merge candidate list including the affine merge candidate; deriving first motion vectors of the current block based on one of a plurality of merge candidates in the merge candidate list; deriving a second motion vector of the current block based on the first motion vectors of the current block, the second motion vector being derived in units of sub-blocks in the current block; generating prediction samples of the current block by performing inter prediction on the current block based on the second motion vector; obtaining residual samples of the current block by subtracting the prediction samples from original samples; and encoding residual coefficients derived from the residual samples; and transmitting the data including the bitstream, wherein index information specifying the one of the plurality of merge candidates is encoded into the bitstream, wherein the affine merge candidate is derived based on corner motion vectors of the neighboring block, and wherein, based on whether the neighboring block is included in the same CTU (Coding Tree Unit) as the current block, positions of corners corresponding to the corner motion vectors used to derive the affine merge candidate are differently determined.
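For context only (not part of the record of this Office Action), the claims compared above recite deriving a motion vector of a sub-block from corner (control-point) motion vectors of the current block. A minimal sketch of the widely used 4-parameter affine motion model, which derives a per-position motion vector from the top-left and top-right corner motion vectors, is shown below. All function and variable names are illustrative assumptions, not language from the claims.

```python
# Illustrative sketch: 4-parameter affine motion model.
# mv0 = (mvx, mvy) at the top-left corner of the current block,
# mv1 = (mvx, mvy) at the top-right corner, width = block width in samples.
# The names and the floating-point arithmetic are assumptions for clarity;
# actual codecs use fixed-point integer arithmetic.

def affine_subblock_mv(mv0, mv1, width, x, y):
    """Derive the motion vector at sub-block position (x, y)."""
    a = (mv1[0] - mv0[0]) / width   # horizontal gradient of mv_x (zoom term)
    b = (mv1[1] - mv0[1]) / width   # horizontal gradient of mv_y (rotation term)
    mvx = mv0[0] + a * x - b * y
    mvy = mv0[1] + b * x + a * y
    return (mvx, mvy)

# Example: 16x16 block, top-left MV (4, 2), top-right MV (8, 2),
# evaluated at sub-block center (8, 8).
print(affine_subblock_mv((4, 2), (8, 2), 16, 8, 8))  # (6.0, 4.0)
```

A 6-parameter variant adds a bottom-left corner motion vector so that the vertical gradients are independent of the horizontal ones, which is consistent with the claims reciting that the number of corner motion vectors is 2 or 3.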
Allowable Subject Matter
10. Claims 16-24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
11. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20210211679 A1 discloses: The updating process may be invoked under further conditions, e.g., only for the right and/or bottom affine sub-blocks of one CTU. In this case, the filtering process may depend on the un-updated motion information and the update motion information may be used for subsequent coded/decoded blocks in current slice/tile or other pictures.
US 20210211646 A1 discloses: Motion information of an inherited merge candidate may be obtained based on motion information of a candidate block. In an example, a reference picture index and prediction direction of an inherited merge candidate may be set the same as a candidate block. Affine vectors of the inherited merge candidate may be derived based on affine vectors of the candidate block.
US 20210203977 A1 discloses: when the top boundary of the current block is in contact with the boundary of a coding tree unit, a merge candidate, an affine seed vector prediction candidate, or an affine seed vector of the current block is derived using the third affine seed vector for the bottom-left control point and the fourth affine seed vector for the bottom-right control point of the affine neighboring block positioned on the top of the current block.
US 20200244978 A1 discloses: a second inherited affine candidate based on second regular motion information of two second minimum blocks in a bottom row of minimum blocks above a CTU row including the current CTU when the current block is adjacent to the top boundary of the current CTU.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HOWARD D BROWN JR whose telephone number is (571)272-4371. The examiner can normally be reached Monday - Friday 7:30AM - 5:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sathyanarayanan Perungavoor, can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
HOWARD D. BROWN JR
Primary Examiner
Art Unit 2488
/HOWARD D BROWN JR/Examiner, Art Unit 2488