DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-12 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12 of copending Application No. 18/991,069 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims in the instant application are covered by the scope of the limitations recited in the co-pending application, as shown below:
Instant Application No. 18/991,049
1. An image decoding method performed by a decoding apparatus, the method comprising: obtaining image information comprising information related to motion compensation from a bitstream; deriving merge candidates of a current block; generating a merge candidate list of the current block based on the derived merge candidates; determining motion information of the current block based on the merge candidate list; and generating a predicted block of the current block based on the motion information of the current block, wherein the information related to the motion compensation includes motion vector resolution information, wherein the motion vector resolution information indicates a specific motion vector resolution among motion vector resolution candidates including an integer pel resolution, a 1/4 pel resolution and a 1/16 pel resolution, and wherein the motion vector resolution information is based on a truncated unary binarization.
Co-pending Application No. 18/991,069
1. An image decoding method performed by a decoding apparatus, the method comprising: obtaining image information comprising information related to motion compensation from a bitstream; deriving merge candidates of a current block; generating a merge candidate list of the current block based on the derived merge candidates; determining motion information of the current block based on the merge candidate list; and generating a predicted block of the current block based on the motion information of the current block, wherein the information related to the motion compensation includes motion vector resolution information, wherein the motion vector resolution information indicates a specific motion vector resolution among motion vector resolution candidates including an integer pel resolution, a 1/4 pel resolution and a 1/16 pel resolution, wherein the motion vector resolution information is signaled at a coding unit level, and wherein the motion vector resolution information is based on a truncated unary binarization.
Claim 2 of the instant application corresponds to claim 2 of Co-pending Application No. 18/991,069.
Claim 3 of the instant application corresponds to claim 3 of Co-pending Application No. 18/991,069.
Claim 4 of the instant application corresponds to claim 4 of Co-pending Application No. 18/991,069.
Claim 5 of the instant application corresponds to claim 5 of Co-pending Application No. 18/991,069.
Claim 6 of the instant application corresponds to claim 6 of Co-pending Application No. 18/991,069.
Claim 7 of the instant application corresponds to claim 7 of Co-pending Application No. 18/991,069.
Claim 8 of the instant application corresponds to claim 8 of Co-pending Application No. 18/991,069.
Claim 9 of the instant application corresponds to claim 9 of Co-pending Application No. 18/991,069.
Claim 10 of the instant application corresponds to claim 10 of Co-pending Application No. 18/991,069.
Claim 11 of the instant application corresponds to claim 11 of Co-pending Application No. 18/991,069.
Claim 12 of the instant application corresponds to claim 12 of Co-pending Application No. 18/991,069.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
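For illustration only, the decoding steps recited in the claims above (deriving merge candidates of a current block, generating a merge candidate list, and determining motion information from that list) can be sketched as follows. This is a generic sketch of the merge-mode technique, not the syntax or candidate-derivation rules of any particular codec or of the claimed inventions; the `Neighbor` type, the list size of 5, and the pruning of duplicates are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Neighbor:
    # Motion vector of a spatial neighboring block, or None if the
    # neighbor is unavailable or not inter-coded (assumed representation)
    motion: Optional[Tuple[int, int]]

def build_merge_list(neighbors: List[Neighbor],
                     max_size: int = 5) -> List[Tuple[int, int]]:
    """Derive merge candidates from neighboring blocks and build the
    merge candidate list, skipping unavailable and duplicate candidates."""
    merge_list: List[Tuple[int, int]] = []
    for n in neighbors:
        if n.motion is not None and n.motion not in merge_list:
            merge_list.append(n.motion)
        if len(merge_list) == max_size:
            break
    return merge_list

def determine_motion(merge_list: List[Tuple[int, int]],
                     merge_idx: int) -> Tuple[int, int]:
    """Determine the current block's motion information from the merge
    candidate list using a merge index parsed from the bitstream."""
    return merge_list[merge_idx]

# Example: two distinct candidates survive pruning; index 1 selects (0, 2)
neighbors = [Neighbor((1, 0)), Neighbor(None), Neighbor((1, 0)), Neighbor((0, 2))]
print(build_merge_list(neighbors))                      # [(1, 0), (0, 2)]
print(determine_motion(build_merge_list(neighbors), 1)) # (0, 2)
```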
Claims 1-9 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-9 of U.S. Patent No. 12,309,390 B2 (herein referenced as “Jun”) in view of Lim et al. (U.S. Pub. No. 2019/0297325 A1).
As per claim 1, Jun teaches an image decoding method performed by a decoding apparatus (claim 1), the method comprising: obtaining image information comprising information related to motion compensation from a bitstream (claim 1, col. 55 lines 59-60); deriving merge candidates of a current block (claim 1, col. 55 line 61); generating a merge candidate list of the current block based on the derived merge candidates (claim 1, col. 55 lines 62-63); determining motion information of the current block based on the merge candidate list (claim 1, col. 55 lines 64-65); and generating a predicted block of the current block based on the motion information of the current block (claim 1, col. 55 lines 65-67), wherein the information related to the motion compensation includes motion vector resolution information (claim 1, col. 56 lines 1-2), wherein the motion vector resolution information indicates a specific motion vector resolution among motion vector resolution candidates including an integer pel resolution, a 1/4 pel resolution and a 1/16 pel resolution (claim 1, col. 56 lines 3-7), and wherein the motion vector resolution information is based on a truncated binarization (claim 1, col. 56 lines 10-11).
Jun does not explicitly disclose wherein the motion vector resolution information is based on a truncated unary binarization.
However, Lim teaches wherein the motion vector resolution information is based on a truncated unary binarization ([0439], [0469] and [0495]; “… herein the information about motion compensation may include at least one of … information about a motion vector resolution..” and further in para [0495], “… When entropy encoding/decoding information about motion compensation or granularity information, a truncated Rice binarization method, a K-th order Exp_Golomb binarization method, a constrained K-th order Exp_Golomb binarization method, a fixed-length binarization method, a unary binarization method, or a truncated unary binarization method may be used”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Lim with Jun in order to provide a method and apparatus for encoding/decoding information about motion compensation that is commonly applied to a specific region, thereby improving encoding/decoding efficiency of a picture.
Claim 2 of the instant application corresponds to claim 2 of U.S. Patent No. 12,309,390 B2.
Claim 3 of the instant application corresponds to claim 3 of U.S. Patent No. 12,309,390 B2.
Claim 4 of the instant application corresponds to claim 4 of U.S. Patent No. 12,309,390 B2.
Claim 5 of the instant application is the corresponding image encoding method performed by an encoding apparatus with the limitations of the image decoding method performed by a decoding apparatus; thus, the rejection and analysis made for claim 1 also apply here.
Claim 6 of the instant application corresponds to claim 6 of U.S. Patent No. 12,309,390 B2.
Claim 7 of the instant application corresponds to claim 7 of U.S. Patent No. 12,309,390 B2.
Claim 8 of the instant application corresponds to claim 8 of U.S. Patent No. 12,309,390 B2.
Claim 9 of the instant application is the corresponding transmission method for image data with the limitations of the image decoding method performed by a decoding apparatus; thus, the rejection and analysis made for claim 1 also apply here.
Claims 1-12 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12 of copending Application No. 18/991,072 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims in the instant application are covered by the scope of the limitations recited in the co-pending application, as shown below:
Instant Application No. 18/991,049
1. An image decoding method performed by a decoding apparatus, the method comprising: obtaining image information comprising information related to motion compensation from a bitstream; deriving merge candidates of a current block; generating a merge candidate list of the current block based on the derived merge candidates; determining motion information of the current block based on the merge candidate list; and generating a predicted block of the current block based on the motion information of the current block, wherein the information related to the motion compensation includes motion vector resolution information, wherein the motion vector resolution information indicates a specific motion vector resolution among motion vector resolution candidates including an integer pel resolution, a 1/4 pel resolution and a 1/16 pel resolution, and wherein the motion vector resolution information is based on a truncated unary binarization.
Co-pending Application No. 18/991,072
1. An image decoding method performed by a decoding apparatus, the method comprising: obtaining image information comprising information related to motion compensation from a bitstream; deriving merge candidates of a current block; generating a merge candidate list of the current block based on the derived merge candidates; determining motion information of the current block based on the merge candidate list; and generating a predicted block of the current block based on the motion information of the current block, wherein the information related to the motion compensation includes motion vector resolution information, wherein the motion vector resolution information indicates a specific motion vector resolution among motion vector resolution candidates including an integer pel resolution, a 1/4 pel resolution and a 1/16 pel resolution, wherein the merge candidates include motion vectors of neighboring blocks of the current block, wherein the motion vector resolution information is signaled at a coding unit level, wherein the motion vector resolution information is entropy decoded based on a context model, and wherein the motion vector resolution information is based on a truncated unary binarization.
Claim 2 of the instant application corresponds to claim 2 of Co-pending Application No. 18/991,072.
Claim 3 of the instant application corresponds to claim 3 of Co-pending Application No. 18/991,072.
Claim 4 of the instant application corresponds to claim 4 of Co-pending Application No. 18/991,072.
Claim 5 of the instant application corresponds to claim 5 of Co-pending Application No. 18/991,072.
Claim 6 of the instant application corresponds to claim 6 of Co-pending Application No. 18/991,072.
Claim 7 of the instant application corresponds to claim 7 of Co-pending Application No. 18/991,072.
Claim 8 of the instant application corresponds to claim 8 of Co-pending Application No. 18/991,072.
Claim 9 of the instant application corresponds to claim 9 of Co-pending Application No. 18/991,072.
Claim 10 of the instant application corresponds to claim 10 of Co-pending Application No. 18/991,072.
Claim 11 of the instant application corresponds to claim 11 of Co-pending Application No. 18/991,072.
Claim 12 of the instant application corresponds to claim 12 of Co-pending Application No. 18/991,072.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5-7, and 9-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lim et al. (U.S. Pub. No. 2019/0297325 A1).
As per claim 1, Lim teaches an image decoding method performed by a decoding apparatus, the method comprising: obtaining image information comprising information related to motion compensation from a bitstream ([0438-0439], [0469], “the information about motion compensation may include at least one of a motion vector candidate, a motion vector candidate list, a merge candidate, merge candidate list,…”); deriving merge candidates of a current block ([0087-0088]); generating a merge candidate list of the current block based on the derived merge candidates ([0087], [0194], [0196]); determining motion information of the current block based on the merge candidate list ([0194-0196]); and generating a predicted block of the current block based on the motion information of the current block (fig. 2, [0348]), wherein the information related to the motion compensation includes motion vector resolution information ([0438-0439], [0469]), wherein the motion vector resolution information indicates a specific motion vector resolution among motion vector resolution candidates including an integer pel resolution, a 1/4 pel resolution and a 1/16 pel resolution ([0439], [0507], “Information about a motion vector resolution may be information representing whether or not a specific resolution is used for at least one of a motion vector and a motion vector difference value. Herein, the resolution may refer to a precision. In addition, the specific resolution may be set to at least one of an integer-pixel (integer-pel) unit, a ½-pixel (½-pel) unit, a ¼-pixel (¼-pel) unit, a ⅛-pixel (⅛-pel) unit, a 1/16-pixel (1/16-pel) unit, a 1/32-pixel (1/32-pel) unit, and a 1/64-pixel (1/64-pel) unit”), and wherein the motion vector resolution information is based on a truncated unary binarization ([0439], [0469] and [0495]; “… herein the information about motion compensation may include at least one of … information about a motion vector resolution..” and further in para [0495], “… When entropy encoding/decoding information about motion compensation or granularity information, a truncated Rice binarization method, a K-th order Exp_Golomb binarization method, a constrained K-th order Exp_Golomb binarization method, a fixed-length binarization method, a unary binarization method, or a truncated unary binarization method may be used”).
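For illustration of the binarization named in the claims, a truncated unary code maps a bounded index to a prefix-free bin string. The sketch below is the textbook form of the technique only; the ordering of the three resolution candidates shown in the demo (1/4-pel first) is an assumption for the example and is not taken from the claims or the cited references.

```python
def truncated_unary(value: int, c_max: int) -> str:
    """Truncated unary binarization: `value` is coded as that many '1'
    bins followed by a terminating '0', except that the terminating '0'
    is omitted when value == c_max (the largest codable value)."""
    if not 0 <= value <= c_max:
        raise ValueError("value out of range for truncated unary code")
    bits = "1" * value
    if value < c_max:
        bits += "0"
    return bits

# Hypothetical index order for the three claimed resolution candidates
candidates = ["1/4-pel", "integer-pel", "1/16-pel"]  # assumed ordering
for idx, name in enumerate(candidates):
    print(name, truncated_unary(idx, c_max=len(candidates) - 1))
```

With three candidates (c_max = 2) the bin strings are "0", "10", and "11", so the candidate assumed most probable costs a single bin.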
As per claim 2, Lim teaches wherein the motion vector resolution information is entropy decoded based on a context model ([0495-0496]); and wherein the context model is determined based on information on a size of the current block ([0495-0496]; “a context model may be determined by using at least one of information about motion compensation of a neighboring block adjacent to the current block … and size information of the current block”).
As per claim 3, Lim teaches wherein the motion vector resolution information is entropy decoded based on a context model ([0495-0496]), and wherein the context model is determined based on information related to motion compensation of a neighboring block of the current block ([0496], “entropy encoding/decoding information about motion compensation or granularity information, a context model may be determined by using at least one of information about motion compensation of a neighboring block adjacent to the current block …”).
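For illustration of the context-model determination discussed for claims 2 and 3, a common CABAC-style pattern derives the context index from neighboring blocks' information or from the current block's size. The functions below are generic sketches of that pattern; the names, the left/above neighbor choice, and the 256-sample size threshold are assumptions for the example, not the syntax of any cited reference.

```python
def mvr_context_index(left_nondefault: bool, above_nondefault: bool) -> int:
    """Context index from neighbor information: counts how many of the
    left and above neighboring blocks use a non-default motion vector
    resolution, yielding context 0, 1, or 2."""
    return int(left_nondefault) + int(above_nondefault)

def mvr_context_index_by_size(width: int, height: int,
                              threshold: int = 256) -> int:
    """Hypothetical size-based variant: blocks with at least `threshold`
    samples share one context, smaller blocks another."""
    return 1 if width * height >= threshold else 0

print(mvr_context_index(True, False))       # neighbor-based selection
print(mvr_context_index_by_size(32, 32))    # size-based selection
```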
As per claim 5, claim 5 is the corresponding encoding method performed by an encoding apparatus with the limitations of the image decoding method as recited in claim 1; thus, the rejection and analysis made for claim 1 also apply here.
As per claim 6, claim 6 is the corresponding encoding method performed by an encoding apparatus with the limitations of the image decoding method as recited in claim 2; thus, the rejection and analysis made for claim 2 also apply here.
As per claim 7, claim 7 is the corresponding encoding method performed by an encoding apparatus with the limitations of the image decoding method as recited in claim 3; thus, the rejection and analysis made for claim 3 also apply here.
As per claim 9, claim 9 is the corresponding transmission method with the limitations of the image decoding method as recited in claim 1; thus, the rejection and analysis made for claim 1 also apply here. In addition, Lim teaches transmitting the image data comprising the bitstream (fig. 2).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4, 8, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Lim et al. (U.S. Pub. No. 2019/0297325 A1) in view of Chuang et al. (U.S. Pub. No. 2016/0337649 A1).
As per claim 4, although Lim teaches wherein the motion vector resolution information is entropy decoded based on a context model ([0495-0496]), Lim does not explicitly disclose wherein the context model is determined based on information related to a prediction mode of the current block.
However, Chuang teaches wherein the model is determined based on information related to a prediction mode of the current block ([0019]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chuang with Lim for the benefit of improving the coding efficiency of adaptive MV precision.
As per claim 8, claim 8 is the corresponding method with the limitations of the method as recited in claim 4; thus, the rejection and analysis made for claim 4 also apply here.
As per claim 12, claim 12 is the corresponding image encoding method with the limitations of the image decoding method as recited in claim 4; thus, the rejection and analysis made for claim 4 also apply here.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Kondo et al. (U.S. Pub. No. 2014/0254687 A1), “Encoding Device and Encoding Method, and Decoding Device and Decoding Method.”
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA PRINCE whose telephone number is (571)270-1821. The examiner can normally be reached M-F 7:30-3:30 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JESSICA PRINCE
Examiner
Art Unit 2486
/JESSICA M PRINCE/ Primary Examiner, Art Unit 2486