DETAILED ACTION
1. This Office action is in response to U.S. Patent Application No. 18/921,545, filed on 10/21/2024, with an effective filing date of 3/24/2016. Claims 1-5 are pending.
Double Patenting
2. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
3. Claims 1-5 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5 of U.S. Patent No. 12,155,842, claims 1-5 of U.S. Patent No. 11,770,539, and claims 1-5 of U.S. Patent No. 11,388,420. Although the claims at issue are not identical, they are not patentably distinct from each other.
Claim comparison: Current Application vs. U.S. Patent No. 12,155,842
Current Application, claim 1: An inter prediction method comprising: acquiring motion vector refinement information of a current block; generating a motion vector of the current block; refining the motion vector of the current block, based on the motion vector refinement information; generating a prediction block of the current block based on the refined motion vector of the current block; generating a residual block of the current block; and reconstructing the current block based on the prediction block and the residual block, wherein the refined motion vector of the current block is derived by using a reconstructed block which is previously decoded from the current block, wherein the refinement of the motion vector of the current block is performed based on a block size of the current block, wherein the refinement of the motion vector of the current block is performed when bidirectional prediction is performed on the current block, wherein the refinement of the motion vector of the current block is performed with sub-pel precision, and wherein the refined motion vector of the current block is derived based on the motion vector of the current block and an offset vector of the current block.
U.S. Patent No. 12,155,842, claim 1: An inter prediction method comprising: acquiring motion vector refinement information of a current block; generating a motion vector of the current block; refining the motion vector of the current block, based on the motion vector refinement information; generating a prediction block of the current block based on the refined motion vector of the current block; and reconstructing the current block based on the prediction block, wherein the refined motion vector of the current block is derived by using a reconstructed block which is previously decoded from the current block, wherein the refinement of the motion vector of the current block is performed based on a block size of the current block, wherein the refinement of the motion vector of the current block is performed when bidirectional prediction is performed on the current block, and wherein the refinement of the motion vector of the current block is performed with sub-pel precision.
Current Application, claim 4: An inter prediction method comprising: determining a prediction mode of a current block as an inter prediction mode; generating a motion vector of the current block; determining motion vector refinement information of the current block based on the motion vector of the current block; generating a prediction block of the current block based on the refined motion vector of the current block; generating a residual block of the current block based on the prediction block; reconstructing the current block based on the prediction block and the residual block; and encoding the motion vector refinement information, wherein the motion vector refinement information is used to refine the motion vector of the current block in a decoding process, wherein the refined motion vector of the current block in the decoding process is derived by using a reconstructed block which is previously decoded from the current block in the decoding process, wherein the refinement of the motion vector of the current block in the decoding process is performed based on a block size of the current block, wherein the refinement of the motion vector of the current block is performed when bidirectional prediction is performed on the current block, wherein the refinement of the motion vector of the current block is performed with sub-pel precision, and wherein the refined motion vector of the current block in the decoding process is derived based on the motion vector of the current block and an offset vector of the current block.
U.S. Patent No. 12,155,842, claim 4: An inter prediction method comprising: determining a prediction mode of a current block as an inter prediction mode; generating a motion vector of the current block; determining motion vector refinement information of the current block based on the motion vector of the current block; generating a prediction block of the current block based on the refined motion vector of the current block; reconstructing the current block based on the prediction block; and encoding the motion vector refinement information, wherein the motion vector refinement information is used to refine the motion vector of the current block in a decoding process, wherein the refined motion vector of the current block in the decoding process is derived by using a reconstructed block which is previously decoded from the current block in the decoding process, wherein the refinement of the motion vector of the current block in the decoding process is performed based on a block size of the current block, wherein the refinement of the motion vector of the current block is performed when bidirectional prediction is performed on the current block, and wherein the refinement of the motion vector of the current block is performed with sub-pel precision.
Current Application, claim 5: A non-transitory computer-readable recording medium storing a bitstream that is generated by an image encoding method with an encoding apparatus, wherein the image encoding method comprises: determining a prediction mode of a current block as an inter prediction mode; generating a motion vector of the current block; determining motion vector refinement information of the current block based on the motion vector of the current block; generating a prediction block of the current block based on the refined motion vector of the current block; generating a residual block of the current block based on the prediction block; reconstructing the current block based on the prediction block and the residual block; and encoding the motion vector refinement information, wherein the motion vector refinement information is used to refine the motion vector of the current block in a decoding process, wherein the refined motion vector of the current block in the decoding process is derived by using a reconstructed block which is previously decoded from the current block in the decoding process, wherein the refinement of the motion vector of the current block in the decoding process is performed based on a block size of the current block, wherein the refinement of the motion vector of the current block is performed when bidirectional prediction is performed on the current block, wherein the refinement of the motion vector of the current block is performed with sub-pel precision, and wherein the refined motion vector of the current block in the decoding process is derived based on the motion vector of the current block and an offset vector of the current block.
U.S. Patent No. 12,155,842, claim 5: A non-transitory computer-readable recording medium storing a bitstream that is generated by an image encoding method with an encoding apparatus, wherein the image encoding method comprises: determining a prediction mode of a current block as an inter prediction mode; generating a motion vector of the current block; determining motion vector refinement information of the current block based on the motion vector of the current block; generating a prediction block of the current block based on the refined motion vector of the current block; reconstructing the current block based on the prediction block; and encoding the motion vector refinement information, wherein the motion vector refinement information is used to refine the motion vector of the current block in a decoding process, wherein the refined motion vector of the current block in the decoding process is derived by using a reconstructed block which is previously decoded from the current block in the decoding process, wherein the refinement of the motion vector of the current block in the decoding process is performed based on a block size of the current block, wherein the refinement of the motion vector of the current block is performed when bidirectional prediction is performed on the current block, and wherein the refinement of the motion vector of the current block is performed with sub-pel precision.
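For orientation only, the decoding flow common to both claim sets can be sketched as follows. This is the editor's illustrative reading, not part of the prosecution record; the names, the quarter-pel scale, and the `min_refine_size` threshold are hypothetical placeholders, and the claims recite only the high-level steps, not any particular implementation.

```python
# Illustrative sketch of the claimed inter-prediction refinement gates.
# All names, thresholds, and units below are hypothetical placeholders;
# the claims do not specify concrete values.

MV_SCALE = 4  # assumed quarter-pel units: 1 integer pel == 4 sub-pel steps

def refine_motion_vector(mv, offset, block_w, block_h,
                         is_bidirectional, min_refine_size=8):
    """Refine mv by an offset vector, gated on bidirectional prediction
    and block size, at sub-pel precision (quarter-pel units assumed)."""
    if not is_bidirectional:
        return mv  # refinement performed only when bi-prediction is used
    if block_w < min_refine_size or block_h < min_refine_size:
        return mv  # refinement gated on the block size of the current block
    # sub-pel precision: both mv and offset are in quarter-pel units
    return (mv[0] + offset[0], mv[1] + offset[1])

def decode_block(mv, refinement_info, block_w, block_h, is_bidirectional):
    """High-level decode step: refined MV = MV + signalled offset vector;
    prediction and residual reconstruction would follow."""
    offset = refinement_info["offset"]  # acquired refinement information
    return refine_motion_vector(mv, offset, block_w, block_h,
                                is_bidirectional)
```

With these placeholder values, a 16x16 bi-predicted block has its motion vector adjusted by the offset, while a uni-predicted or undersized block keeps the unrefined vector.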
Allowable Subject Matter
4. After analyzing the current application, the examiner concluded that the novelty of the current application lies in acquiring motion vector correction related information about a current block. A motion vector of the current block is restored, and motion compensation for the current block is performed based on the motion vector. The motion compensation for the current block is then re-executed by using the corrected motion vector. The motion vector of the current block is amended by using a motion compensation result or the motion vector correction related information. Intra prediction is performed based on a reference pixel of the current block.
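The refinement-by-reconstruction idea summarized above can be sketched as follows. This is the editor's illustration only, not the claimed method or part of the record: the SAD cost, the small search range, and every name (`correct_motion_vector`, `predict`, `template`) are hypothetical placeholders standing in for "re-executing motion compensation and picking the corrected motion vector that best matches previously decoded samples."

```python
# Hypothetical sketch: decoder-side motion vector correction driven by
# already-reconstructed samples. Cost function, search range, and names
# are illustrative placeholders, not the claimed technique.

def sad(a, b):
    """Sum of absolute differences between two equal-size sample lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def correct_motion_vector(mv, template, predict, search_range=1):
    """Re-execute motion compensation over a small offset neighborhood and
    keep the candidate whose prediction best matches a template of
    reconstructed samples."""
    best_mv, best_cost = mv, sad(predict(mv), template)
    for dx in range(-search_range, search_range + 1):
        for dy in range(-search_range, search_range + 1):
            cand = (mv[0] + dx, mv[1] + dy)
            cost = sad(predict(cand), template)
            if cost < best_cost:  # strictly better match only
                best_mv, best_cost = cand, cost
    return best_mv
```

In this toy form, `predict` is any callable that performs motion compensation for a candidate vector; the corrected vector is simply the candidate minimizing the mismatch against decoded content.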
The prior art of record, in particular Jeon et al. (US 2015/0036749 A1) in view of Bottreau et al. (US 2002/0159518 A1), does not disclose, with respect to claims 1, 4, and 5, wherein the refinement of the motion vector of the current block in the decoding process is performed based on a block size of the current block, wherein the refinement of the motion vector of the current block is performed when bidirectional prediction is performed on the current block, and wherein the refinement of the motion vector of the current block is performed with sub-pel precision, as claimed.
Rather, Jeon et al. discloses a method involving receiving prediction mode information, interpolating information, and a residual of a current block. An interpolating pixel is reconstructed using the interpolating information and neighbor blocks. The current block is reconstructed using the interpolating pixel, the prediction mode information, and the residual of the current block, where the interpolating information is generated based on a location of the current block and comprises flag information indicating whether the current block is located on a boundary area of a picture.
Similarly, Bottreau et al. discloses that an error residual image is decoded by matching pursuit and reconstructed by means of the encoded atoms (MP1), and is then added to the motion-compensated image (Nc1) corresponding to the current level layer image to produce the enhanced image (Nc'1). The new error residual image is used to refine the current level mesh toward mesh1, which is then taken as input for the next level, while the information concerning the mesh distance is contained in motion vectors (MV1) representing vertex displacements, and new nodes are transmitted at each level.
Conclusion
5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRFAN HABIB whose telephone number is (571)270-7325. The examiner can normally be reached Mon-Th 9AM-7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at (571) 272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Irfan Habib/Examiner, Art Unit 2485