Detailed Office Action
1. This communication is filed in response to the submission having a mailing date of 12/30/2024, for which a three (3) month Shortened Statutory Period for Response has been set.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Acknowledgements
3. Upon new entry, claims 1-3 appear pending for examination, of which claims 1-3 are the three (3) parallel independent claims of record.
Information Disclosure Statement
4. The Information Disclosure Statements (IDSs) submitted on 08/18/2025 and 06/02/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Drawings
5. The drawings submitted on 12/30/2024 have been accepted and considered under 37 CFR 1.121(d).
Double Patenting
6. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
6.1. A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
6.2. Individuals associated with the filing and prosecution of the instant patent application have a duty to disclose information within their knowledge as to other copending United States applications which are "material to patentability" of the application in question. See MPEP §2001.06(b) for more details.
6.3. Independent claims 1-3 of the instant Application 19/005,642, directed to a codec apparatus/methodology that employs a particular inter-prediction technique, are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over the analogous claims of the parent Applications 18/234,208 (now US 12,219,147); 17/835,454 (now US 11,818,358); and 17/561,614 (now US 11,503,301).
6.4. Although the conflicting claims are not identical, they are not patentably distinct from each other, because the claims are of similar scope and/or use similar variations of the same claim language.
Instant application: 19/005,642
Reference: 18/234,208 (now US 12,219,147)
Reference: 17/835,454 (now US 11,818,358)
Reference: 17/561,614 (now US 11,503,301)
1. An image decoding method performed by a decoding apparatus, the method comprising: obtaining image information including prediction related information and information on motion vector difference (MVD) from a bitstream; deriving an inter prediction mode for a current block based on the prediction related information; constructing motion vector predictor (MVP) candidate lists for the current block based on neighboring blocks of the current block; deriving MVPs for the current block based on the MVP candidate lists; deriving motion vectors (MVs) for the current block based on MVDs and the MVPs; and generating predicted samples for the current block based on motion information including the MVs and symmetric motion vector difference reference indices, wherein bi-prediction is applied to the current block, wherein the MVs include an MVL0 for an L0 prediction and an MVL1 for an L1 prediction, wherein the symmetric motion vector difference reference indices include a symmetric motion vector difference reference index L0 for the L0 prediction and a symmetric motion vector difference reference index L1 for the L1 prediction, wherein the information on the MVD includes information on an MVDL0 for the L0 prediction, wherein the MVDs include the MVDL0 for the L0 prediction and an MVDL1 for the L1 prediction, wherein the MVDL0 is derived based on the information on the MVDL0, wherein the MVDL1 is derived based on the MVDL0, wherein a size of the MVDL1 is the same as a size of the MVDL0, and a sign of the MVDL1 is opposite to a sign of the MVDL0, wherein the MVPs include an MVPL0 for the L0 prediction and an MVPL1 for the L1 prediction, wherein the MVL0 is derived based on a sum of the MVDL0 and the MVPL0, wherein the MVL1 is derived based on a sum of the MVDL1 and the MVPL1, and wherein the symmetric motion vector difference reference index L0 and the symmetric motion vector difference reference index L1 are derived based on picture order count (POC) differences between short-term reference pictures among reference pictures included in reference picture lists and a current picture including the current block.
2. An image encoding method performed by an encoding apparatus, the method comprising: deriving an inter prediction mode for a current block; constructing motion vector predictor (MVP) candidate lists for the current block based on neighboring blocks of the current block; deriving MVPs for the current block based on the MVP candidate lists; deriving motion information for the current block including motion vectors (MVs) and symmetric motion vector difference reference indices; generating prediction related information including information on the inter prediction mode and information on motion vector difference (MVD) for the current block; generating predicted samples for the current block based on the motion information; generating residual information based on the predicted samples; and encoding image information including the prediction related information and the residual information, wherein bi-prediction is applied to the current block, wherein the MVPs include an MVPL0 for an L0 prediction and an MVPL1 for an L1 prediction, wherein the MVs include an MVL0 for the L0 prediction and an MVL1 for the L1 prediction, wherein the symmetric motion vector difference reference indices include a symmetric motion vector difference reference index L0 for the L0 prediction and a symmetric motion vector difference reference index L1 for the L1 prediction, wherein the information on the MVD includes information on an MVDL0 for the L0 prediction, wherein the MVDL0 is derived by subtracting the MVPL0 from the MVL0, wherein an MVDL1 is derived by subtracting the MVPL1 from the MVL1, wherein a size of the MVDL1 is the same as a size of the MVDL0, and a sign of the MVDL1 is opposite to a sign of the MVDL0, and wherein the symmetric motion vector difference reference index L0 and the symmetric motion vector difference reference index L1 are derived based on picture order count (POC) differences between short-term reference pictures among reference pictures included in reference picture lists and a current picture including the current block.
Claim 3. A transmission method of data for an image, the method comprising: obtaining a bitstream for the image, wherein the bitstream is generated based on deriving an inter prediction mode for a current block, constructing motion vector predictor (MVP) candidate lists for the current block based on neighboring blocks of the current block, deriving MVPs for the current block based on the MVP candidate lists, deriving motion information for the current block including motion vectors (MVs) and symmetric motion vector difference reference indices, generating prediction related information including information on the inter prediction mode and information on motion vector difference (MVD) for the current block, generating predicted samples for the current block based on the motion information, generating residual information based on the predicted samples, and encoding image information including the prediction related information and the residual information; and transmitting the data comprising the bitstream, wherein bi-prediction is applied to the current block, wherein the MVPs include an MVPL0 for an L0 prediction and an MVPL1 for an L1 prediction, wherein the MVs include an MVL0 for the L0 prediction and an MVL1 for the L1 prediction, wherein the symmetric motion vector difference reference indices include a symmetric motion vector difference reference index L0 for the L0 prediction and a symmetric motion vector difference reference index L1 for the L1 prediction, wherein the information on the MVD includes information on an MVDL0 for the L0 prediction, wherein the MVDL0 is derived by subtracting the MVPL0 from the MVL0, wherein an MVDL1 is derived by subtracting the MVPL1 from the MVL1, wherein a size of the MVDL1 is the same as a size of the MVDL0, and a sign of the MVDL1 is opposite to a sign of the MVDL0, and wherein the symmetric motion vector difference reference index L0 and the symmetric motion vector difference reference index L1 are derived based on picture order count (POC) differences between short-term reference pictures among reference pictures included in reference picture lists and a current picture including the current block.
Claim 19. An image decoding method performed by a decoding apparatus, the method comprising: receiving image information including prediction related information and information on motion vector differences (MVDs) through a bitstream; deriving an inter prediction mode for a current block based on the prediction related information; constructing motion vector predictor (MVP) candidate lists for the current block based on neighboring blocks of the current block; deriving MVPs for the current block based on the MVP candidate lists; deriving motion information for the current block based on the information on the MVDs and the MVPs; and generating predicted samples for the current block based on the motion information, wherein bi-prediction is applied to the current block, wherein the motion information includes motion vectors (MVs) and symmetric motion vector difference reference indices, wherein the MVs include an MVL0 for an L0 prediction and an MVL1 for an L1 prediction, wherein the symmetric motion vector difference reference indices include a symmetric motion vector difference reference index L0 for the L0 prediction and a symmetric motion vector difference reference index L1 for the L1 prediction, wherein the information on the MVDs includes information on an MVDL0 for the L0 prediction, wherein information on an MVDL1 for the L1 prediction is derived based on the information on the MVDL0, wherein the MVDL0 is derived based on the information on the MVDL0, wherein the MVDL1 is derived based on the information on the MVDL1, wherein the MVPs include an MVPL0 for the L0 prediction and an MVPL1 for the L1 prediction, wherein the MVL0 is derived based on a sum of the MVDL0 and the MVPL0, wherein the MVL1 is derived based on a sum of the MVDL1 and the MVPL1, and wherein the symmetric motion vector difference reference index L0 and the symmetric motion vector difference reference index L1 are derived based on picture order count (POC) differences between short-term reference pictures among reference pictures included in reference picture lists and a current picture including the current block.
Claim 20. (New) An image encoding method performed by an encoding apparatus, the method comprising: deriving an inter prediction mode for a current block; constructing motion vector predictor (MVP) candidate lists for the current block based on neighboring blocks of the current block; deriving MVPs for the current block based on the MVP candidate lists; deriving motion information for the current block including motion vectors (MVs); generating prediction related information including information on the inter prediction mode and information on motion vector differences (MVDs) for the current block; generating predicted samples for the current block based on the motion information; generating residual information based on the predicted samples; and encoding image information including the prediction related information and the residual information, wherein the motion information includes symmetric motion vector difference reference indices, wherein the MVs include an MVL0 for an L0 prediction and an MVL1 for an L1 prediction, wherein the symmetric motion vector difference reference indices include a symmetric motion vector difference reference index L0 for the L0 prediction and a symmetric motion vector difference reference index L1 for the L1 prediction, wherein the information on the MVDs includes information on an MVDL0 for the L0 prediction, wherein information on an MVDL1 for the L1 prediction is derived based on the information on the MVDL0, wherein the MVL0 is derived based on the information on the MVDL0, wherein the MVL1 is derived based on the information on the MVDL1, wherein the MVPs include an MVPL0 for the L0 prediction and an MVPL1 for the L1 prediction, wherein the MVL0 is derived based on a sum of the MVDL0 and the MVPL0, wherein the MVL1 is derived based on a sum of the MVDL1 and the MVPL1, and wherein the symmetric motion vector difference reference index L0 and the symmetric motion vector difference reference index L1 are derived based on picture order count (POC) differences between short-term reference pictures among reference pictures included in reference picture lists and a current picture including the current block.
Claim 21. (New) A non-transitory computer readable storage medium storing a bitstream generated by the image encoding method of claim 20.
Claim 22. (New) A transmission method of data for an image, the method comprising: obtaining a bitstream for the image, wherein the bitstream is generated based on deriving an inter prediction mode for a current block, constructing motion vector predictor (MVP) candidate lists for the current block based on neighboring blocks of the current block, deriving MVPs for the current block based on the MVP candidate lists, deriving motion information for the current block including motion vectors (MVs), generating prediction related information including information on the inter prediction mode and information on motion vector differences (MVDs) for the current block, generating predicted samples for the current block based on the motion information, generating residual information based on the predicted samples, and encoding image information including the prediction related information and the residual information; and transmitting the data comprising the bitstream, wherein the motion information includes symmetric motion vector difference reference indices, wherein the MVs include an MVL0 for an L0 prediction and an MVL1 for an L1 prediction, wherein the symmetric motion vector difference reference indices include a symmetric motion vector difference reference index L0 for the L0 prediction and a symmetric motion vector difference reference index L1 for the L1 prediction, wherein the information on the MVDs includes information on an MVDL0 for the L0 prediction, wherein information on an MVDL1 for the L1 prediction is derived based on the information on the MVDL0, wherein the MVL0 is derived based on the information on the MVDL0, wherein the MVL1 is derived based on the information on the MVDL1, wherein the MVPs include an MVPL0 for the L0 prediction and an MVPL1 for the L1 prediction, wherein the MVL0 is derived based on a sum of the MVDL0 and the MVPL0, wherein the MVL1 is derived based on a sum of the MVDL1 and the MVPL1, and wherein the symmetric motion vector difference reference index L0 and the symmetric motion vector difference reference index L1 are derived based on picture order count (POC) differences between short-term reference pictures among reference pictures included in reference picture lists and a current picture including the current block.
Claim 19. An image decoding apparatus, comprising: an entropy decoder for receiving image information including prediction related information and information on motion vector differences (MVDs) through a bitstream; and a predictor for deriving an inter prediction mode for a current block based on the prediction related information, for constructing motion vector predictor (MVP) candidate lists for the current block based on neighboring blocks of the current block, for deriving MVPs for the current block based on the MVP candidate lists, for deriving motion information for the current block based on the information on the MVDs and the MVPs, and for generating predicted samples for the current block based on the motion information, wherein bi-prediction is applied to the current block, wherein the motion information includes motion vectors (MVs) and symmetric motion vector difference reference indices, wherein the MVs include an MVL0 for L0 prediction and an MVL1 for L1 prediction, wherein the symmetric motion vector difference reference indices include a symmetric motion vector difference reference index L0 for the L0 prediction and a symmetric motion vector difference reference index L1 for the L1 prediction, wherein the information on the MVDs includes information on an MVDL0 for the L0 prediction, wherein information on an MVDL1 for the L1 prediction is derived based on the information on the MVDL0, wherein the MVDL0 is derived based on the information on the MVDL0, wherein the MVDL1 is derived based on the information on the MVDL1, and wherein the symmetric motion vector difference reference index L0 and the symmetric motion vector difference reference index L1 are derived based on short-term reference pictures among reference pictures included in reference picture lists.
Claim 1. An image decoding method performed by a decoding apparatus, the method comprising: receiving image information including prediction related information and information on motion vector differences (MVDs) through a bitstream; deriving an inter prediction mode for a current block based on the prediction related information; constructing motion vector predictor (MVP) candidate lists for the current block based on neighboring blocks of the current block; deriving MVPs for the current block based on the MVP candidate lists; deriving motion information for the current block based on the information on the MVDs and the MVPs; and generating predicted samples for the current block based on the motion information, wherein bi-prediction is applied to the current block, wherein the motion information includes motion vectors (MVs) and symmetric motion vector difference reference indices, wherein the MVs include an MVL0 for L0 prediction and an MVL1 for L1 prediction, wherein the symmetric motion vector difference reference indices include a symmetric motion vector difference reference index L0 for the L0 prediction and a symmetric motion vector difference reference index L1 for the L1 prediction, wherein the information on the MVDs includes information on an MVDL0 for the L0 prediction, wherein information on an MVDL1 for the L1 prediction is derived based on the information on the MVDL0, wherein the MVDL0 is derived based on the information on the MVDL0, wherein the MVDL1 is derived based on the information on the MVDL1, and wherein the symmetric motion vector difference reference index L0 and the symmetric motion vector difference reference index L1 are derived based on short-term reference pictures among reference pictures included in reference picture lists.
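For orientation, the symmetric-MVD mechanism that the conflicting claims share (the MVDL1 mirrors the MVDL0 with equal magnitude and opposite sign, each motion vector is the sum of its predictor and its MVD, and the symmetric reference indices are chosen from the short-term reference pictures by POC difference) can be sketched as follows. This is an illustrative sketch only; all function and variable names are hypothetical and are not drawn from the applications or patents of record.

```python
def derive_symmetric_mvs(mvp_l0, mvp_l1, mvd_l0):
    """Given the two predictors and the signaled L0 MVD, derive both MVs.

    Only mvd_l0 is signaled; mvd_l1 is mirrored from it (same size,
    opposite sign), and each MV is the sum of its MVP and its MVD.
    """
    mvd_l1 = (-mvd_l0[0], -mvd_l0[1])
    mv_l0 = (mvp_l0[0] + mvd_l0[0], mvp_l0[1] + mvd_l0[1])
    mv_l1 = (mvp_l1[0] + mvd_l1[0], mvp_l1[1] + mvd_l1[1])
    return mv_l0, mv_l1


def select_symmetric_ref_indices(cur_poc, ref_list0, ref_list1):
    """Pick the symmetric reference index for each list by POC difference.

    Each ref_listX entry is a hypothetical (ref_idx, poc, is_short_term)
    tuple. Among the short-term reference pictures, L0 takes the closest
    picture preceding the current POC and L1 the closest one following it;
    None is returned for a list with no qualifying picture.
    """
    def closest(refs, sign):
        candidates = [(idx, poc) for idx, poc, short_term in refs
                      if short_term and sign * (cur_poc - poc) > 0]
        if not candidates:
            return None
        return min(candidates, key=lambda c: abs(cur_poc - c[1]))[0]

    return closest(ref_list0, +1), closest(ref_list1, -1)
```

For example, with mvp_l0 = (1, 2), mvp_l1 = (3, 4), and a signaled mvd_l0 = (5, -6), the mirrored mvd_l1 is (-5, 6), giving mv_l0 = (6, -4) and mv_l1 = (-2, 10).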
6.5. It would have been obvious to one of ordinary skill in the art, at the time the invention was made/filed, to arrive at the claims of the instant Application 19/005,642 in view of the above-cited reference(s) because, although the conflicting claims are not identical, they are not patentably distinct from each other: the claims are of similar scope and/or use similar variations of the same claim language.
Claim Objections
7. Independent claims 1-3 and their associated dependencies are objected to under the judicially created double patenting doctrine, as documented in section 6 above, but may be considered for allowance if properly rewritten and/or if a Terminal Disclaimer (TD) is timely filed in compliance with 37 CFR 1.321(c) or 1.321(d).
Prior Art Citations
8. The following list of prior art, made of record and not relied upon, is considered pertinent to applicant's disclosure:
8.1. Patent documentation
US 11,503,301 B2; Park et al.; H04N19/109; H04N19/52; H04N19/105;
US 11,818,358 B2; Park et al.; H04N19/573; H04N19/70; H04N19/139;
US 12,219,147 B2; Park et al.; H04N19/58; H04N19/105; H04N19/70;
8.2. Non-Patent Literature (NPL):
_ Symmetrical MVD mode; Chen; 2018;
_ Symmetrical mode for bi-prediction; Oct-2018;
_ Enhanced AMVP Mechanism Based Adaptive Motion Search Range Decision Algorithm for Fast HEVC; 2014;
CONCLUSIONS
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUIS PEREZ-FUENTES (luis.perez-fuentes@uspto.gov), whose telephone number is (571) 270-1168. The examiner can normally be reached Monday-Friday, 8am-5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, WILLIAM VAUGHN, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is (571) 272-3922. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated system, please call (800) 786-9199 (USA OR CANADA) or (571) 272-1000.
/LUIS PEREZ-FUENTES/
Primary Examiner, Art Unit 2481.