DETAILED ACTION
This Office action is in response to the application filed on 10/24/2025. Claims 1-5 have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on Japanese application No. 2019-017444, filed on 02/01/2016.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 01/16/2025 and 07/11/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-5 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 of copending U.S. Patent Application No. 17/727,409. Although the claims at issue are not identical, they are not patentably distinct from each other because it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the claims cover substantially the same subject matter. The table below shows a sample of how the examined claims map onto the reference claims, such as claim 1 of U.S. Patent Application No. 17/727,409.
Instant Application
U.S. patent application No. 17/727,409
Claim 1: A video decoding method, applied to a video decoding device, comprising: decoding a motion vector difference absolute value from a coded data; deriving a difference motion vector by using the motion vector difference absolute value; decoding a flag, from the coded data, specifying an accuracy of a motion vector, in a case that the difference motion vector is not equal to zero; determining, by using the flag, a shift value used for a rounding process of a motion vector; generating a prediction image by using the motion vector based on the difference motion vector; and decoding a coding target image by adding a residual image to the prediction image or subtracting the residual image from the prediction image.
Claim 1: A prediction image generation device for generating a prediction image, the prediction image generation device comprising: an inter-prediction parameter decoding control circuitry that derives a difference motion vector by using a motion vector difference value and that decodes a flag, from a coded data, specifying an accuracy of a motion vector, in a case that the motion vector difference value is not equal to zero; and that decodes a prediction motion vector index; a prediction image generation circuitry that generates a prediction image by using a motion vector based on the difference motion vector and a prediction motion vector, wherein the inter-prediction parameter decoding control circuitry determines a shift value used for a rounding process of the prediction motion vector by using the flag, and the prediction motion vector is selected from a motion vector prediction list by using the prediction motion vector index.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kondo (US 2013/0077886) in view of Suzuki (US 2004/0240550) and further in view of Tsukagoshi (US 2015/0172690).
Regarding claim 1, Kondo discloses the following claim limitations: a video decoding method, applied to a video decoding device, comprising: decoding a motion vector difference absolute value from a coded data; deriving a difference motion vector by using the motion vector difference absolute value (Kondo, paragraphs 10 and 33 disclose the inter prediction process… a variable length decoding unit for decoding a coded stream to output a difference motion vector i.e. from motion information; claim 1 discloses an addition unit for adding the difference motion vector to the predictive motion vector to calculate the motion vector of the block to be decoded).
Kondo does not explicitly disclose the following claim limitations: determining, by using the flag, a shift value used for a rounding process of a motion vector; generating a prediction image by using the motion vector based on the difference motion vector; decoding a coding target image by adding a residual image to the prediction image or subtracting the residual image from the prediction image; and decoding a flag, from the coded data, specifying an accuracy of a motion vector, in a case that the difference motion vector is not equal to zero.
However, in the same field of endeavor Suzuki discloses more explicitly the following: determining, by using the flag, a shift value used for a rounding process of a motion vector; generating a prediction image by using the motion vector based on the difference motion vector; and decoding a coding target image by adding a residual image to the prediction image or subtracting the residual image from the prediction image (Suzuki, paragraph 95 and Figures 9-12 illustrate using a two dimensional table including the flag and a parameter for a block and "'mv_shift' in the expression (1) is a value shown in a table 81 shown in FIG. 9. The expression (1) shows that a value calculated by arithmetically shifting a value calculated by subtracting a predictive motion vector component PMV from a motion vector component MV rightward by a number shown in mv_shift is a differential motion vector component MVD"; in addition paragraph 98 discloses MV=(MVD<<mv_shift)+PMV (2); in addition paragraph 99 discloses a value calculated by adding a predictive motion vector component PMV to a value calculated by arithmetically shifting a differential motion vector component MVD leftward by a number shown in mv_shift is a decoded motion vector component MV. The decoded differential motion vector component MVD varies from a multiple of 1 to a multiple (corresponding to MV-PMV in the coding apparatus) of 1<<mv_shift by the arithmetic left bit shift process. The decoded motion vector component MV is regenerated to a value of original accuracy by adding PMV to this value).
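For illustration only, the shift-based arithmetic quoted from Suzuki's expressions (1) and (2) can be sketched as follows. This is a minimal sketch; the function names are illustrative and do not appear in the cited reference, and arithmetic shifts on negative values are assumed to follow floor semantics:

```python
def encode_mvd(mv: int, pmv: int, mv_shift: int) -> int:
    # Expression (1): MVD = (MV - PMV) >> mv_shift
    # The right shift reduces the transmitted difference to the
    # coarser accuracy selected by mv_shift.
    return (mv - pmv) >> mv_shift

def decode_mv(mvd: int, pmv: int, mv_shift: int) -> int:
    # Expression (2): MV = (MVD << mv_shift) + PMV
    # The left shift restores the original accuracy before the
    # predictor PMV is added back, as described in paragraph 99.
    return (mvd << mv_shift) + pmv

# Round trip with mv_shift = 1 (illustrative values):
mv, pmv, shift = 10, 2, 1
mvd = encode_mvd(mv, pmv, shift)   # (10 - 2) >> 1 = 4
print(decode_mv(mvd, pmv, shift))  # (4 << 1) + 2 = 10
```

As paragraph 99 notes, the decoded MVD only takes values that are multiples of 1<<mv_shift after the left shift, which is exactly what the round trip above shows.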
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings of Kondo with those of Suzuki, so that the motion vector predictor of Kondo switches the coding accuracy of a motion vector according to block size.
The motivation for doing so is to provide an efficient method for moving picture coding/decoding (Suzuki, paragraph 2).
Kondo and Suzuki do not explicitly disclose the following claim limitations: decoding a flag, from the coded data, specifying an accuracy of a motion vector, in a case that the difference motion vector is not equal to zero.
However, in the same field of endeavor Tsukagoshi discloses more explicitly the following: decoding a flag, from the coded data, specifying an accuracy of a motion vector, in a case that the difference motion vector is not equal to zero (Tsukagoshi, paragraph 93 discloses when the flag "constrained_to_half_pixel_MV_flag" is "1" as illustrated in FIG. 11, it is indicated that the accuracy of the motion vector MV is limited to 1/2 pixel accuracy. Further, when the "constrained_to_integer_pixel_MV_flag" is "1" as illustrated in FIG. 11, it is indicated that the accuracy of the motion vector MV is limited to integer pixel accuracy; also Tsukagoshi, paragraph 59 discloses receiver 200 executes decoding processing for the video stream and obtains display image data; in addition Kondo in paragraph 33 discloses the difference motion vector value which is not equal to zero).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings of Kondo and Suzuki with those of Tsukagoshi, so that the system of Kondo and Suzuki as outlined above includes a flag indicating the accuracy of a motion vector.
The motivation for doing so is to facilitate obtaining image data having a resolution suitable to its own display capability in a receiver not supporting a super-high definition service, in the case where image data of the super-high definition service is transmitted without scalable coding (Tsukagoshi, paragraph 5).
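For illustration only, the relationship between the Tsukagoshi accuracy flags and a shift value used in the rounding process can be sketched as below. This mapping is hypothetical: the flag names come from Tsukagoshi's paragraph 93, but the specific shift amounts (assuming quarter-pel base accuracy) are an assumption for illustration and are not stated in the reference:

```python
def mv_shift_from_flags(constrained_to_half_pixel_MV_flag: int,
                        constrained_to_integer_pixel_MV_flag: int) -> int:
    # Hypothetical mapping: the more constrained the accuracy,
    # the more fractional bits are dropped by the rounding shift.
    if constrained_to_integer_pixel_MV_flag:
        return 2   # integer-pel: drop two fractional bits
    if constrained_to_half_pixel_MV_flag:
        return 1   # half-pel: drop one fractional bit
    return 0       # full quarter-pel accuracy

print(mv_shift_from_flags(0, 1))  # 2 (integer-pel constraint)
print(mv_shift_from_flags(1, 0))  # 1 (half-pel constraint)
```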
Regarding claim 2, Kondo, Suzuki and Tsukagoshi disclose a video decoding device comprising: a prediction image generation device for generating a prediction image, the prediction image generation device comprising: an inter-prediction parameter decoding control circuitry that decodes a motion vector difference absolute value (Kondo, paragraphs 10 and 33 disclose the inter prediction process… a variable length decoding unit for decoding a coded stream to output a difference motion vector i.e. from motion information; claim 1 discloses an addition unit for adding the difference motion vector to the predictive motion vector to calculate the motion vector of the block to be decoded),
from a coded data and derives a difference motion vector by using the motion vector difference absolute value and that decodes a flag, from the coded data, specifying an accuracy of a motion vector, in a case that the difference motion vector is not equal to zero (Tsukagoshi, paragraph 93 discloses when the flag "constrained_to_half_pixel_MV_flag" is "1" as illustrated in FIG. 11, it is indicated that the accuracy of the motion vector MV is limited to 1/2 pixel accuracy. Further, when the "constrained_to_integer_pixel_MV_flag" is "1" as illustrated in FIG. 11, it is indicated that the accuracy of the motion vector MV is limited to integer pixel accuracy; also Tsukagoshi, paragraph 59 discloses receiver 200 executes decoding processing for the video stream and obtains display image data; in addition Kondo in paragraph 33 discloses the difference motion vector value which is not equal to zero),
and determines a shift value used for a rounding process of a motion vector by using the flag; and a prediction image generation circuitry that generates a prediction image by using the motion vector based on the difference motion vector, and an adder, for decoding a coding target image by adding a residual image to the prediction image or subtracting the residual image from the prediction image (Kondo, paragraph 23 discloses the block on a higher layer includes the block to be decoded and has a block size larger than that of the block to be decoded. The generated difference motion vector is added to the set predictive motion vector in order to calculate the motion vector of the block to be decoded; Suzuki, paragraph 95 and Figures 9-12 illustrate using a two dimensional table including the flag and a parameter for a block and "'mv_shift' in the expression (1) is a value shown in a table 81 shown in FIG. 9. The expression (1) shows that a value calculated by arithmetically shifting a value calculated by subtracting a predictive motion vector component PMV from a motion vector component MV rightward by a number shown in mv_shift is a differential motion vector component MVD"; in addition paragraph 98 discloses MV=(MVD<<mv_shift)+PMV (2); in addition paragraph 99 discloses a value calculated by adding a predictive motion vector component PMV to a value calculated by arithmetically shifting a differential motion vector component MVD leftward by a number shown in mv_shift is a decoded motion vector component MV. The decoded differential motion vector component MVD varies from a multiple of 1 to a multiple (corresponding to MV-PMV in the coding apparatus) of 1<<mv_shift by the arithmetic left bit shift process. The decoded motion vector component MV is regenerated to a value of original accuracy by adding PMV to this value). The same motivation that was utilized in claim 1 applies equally as well to claim 2.
Regarding claim 3, Kondo, Suzuki and Tsukagoshi disclose a video coding method, performed by a video coding device, comprising: determining a difference motion vector (Kondo, paragraphs 10 and 33 disclose the inter prediction process… a variable length decoding unit for decoding a coded stream to output a difference motion vector i.e. from motion information; claim 1 discloses an addition unit for adding the difference motion vector to the predictive motion vector to calculate the motion vector of the block to be decoded),
determining a flag specifying an accuracy of a motion vector in a case that the difference motion vector is not equal to zero (Tsukagoshi, paragraph 93 discloses when the flag "constrained_to_half_pixel_MV_flag" is "1" as illustrated in FIG. 11, it is indicated that the accuracy of the motion vector MV is limited to 1/2 pixel accuracy. Further, when the "constrained_to_integer_pixel_MV_flag" is "1" as illustrated in FIG. 11, it is indicated that the accuracy of the motion vector MV is limited to integer pixel accuracy; also Tsukagoshi, paragraph 59 discloses receiver 200 executes decoding processing for the video stream and obtains display image data; in addition Kondo in paragraph 33 discloses the difference motion vector value which is not equal to zero),
determining by using the flag, a shift value used for a rounding process of the motion vector; generating a prediction image by using the motion vector based on the difference motion vector; and coding a residual of the prediction image and a coding target image, the flag and a motion vector difference absolute value corresponding to the difference motion vector (Kondo, paragraph 23 discloses the block on a higher layer includes the block to be decoded and has a block size larger than that of the block to be decoded. The generated difference motion vector is added to the set predictive motion vector in order to calculate the motion vector of the block to be decoded; Suzuki, paragraph 95 and Figures 9-12 illustrate using a two dimensional table including the flag and a parameter for a block and "'mv_shift' in the expression (1) is a value shown in a table 81 shown in FIG. 9. The expression (1) shows that a value calculated by arithmetically shifting a value calculated by subtracting a predictive motion vector component PMV from a motion vector component MV rightward by a number shown in mv_shift is a differential motion vector component MVD"; in addition paragraph 98 discloses MV=(MVD<<mv_shift)+PMV (2); in addition paragraph 99 discloses a value calculated by adding a predictive motion vector component PMV to a value calculated by arithmetically shifting a differential motion vector component MVD leftward by a number shown in mv_shift is a decoded motion vector component MV. The decoded differential motion vector component MVD varies from a multiple of 1 to a multiple (corresponding to MV-PMV in the coding apparatus) of 1<<mv_shift by the arithmetic left bit shift process. The decoded motion vector component MV is regenerated to a value of original accuracy by adding PMV to this value). The same motivation that was utilized in claim 1 applies equally as well to claim 3.
Regarding claim 4, Kondo, Suzuki and Tsukagoshi disclose a video coding device comprising: a prediction image generation device for generating a prediction image, the prediction image generation device is configured to: determining a difference motion vector (Kondo, paragraphs 10 and 33 disclose the inter prediction process… a variable length decoding unit for decoding a coded stream to output a difference motion vector i.e. from motion information; claim 1 discloses an addition unit for adding the difference motion vector to the predictive motion vector to calculate the motion vector of the block to be decoded),
determining a flag specifying an accuracy of a motion vector in a case that the difference motion vector is not equal to zero (Tsukagoshi, paragraph 93 discloses when the flag "constrained_to_half_pixel_MV_flag" is "1" as illustrated in FIG. 11, it is indicated that the accuracy of the motion vector MV is limited to 1/2 pixel accuracy. Further, when the "constrained_to_integer_pixel_MV_flag" is "1" as illustrated in FIG. 11, it is indicated that the accuracy of the motion vector MV is limited to integer pixel accuracy; also Tsukagoshi, paragraph 59 discloses receiver 200 executes decoding processing for the video stream and obtains display image data; in addition Kondo in paragraph 33 discloses the difference motion vector value which is not equal to zero),
and determining by using the flag, a shift value used for a rounding process of the motion vector; and generating a prediction image by using the motion vector; and a coder, configured to code a residual of the prediction image and a coding target image, the flag and a motion vector difference absolute value corresponding to the difference motion vector (Kondo, paragraph 23 discloses the block on a higher layer includes the block to be decoded and has a block size larger than that of the block to be decoded. The generated difference motion vector is added to the set predictive motion vector in order to calculate the motion vector of the block to be decoded; Suzuki, paragraph 95 and Figures 9-12 illustrate using a two dimensional table including the flag and a parameter for a block and "'mv_shift' in the expression (1) is a value shown in a table 81 shown in FIG. 9. The expression (1) shows that a value calculated by arithmetically shifting a value calculated by subtracting a predictive motion vector component PMV from a motion vector component MV rightward by a number shown in mv_shift is a differential motion vector component MVD"; in addition paragraph 98 discloses MV=(MVD<<mv_shift)+PMV (2); in addition paragraph 99 discloses a value calculated by adding a predictive motion vector component PMV to a value calculated by arithmetically shifting a differential motion vector component MVD leftward by a number shown in mv_shift is a decoded motion vector component MV. The decoded differential motion vector component MVD varies from a multiple of 1 to a multiple (corresponding to MV-PMV in the coding apparatus) of 1<<mv_shift by the arithmetic left bit shift process. The decoded motion vector component MV is regenerated to a value of original accuracy by adding PMV to this value). The same motivation that was utilized in claim 1 applies equally as well to claim 4.
Regarding claim 5, Kondo, Suzuki and Tsukagoshi disclose a non-transitory computer-readable storage medium, having a computer program and a bitstream stored thereon, wherein the computer program, when executed by a processor, enables the processor to perform the operations of the video coding method of claim 3 to generate the bitstream (Suzuki, paragraph 149 discloses a right shift processor 103 shifts MV-PMV rightward. As a decoder executes processing reverse to the processing of an encoder, the memory 101 for storing a motion vector MV, a left shift processor 113 and an adder 112 are provided). The same motivation that was utilized in claim 1 applies equally as well to claim 5.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JERRY T JEAN BAPTISTE whose telephone number is (571) 272-6189. The examiner can normally be reached Monday through Friday, 9:00 AM to 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached on 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JERRY T JEAN BAPTISTE/Primary Examiner, Art Unit 2481