DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/05/2024 is in compliance with the provisions of 37 CFR § 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 17 is rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
Claim 17 is rejected under 35 U.S.C. 101 as not falling within one of the four statutory categories of invention because the broadest reasonable interpretation of the claim in light of the specification encompasses transitory signals. Transitory signals, however, are not within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
However, claims directed to a non-transitory computer-readable medium may qualify as a manufacture and thus constitute patent-eligible subject matter. Therefore, amending the claim to recite a “non-transitory computer-readable medium” would overcome this rejection.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 17 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.
The claim is directed to “storing instructions” and/or “storing bitstreams,” but the claim does not recite any steps related to “storing instructions” or “storing bitstreams”; therefore, the scope of the claim is vague and indefinite.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over ZHANG et al. (US 20210258575, hereinafter ZHANG) in view of LIM et al. (US 20190104312, hereinafter LIM).
Regarding Claim 1, ZHANG discloses a method of decoding a current block, performed by a video decoding device ([0271] Decoder Side Motion Vector Refinement (DMVR) method), the method comprising:
decoding motion information for a first block partition of the current block from a bitstream ([0126] a corresponding block of the sub-CU is identified by the temporal vector in the motion source picture and the decoder uses motion vector MVx (corresponding to reference picture list X) to predict motion vector MVy for each sub-CU partition);
generating a first prediction block of the current block by using the motion information of the first block partition ([0126] for each sub-CU, the motion information of its corresponding block is used to derive the motion information for the sub-CU and is converted to the motion vectors and reference indices of the current sub-CU, in the same way as temporal motion vector predictor (TMVP) of the HEVC to predict motion vector MVy for each sub-CU; [0170], FIG. 16A and 16B, motion vectors (Mv1 and Mv2) of the triangular prediction units are stored in 4×4 grids);
decoding a merge candidate index for a second block partition of the current block from the bitstream ([0147] using parity to select uni-prediction motion information of the two partitions as merge indices denoted idx0 and idx1. For each merge candidate, if its reference picture list X=(idx0 & idx1) is true, the partition's motion information is set to the merge candidate's list X information. Otherwise, the partition's motion information is set to the merge candidate's list Y (Y=1−X) information resulting in the final motion information with uni-prediction as triangular prediction mode (TPM) candidates);
acquiring motion information of the second block partition from a geometric partition mode (GPM) merge list based on the merge candidate index and then generating a second prediction block of the current block by using the motion information of the second block partition ([0142], FIG. 13A and 13B, triangular prediction mode (TPM) [geometric partition mode (GPM)]: splits a CU into two triangular prediction units, in either diagonal or inverse diagonal direction. Each triangular prediction unit in the CU is inter-predicted using its own uni-prediction motion vector and reference frame index which are derived from one single uni-prediction candidate list - applied to merge modes; [0145] In contrast, for TPM coded block, two merge indices (with predictive coding) to a merge list are signaled).
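For illustration of the parity-based selection of uni-prediction motion information cited above (ZHANG [0147]), the following sketch shows one way the rule could operate. The function name, the candidate representation, and the fallback logic are hypothetical assumptions for explanatory purposes only and are not drawn from the claims or the cited references:

```python
# Hypothetical sketch: for a GPM/TPM partition with merge index idx, the
# preferred reference picture list X is taken from the parity of the index;
# if the merge candidate has no list-X motion, list Y = 1 - X is used.

def gpm_uni_motion(merge_cand, idx):
    """merge_cand maps list id (0 or 1) to motion info or None."""
    x = idx & 0x01                # parity of the merge index selects list X
    if merge_cand.get(x) is not None:
        return merge_cand[x]      # use the candidate's list-X motion info
    return merge_cand[1 - x]      # otherwise fall back to list-Y motion info

# Example: a bi-predicted merge candidate at even index 2 yields list-0 motion.
cand = {0: ((3, 1), 0), 1: ((-2, 4), 1)}
assert gpm_uni_motion(cand, 2) == ((3, 1), 0)
```

This sketch only restates the cited selection rule; the actual candidate-list construction in the references involves additional pruning and ordering steps not shown here.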
[media_image1.png: greyscale image, 348×456]
ZHANG does not explicitly disclose deriving a geometric block partition shape that partitions the current block into the first block partition and the second block partition by using the first prediction block; and blending the first prediction block and the second prediction block by using the geometric block partition shape to generate a final prediction block of the current block.
LIM teaches deriving a geometric block partition shape that partitions the current block into the first block partition and the second block partition by using the first prediction block ([0159], FIG. 11, (a) to (e) show a division structure; [0173], FIG. 12, (a), a reference block division structure); and
blending the first prediction block and the second prediction block by using the geometric block partition shape to generate a final prediction block of the current block ([0159], FIG. 11, integrating the (a) through (e) division structures into equivalent or less granular units as shown by the arrows; [0160]; [0173], FIG. 12, (b) an integrated block division structure derived by referring to the division structure of the reference block).
[media_image2.png: greyscale image, 248×562]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of LIM ([0161]), deriving a specific partition shape based on the first prediction and blending the two predicted parts into a final prediction, into the encoding and decoding system of ZHANG in order to derive the division structure of the current picture based on the reference block of a peripheral picture, thereby reducing the amount of data used to signal additional information for division of the current picture and improving overall decoding efficiency (LIM, [0011]).
Regarding Claim 2, ZHANG in view of LIM discloses the method of claim 1,
ZHANG discloses wherein the motion information of the first block partition includes geometric prediction direction information, motion vector predictor information, a reference picture index, and a motion vector difference ([0101], FIG. 8, AMVP exploits spatio-temporal correlation of motion vector with neighbouring PUs, which is used for explicit transmission of motion parameters. For each reference picture list, a motion vector candidate list is constructed by firstly checking availability of left, above temporally neighbouring PU positions, removing redundant candidates and adding zero vector to make the candidate list to be constant length. And, with merge index signalling, the index of the best motion vector candidate is encoded using truncated unary).
Regarding Claim 3, ZHANG in view of LIM discloses the method of claim 2,
ZHANG discloses wherein decoding the motion information for the first block partition includes: decoding a flag indicating an L0 or L1 reference picture list as the geometric prediction direction information; and decoding a flag indicating one of candidates in an advanced motion vector prediction (AMVP) list of the current block as the motion vector predictor information ([0077] A single reference picture list, List 0, is used for a P slice and two reference picture lists, List 0 and List 1 are used for B slices and reference pictures included in List 0/1 could be from past and future pictures in terms of capturing/display order).
Regarding Claim 4, ZHANG in view of LIM discloses the method of claim 3,
ZHANG discloses wherein decoding the motion information for the first block partition includes: decoding an index indicating a reference picture included in the L0 or L1 reference picture list based on the geometric prediction direction information as the reference picture index; and decoding the motion vector difference from the bitstream ([0171]-[0172], the bi-prediction motion vector is derived from Mv1 and Mv2 according to the following rules: 1) In the case that Mv1 and Mv2 have motion vector from different directions (L0 or L1), Mv1 and Mv2 are simply combined to form the bi-prediction motion vector. 2) In the case that both Mv1 and Mv2 are from the same L0 (or L1) direction; [0173] If the reference picture of Mv2 is the same as a picture in the L1 (or L0) reference picture list, Mv2 is scaled to the picture. Mv1 and the scaled Mv2 are combined to form the bi-prediction motion vector).
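The Mv1/Mv2 combination rules cited above (ZHANG [0171]-[0173]) can be sketched as follows. The data representation, function name, and the scaling placeholder are hypothetical assumptions made for illustration; the references' actual motion-vector scaling depends on picture-order-count distances not modeled here:

```python
# Hypothetical sketch of the cited bi-prediction derivation rules:
# Rule 1: if Mv1 and Mv2 come from different lists (L0/L1), combine directly.
# Rule 2: if both come from the same list, Mv2 may be scaled to a picture in
#         the other list and then combined; otherwise only Mv1 is kept.

def combine_for_biprediction(mv1, mv2, same_ref_in_other_list=False):
    """Each mv is (list_id, (x, y), ref_idx); list_id is 0 (L0) or 1 (L1)."""
    if mv1[0] != mv2[0]:
        return (mv1, mv2)                         # Rule 1: simple combination
    if same_ref_in_other_list:
        scaled_mv2 = (1 - mv2[0], mv2[1], mv2[2])  # placeholder for scaling
        return (mv1, scaled_mv2)                  # Rule 2: scale then combine
    return (mv1, None)                            # fall back to uni-prediction
```

For example, an L0 vector and an L1 vector are simply paired, whereas two L0 vectors combine only when Mv2's reference picture also appears in the L1 list.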
Regarding Claim 5, ZHANG in view of LIM discloses the method of claim 3,
ZHANG discloses wherein, when a predefined direction is used as the geometric prediction direction of the first block partition, decoding the geometric prediction direction information is omitted ([0004], geometric partitioning; the described methods may be applied to existing video coding standards, e.g., High Efficiency Video Coding (HEVC) and/or Versatile Video Coding (VVC) [in the Versatile Video Coding (VVC/H.266) standard, the Geometric Partitioning Mode (GPM) uses a syntax-efficient method for signaling motion information, including syntax omission: if the prediction direction (L0 or L1) is predefined by the GPM configuration for the first partition, the flag for the geometric prediction direction is not signaled in the bitstream]).
Regarding Claim 6, ZHANG in view of LIM discloses the method of claim 4,
ZHANG discloses wherein generating the first prediction block includes: generating the AMVP list; and acquiring a unidirectional motion vector predictor by using the geometric prediction direction information, the motion vector predictor information, and the AMVP list ([0170] The motion vectors (Mv1 and Mv2 in FIG. 16A and 16B) of the triangular prediction units are stored in 4×4 grids. For each 4×4 grid, either uni-prediction or bi-prediction motion vector is stored depending on the position of the 4×4 grid in the CU. As shown in FIGS. 16A-16B, uni-prediction motion vector, either Mv1 or Mv2, is stored for the 4×4 grid located in the non-weighted area (that is, not located at the diagonal edge)).
Regarding Claim 7, ZHANG in view of LIM discloses the method of claim 6,
ZHANG discloses wherein generating the first prediction block includes: generating a motion vector by adding the motion vector predictor and the motion vector difference; and generating the first prediction block by using the motion vector and the reference picture index ([0101] AMVP exploits spatio-temporal correlation of motion vector with neighbouring PUs, which is used for explicit transmission of motion parameters. For each reference picture list, a motion vector candidate list is constructed by firstly checking availability of left, above temporally neighbouring PU positions, removing redundant candidates and adding zero vector to make the candidate list to be constant length, see FIG. 8).
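The motion-vector reconstruction recited in this limitation, adding the motion vector predictor (MVP) to the signaled motion vector difference (MVD), is a component-wise addition and can be illustrated as follows; the function and variable names are hypothetical and not drawn from the claims or the cited references:

```python
# Hypothetical sketch of AMVP-style motion vector reconstruction: the decoded
# motion vector is the component-wise sum of the selected predictor (MVP)
# and the signaled motion vector difference (MVD).

def reconstruct_mv(mvp, mvd):
    """Return the motion vector as the component-wise sum MVP + MVD."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Example: predictor (4, -2) combined with signaled difference (1, 3)
mv = reconstruct_mv((4, -2), (1, 3))  # -> (5, 1)
```

The resulting vector, together with the decoded reference picture index, identifies the sample region used to form the first prediction block.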
Regarding Claim 8, ZHANG in view of LIM discloses the method of claim 1,
ZHANG discloses further comprising: generating a merge list of the current block; and generating the GPM merge list from the merge list, wherein each candidate in the GPM merge list includes unidirectional motion information ([0142], FIG. 13A and 13B, triangular prediction mode (TPM) [geometric partition mode (GPM)]: splits a CU into two triangular prediction units, in either diagonal or inverse diagonal direction. Each triangular prediction unit in the CU is inter-predicted using its own uni-prediction motion vector and reference frame index which are derived from one single uni-prediction candidate list - applied to merge modes; [0145] In contrast, for TPM coded block, two merge indices (with predictive coding) to a merge list are signaled).
Regarding Claim 9, ZHANG in view of LIM discloses the method of claim 1,
ZHANG discloses wherein deriving the geometric block partition shape includes: performing an arithmetic operation of searching for a boundary region within a region of the first prediction block to derive a bisecting boundary that divides the first prediction block into the first block partition and the second block partition; and selecting one of predefined geometric block partition shapes based on the bisecting boundary ([0142], FIG. 13A and 13B, triangular prediction mode (TPM) [geometric partition mode (GPM)]: splits a CU into two triangular prediction units, in either diagonal or inverse diagonal direction. Each triangular prediction unit in the CU is inter-predicted using its own uni-prediction motion vector and reference frame index which are derived from one single uni-prediction candidate list - applied to merge modes; [0145] In contrast, for TPM coded block, two merge indices (with predictive coding) to a merge list are signaled).
Regarding Claim 10, ZHANG in view of LIM discloses the method of claim 2,
ZHANG discloses wherein the motion information of the second block partition has a different prediction direction from the motion information for the first block partition, based on the geometric prediction direction information ([0171]-[0172], the bi-prediction motion vector is derived from Mv1 and Mv2 according to the following rules: 1) In the case that Mv1 and Mv2 have motion vector from different directions (L0 or L1), Mv1 and Mv2 are simply combined to form the bi-prediction motion vector. 2) In the case that both Mv1 and Mv2 are from the same L0 (or L1) direction; [0173] If the reference picture of Mv2 is the same as a picture in the L1 (or L0) reference picture list, Mv2 is scaled to the picture. Mv1 and the scaled Mv2 are combined to form the bi-prediction motion vector).
Regarding Claims 11-16, encoding method claims 11-16 recite limitations corresponding to the decoding method claimed in claims 1-6; the rejections set forth above are incorporated herein for the same reasons.
Regarding Claim 17, computer-readable medium claim 17 recites limitations corresponding to the decoding method claimed in claim 1; the rejection set forth above is incorporated herein for the same reasons.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samuel D Fereja whose telephone number is (469)295-9243. The examiner can normally be reached 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DAVID CZEKAJ can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMUEL D FEREJA/Primary Examiner, Art Unit 2487