DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by ZHANG et al. (US 2019/0246143; cited in an IDS).
Regarding claim 1: ZHANG teaches an image decoding method [¶0006 teaches: a block is coded (e.g., encoded or decoded) with a block vector], comprising: deriving a block vector [¶0007 teaches: techniques that a video coder (e.g., video encoder or video decoder) may utilize to determine a block vector]; and deriving a prediction block for a current block by performing prediction based on the block vector [¶0009 teaches: determining one or more block vectors for one or more of the sub-blocks of the second color component that are predicted in intra-block copy (IBC) prediction mode based on one or more block vectors of one or more corresponding blocks of the plurality of blocks of the first color component, and coding the block of the second color component based on the one or more determined block vectors.].
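The intra-block copy (IBC) prediction cited above can be illustrated with a minimal sketch. This is not code from ZHANG; the function name and parameters are the examiner's illustrative assumptions: the prediction block is simply copied from the already-reconstructed area of the same picture, displaced by the block vector.

```python
import numpy as np

def ibc_predict(recon, x, y, bv_x, bv_y, w, h):
    """Illustrative intra-block-copy (IBC) prediction: the prediction block
    for the w-by-h current block at (x, y) is copied from the already-
    reconstructed area of the SAME picture, displaced by block vector
    (bv_x, bv_y). A real codec would also check that the referenced region
    lies entirely within the valid reconstructed area."""
    rx, ry = x + bv_x, y + bv_y
    return recon[ry:ry + h, rx:rx + w].copy()
```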
Regarding claim 2: the essence of the claim is taught above in the rejection of claim 1.
In addition, ZHANG teaches wherein the prediction is template matching prediction or intra template matching prediction [¶0149 teaches: The proposed decoder-side motion vector derivation (DMVD) is applied for merge mode of bi-prediction with one from the reference picture in the past and the other from the reference picture in the future; and The following describes the bilateral template matching. A bilateral template is generated as the weighted combination of the two prediction blocks, from the initial MV0 of list0 and MV1 of list1, respectively, as shown in FIG. 12. The template matching operation includes calculating cost measures between the generated template and the sample region (around the initial prediction block) in the reference picture.].
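The bilateral template matching operation quoted from ¶0149 can be sketched as follows. This is an illustrative simplification, not ZHANG's implementation; the function names, the equal weighting, and the SAD cost measure are assumptions (ZHANG refers generically to "cost measures"):

```python
import numpy as np

def bilateral_template(pred0, pred1, w0=0.5, w1=0.5):
    """Weighted combination of the two prediction blocks, from the initial
    MV0 of list0 and MV1 of list1, as described for bilateral template
    matching (equal weights assumed here)."""
    return w0 * pred0 + w1 * pred1

def template_matching_cost(template, ref_region):
    """Cost measure between the generated template and a candidate sample
    region in the reference picture; sum of absolute differences (SAD) is
    used here as one common choice."""
    return float(np.abs(template - ref_region).sum())
```

The decoder would evaluate this cost over candidate regions around the initial prediction block and keep the motion vector giving the minimum.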
Regarding claim 3: the essence of the claim is taught above in the rejection of claim 2.
In addition, ZHANG teaches wherein, when a luma component and a chroma component of the current block have independent block partitioning structures [See Figure 17A, elements 704A and 704B compared to Figure 17B, element 706], template matching prediction mode information for the luma component and template matching prediction mode information for the chroma component are independently decoded for the luma component and the chroma component, respectively [¶0169 teaches: When the partition tree is decoupled for different color components, the signaling of IBC mode and/or motion vectors of IBC for a block may only applied to one color component (e.g., luma component). Alternatively, furthermore, the block of a component (e.g., Cb or Cr) which is coded after a pre-coded component (e.g., Luma) always inherits the usage of IBC mode of a corresponding block of that pre-coded component.].
Regarding claim 4: the essence of the claim is taught above in the rejection of claim 2.
In addition, ZHANG teaches wherein: a block vector of the current block is stored in a motion information buffer [¶0241 teaches: In this case, the prediction information syntax elements may indicate a reference picture in DPB 314 from which to retrieve a reference block, as well as a motion vector identifying a location of the reference block in the reference picture relative to the location of the current block in the current picture.], the block vector is added to a block vector candidate list for a next block of the current block [¶0070 teaches: a motion vector itself may be referred in a way that it is assumed that it has an associated reference index. A reference index is used to identify a reference picture in the current reference picture list (RefPicList0 or RefPicList1)], and the prediction is template matching prediction or intra template matching prediction [¶0142 teaches: As shown in FIG. 10, template matching is used to derive motion information of the current block by finding the best match between a template].
Regarding claim 5: the essence of the claim is taught above in the rejection of claim 1.
In addition, ZHANG teaches wherein the block vector is a template matching block vector [¶0202 teaches: template matching is conducted with the reference picture identical to the current picture if the seed motion vector refers to the current picture. In one example, bi-literal matching cannot be conducted if at least one of the two motion vectors refers to the current picture, i.e., at least one of the two motion vectors is a block vector in IBC].
Regarding claim 6: the essence of the claim is taught above in the rejection of claim 1.
In addition, ZHANG teaches wherein scaling is performed on the template matching block vector [¶0254 teaches: For example, the video coder may determine that chroma sub-blocks 708A and 708B are to inherit the block vectors of luma blocks 704A and 704B, and in response, the video coder may determine the block vectors for chroma sub-blocks 708A and 708B based on the block vectors of luma blocks 704A and 704B; and ¶0255 teaches: In some examples, to determine the one or more block vectors for one or more sub-blocks, the video coder may be configured to scale the one or more block vectors of the one or more corresponding blocks of the plurality of blocks of the first color component based on a sub-sampling format of the first color component and the second color component.].
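The block-vector scaling described in ¶0255 can be illustrated with a minimal sketch. This is an assumption-labeled illustration, not ZHANG's implementation: for 4:2:0 sub-sampling, a luma block vector is halved in both dimensions to land on the chroma sample grid.

```python
def scale_block_vector_for_chroma(bv_x, bv_y, subsampling_x=2, subsampling_y=2):
    """Illustrative scaling of a luma block vector for a chroma sub-block
    based on the sub-sampling format of the two color components. The
    defaults model 4:2:0 (chroma sub-sampled by 2 horizontally and
    vertically); floor division keeps the result on the chroma grid."""
    return bv_x // subsampling_x, bv_y // subsampling_y
```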
Regarding claim 7: the claim is merely an image encoding method, which complements the image decoding method of claim 1. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 1 applies equally to this claim.
Regarding claim 8: the claim is merely an image encoding method, which complements the image decoding method of claim 2. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 2 applies equally to this claim.
Regarding claim 9: the claim is merely an image encoding method, which complements the image decoding method of claim 3. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 3 applies equally to this claim.
Regarding claim 10: the claim is merely an image encoding method, which complements the image decoding method of claim 4. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 4 applies equally to this claim.
Regarding claim 11: the claim is merely an image encoding method, which complements the image decoding method of claim 5. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 5 applies equally to this claim.
Regarding claim 12: the claim is merely an image encoding method, which complements the image decoding method of claim 6. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 6 applies equally to this claim.
Regarding claim 13: ZHANG teaches a non-transitory computer-readable storage medium [¶0207 teaches: The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof.] for storing a bitstream generated by the image encoding method of claim 7 [See rejection of claim 7 above.].
Regarding claim 14: ZHANG teaches a non-transitory computer-readable storage medium for storing a bitstream [¶0207 teaches: The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof.] for image decoding, wherein: the bitstream comprises prediction mode information, a block vector is derived using the prediction mode information [¶0007 teaches: techniques that a video coder (e.g., video encoder or video decoder) may utilize to determine a block vector], and a prediction block for a current block is derived by performing prediction based on the block vector [¶0009 teaches: determining one or more block vectors for one or more of the sub-blocks of the second color component that are predicted in intra-block copy (IBC) prediction mode based on one or more block vectors of one or more corresponding blocks of the plurality of blocks of the first color component, and coding the block of the second color component based on the one or more determined block vectors.].
Regarding claim 15: the claim is merely a non-transitory computer-readable storage medium for image decoding, wherein the prediction is the template matching prediction or intra template matching prediction of claim 2. Therefore, the rejection of claim 2 applies equally to this claim.
Regarding claim 16: the claim is merely a non-transitory computer-readable storage medium for storing a bitstream with an image decoding method of claim 3. Therefore, the rejection of claim 3 applies equally to this claim.
Regarding claim 17: the claim is merely a non-transitory computer-readable storage medium for storing a bitstream with an image decoding method of claim 4. Therefore, the rejection of claim 4 applies equally to this claim.
Regarding claim 18: the claim is merely a non-transitory computer-readable storage medium for storing a bitstream with an image decoding method of claim 5. Therefore, the rejection of claim 5 applies equally to this claim.
Regarding claim 19: the claim is merely a non-transitory computer-readable storage medium for storing a bitstream with an image decoding method of claim 6. Therefore, the rejection of claim 6 applies equally to this claim.
Conclusion
Prior art not relied upon: Please refer to the references listed on the attached PTO-892 that are not relied upon for the claim rejections detailed above. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
In particular, JANG (US 2022/0132137) teaches an image decoding method of deriving a first motion vector predictor for a first prediction direction of a current block and a second motion vector predictor for a second prediction direction, and deriving a first motion vector difference for the first prediction direction of the current block and a second motion vector difference for the second prediction direction using information on a motion vector difference;
ABE et al. (US 2024/0048691) teaches a decoder that derives first motion vectors for a first block obtained by splitting a picture, using a first inter prediction scheme, and generates a prediction image corresponding to the first block by referring to spatial gradients of luminance generated based on the first motion vectors;
LEE et al. (US 2022/0256187) teaches deriving an initial motion vector from a merge candidate list of a current block; and
LEE et al. (US 2020/0314444) teaches a step of deriving two motion vectors (MVs) for a current block, the two MVs including MVL0 for reference picture list L0 and MVL1 for reference picture list L1.
If the claimed invention is amended, Applicant is respectfully requested to indicate the portion(s) of the specification that dictate(s) the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Marnie Matt whose telephone number is (303)297-4255. The examiner can normally be reached Monday - Friday, 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARNIE A MATT/Primary Examiner, Art Unit 2485