DETAILED ACTION
This Office Action is in response to the application filed on January 16, 2025. Claims 1-3 are pending and are examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Drawings
The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, “wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block” must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Interpretation
The independent claims, as filed, contain the language “wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block”. The written description support that the Examiner finds for this limitation is in Applicant’s ¶535, which describes that when the reference picture of a spatial candidate is not identical to the reference picture of a current/target block (which may be a co-located picture containing a co-located block, as detailed in Applicant’s ¶186), e.g., based on their POCs being different, the motion vector of the spatially adjacent neighboring block may be scaled to derive the motion vector of the current block. Accordingly, the Examiner interprets this claim language to indicate that the motion vector of the spatially adjacent neighboring block is scaled to derive the motion vector of the current block when the reference picture of the spatially adjacent neighboring block is different from the reference picture of the current/target block, e.g., their POCs are different, and otherwise (when they are the same) is not scaled.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-3 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 4 of U.S. Patent No. 12,256,062 (the ‘062 patent) in view of U.S. Patent No. 9,247,249 (“Chen 2”). Although the conflicting claims are not identical, they are not patentably distinct from each other because it would have been obvious to one of ordinary skill in the art at the time of filing that claims 1 and 4 of the ‘062 patent in view of Chen 2 and claims 1-3 of the instant application cover substantially the same subject matter.
The table below shows, using claim 1 as a representative sample, how each of these claims is rendered unpatentable by claims such as claim 1 of the ‘062 patent:
Instant Application 19/026,098
U.S. Patent No. 12,256,062
1. (Original) A method of decoding an image, the method comprising:
1. A method of decoding an image, the method comprising:
1. Limitation 1: obtaining residual information of a current block;
1. Limitation 2: determining a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block;
1. Limitation 4: … determining a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block…;
1. Limitation 3: deriving the merge candidate in the sub-block unit from a sub-block within the co-located block;
1. Limitation 5: deriving the merge candidate in the sub-block unit from a sub-block within the co-located block;
1. Limitation 4: generating a merge candidate list based on the merge candidate in the sub-block unit;
1. Limitation 6: generating a merge candidate list based on the merge candidate in the sub-block unit;
1. Limitation 5: deriving motion information in the sub-block unit based on the merge candidate list;
1. Limitation 6: deriving a prediction sample of the current block based on the motion information in the sub-block unit;
1. Limitation 7: deriving a residual sample of the current block based on the residual information; and
1. Limitation 8: generating a reconstructed picture based on the prediction sample and the residual sample;
1. Limitation 9: wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block;
1. Limitation 7: wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block;
1. Limitation 10: wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block.
1. Limitation 9: wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block.
The claims of the ‘062 patent do not explicitly recite obtaining residual information of a current block, deriving motion information based on a merge candidate list (e.g., in a sub-block unit), deriving prediction samples based on such motion information, deriving residual samples based on the residual information, and generating a reconstructed picture based on the prediction sample and residual sample. However, these were all well-known elements of coding systems at the time of filing (see, e.g., Chen 2 Fig. 3, items 80, “quantiz. coeff.”, 86, 88, “residual blocks” and 82, 83, 90, 92 and 11:17-21, 19:16-30, 20:5-22:33, 24:43-25:16). Accordingly, to one of ordinary skill in the art at the time of filing, it would have been obvious to have modified these claims to include such well-known elements; doing so would have been considered nothing more than the combination of prior art elements according to known methods to achieve predictable results. Thus, in view of the level of skill in the art as evidenced by Chen 2, the claims of the ‘062 patent and claims 1-3 of the instant application cover substantially the same subject matter.
Claims 1-3 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 of copending Application No. 19/026,143 (the ‘143 application) in view of the level of skill in the art as evidenced by Chen 2. Although the conflicting claims are not identical, they are not patentably distinct from each other because it would have been obvious to one of ordinary skill in the art at the time of filing that claims 1-3 of the ‘143 application in view of Chen 2 and claims 1-3 of the instant application cover substantially the same subject matter.
The table below shows, using claim 1 as a representative sample, how each of these claims is rendered unpatentable by claims such as claim 1 of the ‘143 application:
Instant Application 19/026,098
Co-Pending Application 19/026,143
1. (Original) A method of decoding an image, the method comprising:
1. A method of decoding an image, the method comprising:
1. Limitation 1: obtaining residual information of a current block;
1. Limitation 2: determining a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block;
1. Limitation 2: … determining a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block;
1. Limitation 3: deriving the merge candidate in the sub-block unit from a sub-block within the co-located block;
1. Limitation 3: deriving the merge candidate in the sub-block unit from a sub-block within the co-located block;
1. Limitation 4: generating a merge candidate list based on the merge candidate in the sub-block unit;
1. Limitation 4: generating a merge candidate list based on the merge candidate in the sub-block unit;
1. Limitation 5: deriving motion information in the sub-block unit based on the merge candidate list;
1. Limitation 6: deriving a prediction sample of the current block based on the motion information in the sub-block unit;
1. Limitation 7: deriving a residual sample of the current block based on the residual information; and
1. Limitation 8: generating a reconstructed picture based on the prediction sample and the residual sample;
1. Limitation 9: wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block;
1. Limitation 6: wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block;
1. Limitation 10: wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block.
1. Limitation 7: wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block.
The claims of the ‘143 application do not explicitly recite obtaining residual information of a current block, deriving motion information based on a merge candidate list (e.g., in a sub-block unit), deriving prediction samples based on such motion information, deriving residual samples based on the residual information, and generating a reconstructed picture based on the prediction sample and residual sample. However, these were all well-known elements of coding systems at the time of filing (see, e.g., Chen 2 Fig. 3, items 80, “quantiz. coeff.”, 86, 88, “residual blocks” and 82, 83, 90, 92 and 11:17-21, 19:16-30, 20:5-22:33, 24:43-25:16). Accordingly, to one of ordinary skill in the art at the time of filing, it would have been obvious to have modified these claims to include such well-known elements; doing so would have been considered nothing more than the combination of prior art elements according to known methods to achieve predictable results. Thus, in view of the level of skill in the art as evidenced by Chen 2, the claims of the ‘143 application and claims 1-3 of the instant application cover substantially the same subject matter.
Claims 1-3 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 of copending Application No. 19/026,210 (the ‘210 application) in view of Chen 2. Although the conflicting claims are not identical, they are not patentably distinct from each other because it would have been obvious to one of ordinary skill in the art at the time of filing that claims 1-3 of the ‘210 application in view of Chen 2 and claims 1-3 of the instant application cover substantially the same subject matter.
The table below shows, using claim 1 as a representative sample, how each of these claims is rendered unpatentable by claims such as claim 1 of the ‘210 application:
Instant Application 19/026,098
Co-Pending Application 19/026,210
1. (Original) A method of decoding an image, the method comprising:
1. A method of decoding an image, the method comprising:
1. Limitation 1: obtaining residual information of a current block;
1. Limitation 2: determining a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block;
1. Limitation 3: … determining a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block;
1. Limitation 3: deriving the merge candidate in the sub-block unit from a sub-block within the co-located block;
1. Limitation 4: deriving the merge candidate in the sub-block unit from a sub-block within the co-located block;
1. Limitation 4: generating a merge candidate list based on the merge candidate in the sub-block unit;
1. Limitation 5: generating a merge candidate list based on the merge candidate in the sub-block unit;
1. Limitation 5: deriving motion information in the sub-block unit based on the merge candidate list;
1. Limitation 6: deriving motion information in the sub-block unit based on the merge candidate list …;
1. Limitation 6: deriving a prediction sample of the current block based on the motion information in the sub-block unit;
1. Limitation 7: deriving a residual sample of the current block based on the residual information; and
1. Limitation 8: generating a reconstructed picture based on the prediction sample and the residual sample;
1. Limitation 9: wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block;
1. Limitation 8: wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block;
1. Limitation 10: wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block.
1. Limitation 9: wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block.
The claims of the ‘210 application do not explicitly recite obtaining residual information of a current block, deriving a prediction sample of the current block based on motion information (e.g., in a sub-block unit), deriving residual samples based on the residual information, and generating a reconstructed picture based on the prediction sample and residual sample. However, these were all well-known elements of coding systems at the time of filing (see, e.g., Chen 2 Fig. 3, items 80, “quantiz. coeff.”, 86, 88, “residual blocks” and 82, 83, 90, 92 and 11:17-21, 19:16-30, 20:5-22:33, 24:43-25:16). Accordingly, to one of ordinary skill in the art at the time of filing, it would have been obvious to have modified these claims to include such well-known elements; doing so would have been considered nothing more than the combination of prior art elements according to known methods to achieve predictable results. Thus, in view of the level of skill in the art as evidenced by Chen 2, the claims of the ‘210 application and claims 1-3 of the instant application cover substantially the same subject matter.
Claims 1-3 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 of copending Application No. 19/026,277 (the ‘277 application) in view of Chen 2. Although the conflicting claims are not identical, they are not patentably distinct from each other because it would have been obvious to one of ordinary skill in the art at the time of filing that claims 1-3 of the ‘277 application in view of Chen 2 and claims 1-3 of the instant application cover substantially the same subject matter.
The table below shows, using claim 1 as a representative sample, how each of these claims is rendered unpatentable by claims such as claim 1 of the ‘277 application:
Instant Application 19/026,098
Co-Pending Application 19/026,277
1. (Original) A method of decoding an image, the method comprising:
1. A method of decoding an image, the method comprising:
1. Limitation 1: obtaining residual information of a current block;
1. Limitation 2: determining a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block;
1. Limitation 2: determining a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block;
1. Limitation 3: deriving the merge candidate in the sub-block unit from a sub-block within the co-located block;
1. Limitation 3: deriving the merge candidate in the sub-block unit from a sub-block within the co-located block;
1. Limitation 4: generating a merge candidate list based on the merge candidate in the sub-block unit;
1. Limitation 4: generating a merge candidate list based on the merge candidate in the sub-block unit;
1. Limitation 5: deriving motion information in the sub-block unit based on the merge candidate list;
1. Limitation 5: deriving motion information in the sub-block unit based on the … merge candidate list;
1. Limitation 6: deriving a prediction sample of the current block based on the motion information in the sub-block unit;
1. Limitation 6: deriving a prediction sample of the current block based on the motion information in the sub-block unit;
1. Limitation 7: deriving a residual sample of the current block based on the residual information; and
1. Limitation 8: generating a reconstructed picture based on the prediction sample and the residual sample;
1. Limitation 9: wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block;
1. Limitation 7: wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block;
1. Limitation 10: wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block.
1. Limitation 8: wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block.
The claims of the ‘277 application do not explicitly recite obtaining residual information of a current block, deriving residual samples based on the residual information, and generating a reconstructed picture based on the prediction sample and residual sample. However, these were all well-known elements of coding systems at the time of filing (see, e.g., Chen 2 Fig. 3, items 80, “quantiz. coeff.”, 86, 88, “residual blocks” and 82, 83, 90, 92 and 11:17-21, 19:16-30, 20:5-22:33, 24:43-25:16). Accordingly, to one of ordinary skill in the art at the time of filing, it would have been obvious to have modified these claims to include such well-known elements; doing so would have been considered nothing more than the combination of prior art elements according to known methods to achieve predictable results. Thus, in view of the level of skill in the art as evidenced by Chen 2, the claims of the ‘277 application and claims 1-3 of the instant application cover substantially the same subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2019/0268611 (“Chen”), which corresponds to a priority application dated February 2018, in view of Chen 2. Examiner notes that although the current application claims priority to foreign applications filed March 2018 and September 2018, Applicant has not shown that such priority applications support the limitations of the claims in the manner required by 35 U.S.C. 112(a). Accordingly, for the purposes of this Action, the effective filing date is presumed to be that of the parent PCT application, which was filed March 13, 2019.
With respect to claim 1, Chen discloses the invention substantially as claimed, including
A method of decoding an image (see Abstract, Figs. 9-10, ¶¶62, 88-89, describing a decoder that decodes an image), the method comprising:
obtaining residual information of a current block (see Fig. 9, item “residual motion data”, ¶¶80, 82-83, 91, describing that the bitstream sent from the encoder to the decoder includes coded residual signal/residual blocks, i.e., residual information of a current block);
determining a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block (see ¶¶63-72, describing determining a co-located location/sub-PU/block in a collocated picture, i.e., co-located image, based on an initial motion vector, i.e., motion information, of a spatially neighboring block, i.e., a block that is spatially adjacent to the current block);
deriving the merge candidate in the sub-block unit from a sub-block within the co-located block (see ¶¶6, 35-36, 48, 63-73, 106, describing deriving merge candidates, e.g., sub-PU TMVP candidates, in units of sub-PUs from a sub-block within the co-located block);
generating a merge candidate list based on the merge candidate in the sub-block unit (see ¶¶6, 33, 35-36, 48, 63, 73, 102, 106, describing generating a merge candidate list based on the merge candidates in sub-block units);
deriving motion information in the sub-block unit based on the merge candidate list (see Fig. 9, item “predicted MV”, ¶¶6, 33, 35-36, 48, 63, 73, 88, 94-95, 101-102, 106, describing deriving predicted MVs based on reference MVs of candidate predictors from previous video frames in a merge candidate list and that these MVs may be derived in the sub-block unit);
deriving a prediction sample of the current block based on the motion information in the sub-block unit (see Fig. 9, items 913, 917, 930, 975, “predicted pixel data”, “predicted MV”, “MC MV”, ¶94, describing producing predicted pixel data/motion compensated predictors, i.e., a prediction sample of the current block, based on predicted MVs from reconstructed reference frames, i.e., motion information in the sub-block unit);
deriving a residual sample of the current block based on the residual information (see Fig. 9, items 912, 916, and 919, ¶¶76-77, 83, 91, describing deriving a residual signal/residual block, i.e., a residual sample, of the current block by dequantizing, and inverse transforming received residual coefficients); and
generating a reconstructed picture based on the prediction sample and the residual sample (see Fig. 9, items 913, 917, 919, ¶91, describing adding predicted pixel data with the reconstructed residual signal to obtain decoded pixel data, i.e., generating a reconstructed picture based on the prediction sample and the residual sample),
wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block (see ¶66, describing that the initial motion vector, which as detailed above determines the sub-PU/sub-block within the co-located block, is based on availability checking in a collocated picture search around a center-position/center-pixel of the center sub-PU of the current PU, i.e., a center position of a current sub-block within the current block), and
wherein based on [] a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as [] the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block (see ¶¶67-73, describing that the initial motion vector, which as detailed above is the motion information of the block that is spatially adjacent to the current block, is used to determine a position of the sub-PU/sub-block within the collocated picture, and that this initial motion vector is scaled to derive the motion vector of the current block when the reference picture of the initial motion vector (a reference picture indicated by the motion information of the block that is spatially adjacent to the current block) is NOT, i.e., is different from, the current reference picture, i.e., the reference picture of the current/target block; see also the Claim Interpretation section above).
Chen does not explicitly recite determining whether two reference pictures are the same/different using their picture order count.
However, in the same field of endeavor, Chen 2 discloses that it was known to determine whether two reference pictures are the same/different using the images’/pictures’ respective picture order count (see 6:17-20, 7:24-40, describing determining whether two reference pictures are the same or different using their picture order counts).
At the time of filing, one of ordinary skill would have been familiar with determining whether two reference images are the same or different and would have understood, as evidenced by Chen 2, that this is often accomplished using picture order count (POC). Accordingly, to one of ordinary skill in the art at the time of filing, using such a metric to determine whether the reference picture of a spatially adjacent neighboring block is the same as or different from the reference picture of a current/target block would have represented nothing more than the combination of prior art elements according to known methods to achieve predictable results and/or the simple substitution of one known element for another to obtain predictable results.
Therefore, it would have been obvious to one having ordinary skill in the art at the time of filing to include a mechanism for determining whether the reference picture of the spatially adjacent neighboring block is the same or different from the reference picture of the current/target block by using their POCs in the coding system of Chen as taught by Chen 2.
With respect to claim 2, Chen discloses the invention substantially as claimed. As described above, Chen in view of Chen 2 discloses all the elements of independent claim 1, which correspond to those of claim 2. Chen/Chen 2 additionally discloses:
A method of encoding an image (see Abstract, Figs. 7-8, ¶¶62, 74-75, describing an encoder for encoding an image), the method comprising:
determining a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block (see citations and arguments with respect to corresponding element of claim 1 above and Chen ¶62, describing that the inter-prediction method may be implemented in an encoder or decoder);
deriving the merge candidate in the sub-block unit from a sub-block within the co-located block (see citations and arguments with respect to corresponding element of claim 1 above and Chen ¶62, describing that the inter-prediction method may be implemented in an encoder or decoder);
generating a merge candidate list based on the merge candidate in the sub-block unit (see citations and arguments with respect to corresponding element of claim 1 above and Chen ¶62, describing that the inter-prediction method may be implemented in an encoder or decoder);
deriving motion information in the sub-block unit based on the merge candidate list (see citations and arguments with respect to corresponding element of claim 1 above and Chen Fig. 7, items 735, 713, ¶¶6, 62, 74, 79-82, describing that the inter-prediction method may be implemented in an encoder or decoder and that the encoding method includes deriving motion vectors/MVs);
deriving a prediction sample of the current block based on the motion information in the sub-block unit (see citations and arguments with respect to corresponding element of claim 1 above and element above, describing that the motion information/vectors/MVs in the sub-block/sub-PU unit may be used to derive a predicted pixel data/inter prediction sample of the current block);
deriving a residual sample of the current block based on the prediction sample (see citations and arguments with respect to corresponding element of claim 1 above and Fig. 7, items 775, “residual motion data”, ¶¶80-82, describing deriving residual motion data, i.e., a residual sample, of the current block based on the prediction sample); and
encoding image information including residual information of the current block (see Fig. 7, items 790, 795, ¶83, describing encoding image information including the residual motion data, i.e., residual information, for the current block),
wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block (see citations and arguments with respect to corresponding element of claim 1 above), and
wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block (see citations and arguments with respect to corresponding element of claim 1 above).
The reasons for combining the cited prior art with respect to claim 1 also apply to claim 2.
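The center-location limitation recited in the claims above (determining the sub-block within the co-located block based on a center location of the current sub-block) can be sketched as follows. This is an illustrative assumption about how such a determination is commonly carried out, not code from the cited references; all identifiers are hypothetical.

```python
# Hypothetical sketch: the position of the sub-block within the co-located
# block is derived from the center location of the current sub-block within
# the current block, shifted by the neighbor's motion vector. Illustrative
# only; not an implementation from Chen or Chen 2.

def colocated_subblock_position(sub_x, sub_y, sub_w, sub_h, mv):
    """Center of the current sub-block plus the motion-vector offset."""
    center_x = sub_x + sub_w // 2
    center_y = sub_y + sub_h // 2
    return (center_x + mv[0], center_y + mv[1])

# A 4x4 current sub-block at (8, 8) with a neighbor motion vector of (2, -1):
# center is (10, 10), so the co-located position is (12, 9).
assert colocated_subblock_position(8, 8, 4, 4, (2, -1)) == (12, 9)
```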
With respect to claim 3, Chen discloses the invention substantially as claimed. As described above, Chen in view of Chen 2 discloses all the elements of independent claim 1. Chen/Chen 2 additionally discloses:
A transmission method of data for image (see Chen Figs. 7, 9, items 795, 995, ¶¶74, 76, 82-83, 88, 90, describing transmitting a bitstream including image data), the method comprising:
obtaining a bitstream of image information including residual information of a current block (see citations and arguments with respect to corresponding element of claim 1 above);
transmitting the data including the bitstream of the image information including residual information (see citations and arguments with respect to corresponding element of claim 2 above);
wherein it is determined a co-located block in a co-located image based on motion information of a block that is spatially adjacent to the current block, and a merge candidate in the sub-block unit is derived from a sub-block within the co-located block (see citations and arguments with respect to corresponding element of claim 1 above);
wherein a merge candidate list is generated based on the merge candidate in the sub-block unit, and motion information in the sub-block unit is derived based on the merge candidate list (see citations and arguments with respect to corresponding element of claim 1 above);
wherein a prediction sample of the current block is derived based on the motion information in the sub-block unit and the residual sample of the current block is derived based on the residual information (see citations and arguments with respect to corresponding element of claim 1 above),
wherein the sub-block within the co-located block is determined based on a center location of a current sub-block within the current block (see citations and arguments with respect to corresponding element of claim 1 above), and
wherein based on a picture order count of a reference picture indicated by the motion information of the block that is spatially adjacent to the current block being the same as a picture order count of the co-located image of the current block, the motion information of the block that is spatially adjacent to the current block is used to determine a position of the sub-block within the co-located block (see citations and arguments with respect to corresponding element of claim 1 above).
The reasons for combining the cited prior art with respect to claim 1 also apply to claim 3.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINDSAY JANE KILE UHL whose telephone number is (571)270-0337. The examiner can normally be reached 8:30 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached on (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LINDSAY J UHL/Primary Examiner, Art Unit 2481