DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/24/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 16 and 18 are objected to because of the following informalities:
Claim 16 is an apparatus claim written as a dependent claim of method claim 8. If Applicant wishes claim 16 to be treated as an independent claim or as a dependent claim, an amendment or remark is required.
Claim 18 is a non-transitory computer-readable storage medium claim written as a dependent claim of method claim 1. If Applicant wishes claim 18 to be treated as an independent claim or as a dependent claim, an amendment or remark is required.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3, 5-7, 9, 11, 13-15 and 17-18 are rejected under 35 U.S.C. 102(a)(2) as anticipated by Seregin et al. (US 20150229955 A1, hereinafter “Seregin”, Applicant Admitted Prior Art (AAPA)).
Regarding claim 1. Seregin discloses a method for video decoding with geometric partition (0030 and 0103; Figures 1 and 3; ‘video decoder 30’), comprising:
partitioning video pictures into a plurality of coding units (CUs), at least one of which is further partitioned into two prediction units (PUs), a first PU and a second PU, including at least one geometric shaped PU (0048-0051 and 0095-0103; Figures 1-3; “[0103] … prediction module 100 may perform geometric partitioning to partition the video block of a CU among PUs of the CU along a boundary that does not meet the sides of the video block of the CU at right angles”);
constructing a first merge list comprising a plurality of candidates, based on a merge list construction process for regular merge prediction, wherein each one of the plurality of candidates is a motion vector (MV) comprising a List 0 MV, and/or a List 1 MV (0103 and 0135-0137; Figure 3; “[0135] If a PU is encoded in skip mode or motion information of the PU is encoded using merge mode, motion compensation module 162 may generate a merge candidate list for the PU. Motion compensation module 162 may then identify a selected merge candidate in the merge candidate list. After identifying the selected merge candidate in the merge candidate list, motion compensation module 162 may generate a predictive video block for the PU based on the one or more reference blocks associated with the motion information indicated by the selected merge candidate.”, “[0137] If motion information of a PU is encoded using AMVP mode, motion compensation module 162 may generate a list 0 MV predictor candidate list and/or a list 1 MV predictor candidate list. Motion compensation module 162 may then determine a selected list 0 MV predictor candidate and/or a selected list 1 MV predictor candidate. … motion compensation module 162 may determine a list 0 motion vector for the PU and/or a list 1 motion vector for the PU based on a list 0 MVD, a list 1 MVD, a list 0 motion vector specified by the selected list 0 MV predictor candidate, and/or a list 1 motion vector specified by the selected list 1 MV predictor ...”);
locating a first candidate for the first PU according to a first index (0075-0088, 0111 and 0137; Figure 3; selecting merge candidate from generated list; “[0137] If motion information of a PU is encoded using AMVP mode, motion compensation module 162 may generate a list 0 MV predictor candidate list and/or a list 1 MV predictor candidate list. Motion compensation module 162 may then determine a selected list 0 MV predictor candidate and/or a selected list 1 MV predictor candidate. Next, motion compensation module 162 may determine a list 0 motion vector for the PU and/or a list 1 motion vector for the PU based on a list 0 MVD, a list 1 MVD, a list 0 motion vector specified by the selected list 0 MV predictor candidate, and/or a list 1 motion vector specified by the selected list 1 MV predictor candidate. Motion compensation module 162 may then generate a predictive video block for the PU based on reference blocks associated with the list 0 motion vector and a list 0 reference picture index and/or a list 1 motion vector and a list 1 reference picture index.”);
locating a second candidate for the second PU according to a second index (0075-0088 and 0137; Figure 3; selecting merge candidate from generated list; “[0137] If motion information of a PU is encoded using AMVP mode, motion compensation module 162 may generate a list 0 MV predictor candidate list and/or a list 1 MV predictor candidate list. Motion compensation module 162 may then determine a selected list 0 MV predictor candidate and/or a selected list 1 MV predictor candidate. Next, motion compensation module 162 may determine a list 0 motion vector for the PU and/or a list 1 motion vector for the PU based on a list 0 MVD, a list 1 MVD, a list 0 motion vector specified by the selected list 0 MV predictor candidate, and/or a list 1 motion vector specified by the selected list 1 MV predictor candidate. Motion compensation module 162 may then generate a predictive video block for the PU based on reference blocks associated with the list 0 motion vector and a list 0 reference picture index and/or a list 1 motion vector and a list 1 reference picture index.”);
obtaining a first uni-prediction MV for the first PU by determining a List X1 MV of the first candidate according to a first binary reference list indication flag, wherein X1 takes a value of 0 or 1 and is indicated by the first binary reference list indication flag (0183-0186; Figure 8; “[0183] FIG. 8 is a flowchart that illustrates an example operation 400 for determining the motion information of a PU using AMVP mode. A video coder, such as video encoder 20 or video decoder 30, may perform operation 400 to determine the motion information of a PU using AMVP mode.”, “[0184] After the video coder starts operation 400, the video coder may determine whether inter prediction for the current PU is based on list 0 (402). If inter prediction for the current PU is based on list 0 ("YES" of 402), the video coder may generate a list 0 MV predictor candidate list for the current PU (404). The list 0 MV predictor candidate list may include two list 0 MV predictor candidates. Each of the list 0 MV predictor candidates may specify a list 0 motion vector.”); and
obtaining a second uni-prediction MV for the second PU by determining a List X2 MV of the second candidate according to a second binary reference list indication flag, wherein X2 takes a value of 0 or 1 and is indicated by the second binary reference list indication flag (0183 and 0186-0187; Figure 8; “[0183] FIG. 8 is a flowchart that illustrates an example operation 400 for determining the motion information of a PU using AMVP mode. A video coder, such as video encoder 20 or video decoder 30, may perform operation 400 to determine the motion information of a PU using AMVP mode.”, “[0186] Furthermore, after determining that inter prediction for the current PU is not based on list 0 ("NO" of 402) or after determining the list 0 motion vector for the current PU (408), the video coder may determine whether inter prediction for the current PU is based on list 1 or whether the PU is bi-directionally inter predicted (410). If inter prediction for the current PU is not based on list 1 and the current PU is not bi-directionally inter predicted ("NO" of 410), the video coder has finished determining the motion information of the current PU using AMVP mode. In response to determining that inter prediction for the current PU is based on list 1 or the current PU is bi-directionally inter predicted ("YES" of 410), the video coder may generate a list 1 MV predictor candidate list for the current PU (412). The list 1 MV predictor candidate list may include two list 1 MV predictor candidates. Each of the list 0 MV predictor candidates may specify a list 1 motion vector.”).
Regarding claim 3. Seregin discloses the method for video encoding with geometric partition as claimed in claim 1, wherein a uni-prediction zero MV is selected as the first uni-prediction MV upon determining that the List X1 MV of the first candidate does not exist (0172; Figure 7; Claim 6; “[0172] However, in response to determining that the number of merge candidates in the merge candidate list is less than the maximum number of merge candidates ("YES" of 312), the video coder may generate a zero-value merge candidate (314). If the current PU is in a P slice, the zero-value merge candidate may specify a list 0 motion vector that has a magnitude equal to zero. If the current PU is in a B slice and the current PU is not restricted to uni-directional inter prediction, the zero-value merge candidate may specify a list 0 motion vector that has a magnitude equal to zero and a list 1 motion vector that has a magnitude equal to zero. In some examples, the zero-value merge candidate may specify either a list 0 motion vector or a list 1 motion vector that has a magnitude equal to zero if the current PU is in a B slice and the current PU is restricted to uni-directional inter prediction. The video coder may then include the zero-value merge candidate in the merge candidate list (316).”); or
a uni-prediction zero MV is selected as the second uni-prediction MV upon determining that the List X2 MV of the second candidate does not exist (0172; Figure 7; Claim 6; “[0172] However, in response to determining that the number of merge candidates in the merge candidate list is less than the maximum number of merge candidates ("YES" of 312), the video coder may generate a zero-value merge candidate (314). If the current PU is in a P slice, the zero-value merge candidate may specify a list 0 motion vector that has a magnitude equal to zero. If the current PU is in a B slice and the current PU is not restricted to uni-directional inter prediction, the zero-value merge candidate may specify a list 0 motion vector that has a magnitude equal to zero and a list 1 motion vector that has a magnitude equal to zero. In some examples, the zero-value merge candidate may specify either a list 0 motion vector or a list 1 motion vector that has a magnitude equal to zero if the current PU is in a B slice and the current PU is restricted to uni-directional inter prediction. The video coder may then include the zero-value merge candidate in the merge candidate list (316).”).
Regarding claim 5. Seregin discloses the method for video encoding with geometric partition as claimed in claim 1, wherein the first and second binary reference list indication flags are determined based on values of the first and second indexes, respectively (0137, 0129-0130; Figure 3B; “[0129] Accordingly, in operation 35, the inter-prediction mode information receiver 32 according to an exemplary embodiment may parse first direction reference index information from the prediction unit region when an inter-prediction direction read from the inter-prediction mode information is not a second direction. A first direction reference picture may be determined among a first direction reference picture list, based on the parsed first direction reference index information. If the inter-prediction direction is not the second direction, the difference value information of the first motion vector may be parsed together with the first direction reference index, from the prediction unit region. If tmvp usability is approved in a picture parameter set, information about whether a first direction mvp is used in a current prediction unit may be parsed from the prediction unit region.”, “[0130] Also, in operation 37, the inter-prediction mode information receiver 32 according to an exemplary embodiment may parse second direction reference index information from the prediction unit region, if the inter-prediction direction read from the inter-prediction mode information is not a first direction. A second direction reference picture may be determined among a second direction reference picture list, based on the parsed second direction reference index information. If the inter-prediction direction is not the first direction, the difference value information of the second motion vector may be parsed together with the second direction reference index, from the prediction unit region. ...”).
Regarding claim 6. Seregin discloses the method for video encoding with geometric partition as claimed in claim 5, wherein the first and second binary reference list indication flags are coded as CABAC context bins (0094, 0123-0124 and 0143-0144; Claim 6; Figures 1 and 2; “[0124] As part of performing an entropy encoding operation on data, entropy encoding module 116 may select a context model. If entropy encoding module 116 is performing a CABAC operation, the context model may indicate estimates of probabilities of particular bins having particular values. In the context of CABAC, the term "bin" is used to refer to a bit of a binarized version of a syntax element.”, “[0094] Furthermore, different contexts may be used to entropy code the inter prediction mode indicator of a PU in a B slice if the PU is restricted to uni-directional inter prediction than if the PU is not restricted to uni-directional inter prediction. This may further increase coding efficiency.”, “claim 6. The method of claim 1, further comprising entropy decoding the inter prediction mode indicator using different contexts depending on whether the PU is restricted to uni-directional inter prediction”).
Regarding claim 7. Seregin discloses the method for video encoding with geometric partition as claimed in claim 1, wherein the first and second binary reference list indication flags are determined based on parities of index values of the first and second candidates in the first merge list, respectively (0143-0144; Figure 4; “[0143] FIG. 4 is a flowchart that illustrates an example motion compensation operation 200. A video coder, such as video encoder 20 or video decoder 30, may perform motion compensation operation 200. The video coder may perform motion compensation operation 200 to generate a predictive video block for a current PU.”, “[0144] After the video coder starts motion compensation operation 200, the video coder may determine whether the prediction mode for the current PU is skip mode (202). If the prediction mode for the current PU is not skip mode ("NO" of 202), the video coder may determine whether the prediction mode for the current PU is inter mode and that the inter prediction mode of the current PU is merge mode (204). If the prediction mode of the current PU is skip mode ("YES" of 202) or if the prediction mode of the current PU is inter mode and the inter prediction mode of the current PU is merge mode ("YES" of 204), the video coder may generate a merge candidate list for the current PU (206). The merge candidate list may include a plurality of merge candidates. Each of the merge candidates specifies a set of motion information, such as one or more motion vectors, one or more reference picture indexes, a list 0 prediction flag, and a list 1 prediction flag. The merge candidate list may include one or more uni-directional merge candidates or bi-directional merge candidates. In some examples, the video coder may perform the example operation described below with regard to FIG. 6 to generate the merge candidate list.”).
Regarding claims 9, 11 and 13-15. Apparatus claims 9, 11 and 13-15 are drawn to the apparatus corresponding to the method claimed in claims 1, 3 and 5-7. Therefore, apparatus claims 9, 11 and 13-15 correspond to method claims 1, 3 and 5-7 and are rejected for the same reasons of anticipation as set forth above.
Regarding claims 17-18. The method and non-transitory computer-readable storage medium claims 17-18 are drawn to subject matter corresponding to the method claimed in claim 1. Therefore, claims 17-18 correspond to method claim 1 and are rejected for the same reasons of anticipation as set forth above.
Furthermore, claims 17-18 are directed to a non-transitory computer-readable storage medium (CRM) storing a bitstream generated by a method for video encoding. The claims do not recite that the CRM contains executable instructions that, when executed, implement the encoding method. The bitstream is a product produced by the encoding method. Therefore, the claims are not limited to the recited steps, but only to the structure implied by the steps. (See MPEP 2113 - Product-by-Process claims.) Hence, the recited encoding method steps are given patentable weight only as to structures in the bitstream that are implied by the steps.
To be given patentable weight, the CRM and the bitstream (i.e. descriptive material) must be in a functional relationship. A functional relationship can be found where the descriptive material performs some function with respect to the CRM to which it is associated. See MPEP §2111.05(I)(A). When a claimed “computer-readable medium merely serves as a support for information or data, no functional relationship exists”. MPEP §2111.05(III).
The CRM storing the claimed bitstream in claims 17-18 merely serves as a support for the bitstream; there is no functional relationship between the stored bitstream and the CRM.
Therefore, the bitstream structure, the scope of which is implied by the method steps, is non-functional descriptive material and is given no patentable weight. MPEP §2111.05(III).
Thus, the claim scope is just a storage medium storing data and is anticipated by Seregin, which discloses a storage medium storing a bitstream.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2, 4, 10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Seregin as applied to claims 1 and 9 above, and in view of LEE (US 20160065987 A1, hereinafter “Lee”, AAPA).
Regarding claim 2. Seregin discloses the method for video encoding with geometric partition as claimed in claim 1, but fails to disclose wherein the second binary reference list indication flag is determined to have a contrary value relative to the first binary reference list indication flag upon determining that the first and second indexes are the same.
Lee, in the same field of endeavor, however, shows the method for video encoding with geometric partition, wherein the second binary reference list indication flag is determined to have a contrary value relative to the first binary reference list indication flag upon determining that the first and second indexes are the same (0129-0130; Figure 3B; “[0129] … the inter-prediction mode information receiver 32 according to an exemplary embodiment may parse first direction reference index information from the prediction unit region when an inter-prediction direction read from the inter-prediction mode information is not a second direction. A first direction reference picture may be determined among a first direction reference picture list, based on the parsed first direction reference index information. If the inter-prediction direction is not the second direction, the difference value information of the first motion vector may be parsed together with the first direction reference index, from the prediction unit region. If tmvp usability is approved in a picture parameter set, information about whether a first direction mvp is used in a current prediction unit may be parsed from the prediction unit region.”, “[0130] … reference index information from the prediction unit region, if the inter-prediction direction read from the inter-prediction mode information is not a first direction. A second direction reference picture may be determined among a second direction reference picture list, based on the parsed second direction reference index information. If the inter-prediction direction is not the first direction, ...”).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the video coding of Seregin to integrate and implement the determination of a reference picture for inter-prediction as taught by Lee, in order to reduce the transmitted bit amount by eliminating the transmission of unnecessary reference picture list related information, so that data parsing process steps can be reduced (Lee, 0021 and 0196).
Regarding claim 4. Seregin discloses the method for video encoding with geometric partition as claimed in claim 1, but fails to disclose wherein the second binary reference list indication flag is determined to have a contrary value relative to the first binary reference list indication flag upon determining that backward prediction is used for a current picture; and
the second binary reference list indication flag is determined to have a same value as the first binary reference list indication flag upon determining that backward prediction is not used for the current picture.
Lee, in the same field of endeavor, shows the method for video encoding with geometric partition, wherein the second binary reference list indication flag is determined to have a contrary value relative to the first binary reference list indication flag upon determining that backward prediction is used for a current picture (0129-0130 and 0143-0144; Figures 3-4); and
the second binary reference list indication flag is determined to have a same value as the first binary reference list indication flag upon determining that backward prediction is not used for the current picture (0129-0130 and 0143-0144; Figures 3-4; “[0129] Accordingly, in operation 35, the inter-prediction mode information receiver 32 according to an exemplary embodiment may parse first direction reference index information from the prediction unit region when an inter-prediction direction read from the inter-prediction mode information is not a second direction. A first direction reference picture may be determined among a first direction reference picture list, based on the parsed first direction reference index information. If the inter-prediction direction is not the second direction, the difference value information of the first motion vector may be parsed together with the first direction reference index, from the prediction unit region. ...”, “[0130] Also, in operation 37, the inter-prediction mode information receiver 32 according to an exemplary embodiment may parse second direction reference index information from the prediction unit region, if the inter-prediction direction read from the inter-prediction mode information is not a first direction. A second direction reference picture may be determined among a second direction reference picture list, based on the parsed second direction reference index information. If the inter-prediction direction is not the first direction, the difference value information of the second motion vector may be parsed together with the second direction reference index, from the prediction unit region. ...”).
The motivation to combine Lee with Seregin set forth in the rejection of claim 2 applies equally to the rejection of claim 4.
Regarding claims 10 and 12. Apparatus claims 10 and 12 are drawn to the apparatus corresponding to the method claimed in claims 2 and 4. Therefore, apparatus claims 10 and 12 correspond to method claims 2 and 4 and are rejected for the same reasons of obviousness as set forth above.
Claim Rejections - 35 USC § 103
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Seregin as applied to claims 1 and 9 above, and in view of LIM et al. (US 20160065987 A1, hereinafter “LIM”).
Regarding claim 8. Seregin discloses a method for video coding with geometric partition (0030 and 0103; Figures 1 and 3; ‘video decoder 30’), comprising:
obtaining video pictures, wherein the video pictures are partitioned into a plurality of coding units (CUs), at least one of which is further partitioned into two prediction units (PUs) (0048-0051 and 0095-0103; Figures 1-3; “[0103] … prediction module 100 may perform geometric partitioning to partition the video block of a CU among PUs of the CU along a boundary that does not meet the sides of the video block of the CU at right angles”);
constructing a first merge list comprising a plurality of candidates, based on a merge list construction process for regular merge prediction, wherein each one of the plurality of candidates is a motion vector (MV) comprising a List 0 MV, and/or a List 1 MV (0103 and 0135-0137; Figure 3; “[0135] If a PU is encoded in skip mode or motion information of the PU is encoded using merge mode, motion compensation module 162 may generate a merge candidate list for the PU. Motion compensation module 162 may then identify a selected merge candidate in the merge candidate list. After identifying the selected merge candidate in the merge candidate list, motion compensation module 162 may generate a predictive video block for the PU based on the one or more reference blocks associated with the motion information indicated by the selected merge candidate.”, “[0137] If motion information of a PU is encoded using AMVP mode, motion compensation module 162 may generate a list 0 MV predictor candidate list and/or a list 1 MV predictor candidate list. Motion compensation module 162 may then determine a selected list 0 MV predictor candidate and/or a selected list 1 MV predictor candidate. … motion compensation module 162 may determine a list 0 motion vector for the PU and/or a list 1 motion vector for the PU based on a list 0 MVD, a list 1 MVD, a list 0 motion vector specified by the selected list 0 MV predictor candidate, and/or a list 1 motion vector specified by the selected list 1 MV predictor ...”);
locating a first candidate for the first PU according to a first index (0075-0088, 0111 and 0137; Figure 3; selecting merge candidate from generated list; “[0137] If motion information of a PU is encoded using AMVP mode, motion compensation module 162 may generate a list 0 MV predictor candidate list and/or a list 1 MV predictor candidate list. Motion compensation module 162 may then determine a selected list 0 MV predictor candidate and/or a selected list 1 MV predictor candidate. Next, motion compensation module 162 may determine a list 0 motion vector for the PU and/or a list 1 motion vector for the PU based on a list 0 MVD, a list 1 MVD, a list 0 motion vector specified by the selected list 0 MV predictor candidate, and/or a list 1 motion vector specified by the selected list 1 MV predictor candidate. Motion compensation module 162 may then generate a predictive video block for the PU based on reference blocks associated with the list 0 motion vector and a list 0 reference picture index and/or a list 1 motion vector and a list 1 reference picture index.”);
locating a second candidate for the second PU according to a second index (0075-0088 and 0137; Figure 3; selecting merge candidate from generated list; “[0137] If motion information of a PU is encoded using AMVP mode, motion compensation module 162 may generate a list 0 MV predictor candidate list and/or a list 1 MV predictor candidate list. Motion compensation module 162 may then determine a selected list 0 MV predictor candidate and/or a selected list 1 MV predictor candidate. Next, motion compensation module 162 may determine a list 0 motion vector for the PU and/or a list 1 motion vector for the PU based on a list 0 MVD, a list 1 MVD, a list 0 motion vector specified by the selected list 0 MV predictor candidate, and/or a list 1 motion vector specified by the selected list 1 MV predictor candidate. Motion compensation module 162 may then generate a predictive video block for the PU based on reference blocks associated with the list 0 motion vector and a list 0 reference picture index and/or a list 1 motion vector and a list 1 reference picture index.”);
obtaining a first uni-prediction MV for the first PU by determining a List X1 MV of the first candidate according to a first binary reference list indication flag, wherein X1 takes a value of 0 or 1 and is indicated by the first binary reference list indication flag (0183-0186; Figure 8; “[0183] FIG. 8 is a flowchart that illustrates an example operation 400 for determining the motion information of a PU using AMVP mode. A video coder, such as video encoder 20 or video decoder 30, may perform operation 400 to determine the motion information of a PU using AMVP mode.”, “[0184] After the video coder starts operation 400, the video coder may determine whether inter prediction for the current PU is based on list 0 (402). If inter prediction for the current PU is based on list 0 ("YES" of 402), the video coder may generate a list 0 MV predictor candidate list for the current PU (404). The list 0 MV predictor candidate list may include two list 0 MV predictor candidates. Each of the list 0 MV predictor candidates may specify a list 0 motion vector.”); and
obtaining a second uni-prediction MV for the second PU by determining a List X2 MV of the second candidate according to a second binary reference list indication flag, wherein X2 takes a value of 0 or 1 and is indicated by the second binary reference list indication flag (0183 and 0186-0187; Figure 8; “[0183] FIG. 8 is a flowchart that illustrates an example operation 400 for determining the motion information of a PU using AMVP mode. A video coder, such as video encoder 20 or video decoder 30, may perform operation 400 to determine the motion information of a PU using AMVP mode.”, “[0186] Furthermore, after determining that inter prediction for the current PU is not based on list 0 ("NO" of 402) or after determining the list 0 motion vector for the current PU (408), the video coder may determine whether inter prediction for the current PU is based on list 1 or whether the PU is bi-directionally inter predicted (410). If inter prediction for the current PU is not based on list 1 and the current PU is not bi-directionally inter predicted ("NO" of 410), the video coder has finished determining the motion information of the current PU using AMVP mode. In response to determining that inter prediction for the current PU is based on list 1 or the current PU is bi-directionally inter predicted ("YES" of 410), the video coder may generate a list 1 MV predictor candidate list for the current PU (412). The list 1 MV predictor candidate list may include two list 1 MV predictor candidates. Each of the list 0 MV predictor candidates may specify a list 1 motion vector.”).
Seregin fails to disclose obtaining video pictures, at least one of which is further partitioned into two prediction units (PUs) along a line which is not a diagonal line.
LIM, however, in the same field of endeavor, discloses a method for video coding with geometric partition (0777-0783; Figures 18-25; “[0777] Video Encoding/Decoding Based on Geometric Partitioning”), comprising:
obtaining video pictures, wherein the video pictures are partitioned into a plurality of coding units (CUs), at least one of which is further partitioned into two prediction units (PUs) along a line which is not a diagonal line, a first PU and a second PU, including at least one geometric shaped PU (0777-0783, 0967 and 0987; Figures 18-25; “[0967] Alternatively, the generation of multiple partitioned regions may be configured to determine the prediction scheme to be used for each pixel or subblock of the target block, among multiple prediction schemes. Here, the prediction scheme to be used for each pixel or subblock may be determined depending on the location of the pixel or subblock in the target block, and the pixels (or subblocks) for which the same prediction scheme is identified to be used based on the determination may have a shape generated through geometric partitioning (e.g., a trapezoid, a pentagon, a triangle, or the like).”);
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the video coding of Seregin to incorporate geometric partitioning of the prediction unit along a line other than a diagonal line, as shown by LIM, in order to enable adaptive and flexible partitioning and improve video coding efficiency.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-7, 9-15 and 17-18 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-14 of U.S. Patent No. 12,301,790 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the pending claims are broader than the patented claims.
Instant Application No. 19065543
U.S. Patent No. 12,301,790 B2
1. A method for video encoding with geometric partition, comprising:
partitioning video pictures into a plurality of coding units (CUs), at least one of which is further partitioned into two prediction units (PUs), a first PU and a second PU, including at least one geometric shaped PU;
constructing a first merge list comprising a plurality of candidates, based on a merge list construction process for regular merge prediction, wherein each one of the plurality of candidates is a motion vector (MV) comprising a List 0 MV, and/or a List 1 MV;
locating a first candidate for the first PU according to a first index;
locating a second candidate for the second PU according to a second index;
obtaining a first uni-prediction MV for the first PU by determining a List X1 MV of the first candidate according to a first binary reference list indication flag, wherein X1 takes a value of 0 or 1 and is indicated by the first binary reference list indication flag; and
obtaining a second uni-prediction MV for the second PU by determining a List X2 MV of the second candidate according to a second binary reference list indication flag, wherein X2 takes a value of 0 or 1 and is indicated by the second binary reference list indication flag.
1. A method for video decoding with geometric partition, comprising:
… video pictures are partitioned into a plurality of coding units (CUs), wherein at least one of … partitioned into two prediction units (PUs) comprising a first PU and a second PU, … at least one geometric shaped PU;
constructing a first merge list comprising a plurality of candidates, based on a merge list construction process for regular merge prediction, wherein each one of the plurality of candidates is a motion vector (MV) comprising at least one of a List 0 MV or a List 1 MV;
locating a first candidate for the first PU … according to a first index;
locating a second candidate for the second PU … according to a second index;
obtaining a first uni-prediction MV for the first PU according to a first binary reference list indication flag by determining a List X1 MV of the first candidate … wherein X1 takes a value of 0 or 1 and is indicated by the first binary reference list indication flag; and
obtaining a second uni-prediction MV for the second PU according to a second binary reference list indication flag … determining that the List X2 MV of the second candidate …, wherein X2 takes a value of 0 or 1 and is indicated by the second binary reference list indication flag, …
2. The method for video encoding with geometric partition as claimed in claim 1, wherein the second binary reference list indication flag is determined to have a contrary value relative to the first binary reference list indication flag upon determining that the first and second indexes are the same.
2. The method for video decoding with geometric partition of claim 1, …
determining that the second binary reference list indication flag comprises a contrary value relative to the first binary reference list indication flag upon determining that the first and second indexes are the same.
3. The method for video encoding with geometric partition as claimed in claim 1, wherein a uni-prediction zero MV is selected as the first uni-prediction MV upon determining that the List X1 MV of the first candidate does not exist; or
a uni-prediction zero MV is selected as the second uni-prediction MV upon determining that the List X2 MV of the second candidate does not exist.
1. A method for video decoding with geometric partition …
obtaining a first uni-prediction MV … upon determining that the List X1 MV of the first candidate does not exist … ; and
obtaining a second uni-prediction MV … upon determining that the List X2 MV of the second candidate does not exist, …
4. The method for video encoding with geometric partition as claimed in claim 1,
wherein the second binary reference list indication flag is determined to have a contrary value relative to the first binary reference list indication flag upon determining that backward prediction is used for a current picture; and
the second binary reference list indication flag is determined to have a same value as the first binary reference list indication flag upon determining that backward prediction is not used for the current picture.
3. The method for video decoding with geometric partition of claim 1, further comprising:
determining that the second binary reference list indication flag comprises a contrary value relative to the first binary reference list indication flag upon determining that backward prediction is used for a current picture; and
determining that the second binary reference list indication flag comprises a same value as the first binary reference list indication flag upon determining that the backward prediction is not used for the current picture.
5. The method for video encoding with geometric partition as claimed in claim 1, wherein the first and second binary reference list indication flags are determined based on values of the first and second indexes, respectively.
1. A method for video decoding with geometric partition, …
wherein the first and second binary reference list indication flags are determined based on values of the first and second indexes, respectively.
6. The method for video encoding with geometric partition as claimed in claim 5, wherein the first and second binary reference list indication flags are coded as CABAC context bins.
4. The method for video decoding with geometric partition of claim 3, wherein the first and second binary reference list indication flags are coded as CABAC context bins.
7. The method for video encoding with geometric partition as claimed in claim 1, wherein the first and second binary reference list indication flags are determined based on parities of index values of the first and second candidates in the first merge list, respectively.
5. The method for video decoding with geometric partition of claim 1, …
determining the first and second binary reference list indication flags further based on parities of index values of the first and second candidates in the first merge list, respectively.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASMAMAW TARKO whose telephone number is (571) 272-9205. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASMAMAW G TARKO/ Patent Examiner, Art Unit 2482