DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner’s Note
The instant application has a lengthy prosecution history, and the examiner encourages the applicant to have an interview (telephonic or personal) with the examiner prior to filing a response to the instant Office action. Also, prior to the interview, the examiner encourages the applicant to present multiple possible claim amendments, so as to enable the examiner to identify claim amendments that will advance prosecution in a meaningful manner.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/18/2025 has been entered.
Applicant(s) Response to Official Action
The response filed on 12/18/2025 has been entered and made of record.
Response to Arguments/Amendments
Presented arguments have been fully considered, but are rendered moot in view of the new ground(s) of rejection necessitated by amendment(s) initiated by the applicant(s).
Claim Interpretation
Exemplary independent claim 1 recites:
in response to the target subblock comprising an intra-coded region and a further region, the intra-coded region comprising at least one intra-coded sample, obtaining intra mode information of the at least one intra-coded sample in the intra-coded region as the intra mode information of the target subblock,
wherein in response to the further region comprising an inter-coded region and the inter-coded region comprising at least one inter predicted samples, the method further comprises:
determining, based on coded information for the target video block, whether the intra mode information for the target subblock is equal to the intra mode information of the at least one intra predicted sample; and
storing the intra mode information for the target subblock based on the determination.
The claim elements “obtaining intra mode information of the at least one intra-coded sample in the intra-coded region as the intra mode information of the target subblock” and “the intra mode information for the target subblock is equal to the intra mode information of the at least one intra predicted sample” are being interpreted such that “the intra mode information of the target subblock” = “intra mode information of the at least one intra-coded sample in the intra-coded region”. Therefore, the claim elements above will be wholly interpreted as:
in response to the target subblock comprising an intra-coded region and a further region, wherein the further region comprising an inter-coded region and the inter-coded region comprising at least one inter predicted samples:
obtaining intra mode information of the at least one intra-coded sample in the intra-coded region as the intra mode information of the target subblock;
storing the intra mode information for the target subblock.
The Applicant did not specify what the introduced “coded information for the target video block” is. The Examiner will interpret “coded information for the target video block” as the “splitting information” (i.e., the coded information includes, but is not limited to, splitting information (such as GPM partition mode, and/or GPM partition angle, and/or GPM partition direction), and/or weight index, and/or the GPM block/subblock location, and/or the GPM block/subblock dimensions- section 24(4)- current specification) of the target block. The selected “splitting information” is defined at the beginning of the claim (i.e., the geometric partitioning tool) to have the block split as shown in fig. 26; the “obtaining intra mode information of the at least one intra-coded sample in the intra-coded region as the intra mode information of the target subblock” is by default based on the “splitting information”/“coded information”.
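For clarity of the record only, the wholly interpreted limitation above can be paraphrased as a short illustrative sketch (Python). The function name and the dictionary fields (`intra_region`, `further_region`, `"intra_mode"`, `"type"`) are invented for illustration and appear nowhere in the claims, the specification, or the cited references:

```python
def store_subblock_intra_mode(intra_region, further_region):
    """Return the intra mode stored for a GPM target subblock,
    per the interpretation above."""
    # The subblock's intra mode is taken directly from an intra-coded
    # sample in the intra-coded region...
    subblock_intra_mode = intra_region["intra_mode"]
    # ...so when the further region is inter-coded, the claimed
    # "determining" step compares the stored value with itself, the
    # equality holds by construction, and the same value is stored.
    if further_region.get("type") == "inter":
        assert subblock_intra_mode == intra_region["intra_mode"]
    return subblock_intra_mode
```

Under this interpretation, the claimed “determining” step adds no further condition: the stored value is the intra mode of the intra-coded sample in every case.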
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-5, 7, and 9-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Applicant has not pointed out where the amended claims are supported, nor does there appear to be a written description of the claim limitation ‘wherein in response to the further region comprising an inter-coded region and the inter-coded region comprising at least one inter predicted samples, the method further comprises: determining, based on coded information for the target video block, whether the intra mode information for the target subblock is equal to the intra mode information of the at least one intra predicted sample; and storing the intra mode information for the target subblock based on the determination’.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 7, and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li Zhang et al. [WO2020140862A1: already of record] in view of Yoshitaka Kidani [US 20240187625 A1: already of record], and further in view of Lien-Fei Chen et al. [US 20230073917 A1].
Regarding claim 1, Zhang teaches:
1. A method for video processing (i.e. a method of video processing is disclosed- ¶005), comprising:
obtaining, during a conversion between a target video block of a video and a bitstream of the video, prediction information of a target subblock in the target video (i.e. prediction partition- ¶005) block based on types of predicted samples in the target subblock, the target video block being coded by a geometric partitioning tool (i.e. making a determination that a conversion between a video block of a video region of a video and a coded representation of the video uses a geometry partitioning mode in which the video block is partitioned into multiple prediction partitions including at least a first prediction partition- ¶005); and
performing the conversion based on the prediction information of the target subblock (i.e. performing the conversion based on the motion candidates for the multiple prediction partitions- ¶005), wherein the prediction information of the target subblock comprises intra mode information (i.e. The methods may include the first prediction portion is a triangular shape, and the first prediction portion is combined with intra-prediction- ¶00411).
However, Zhang does not teach explicitly:
obtaining the prediction information based on the types of the predicted samples comprises: in response to the target subblock comprising an intra-coded region and a further region, the intra-coded region comprising at least one intra-coded sample, obtaining intra mode information of the at least one intra-coded sample in the intra-coded region as the intra mode information of the target subblock, wherein in response to the further region comprising an inter-coded region and the inter-coded region comprising at least one inter predicted samples, the method further comprises: determining, based on coded information for the target video block, whether the intra mode information for the target subblock is equal to the intra mode information of the at least one intra predicted sample.
In the same field of endeavor, Kidani teaches:
obtaining the prediction information (i.e. The prediction type including two different intra predictions is treated as the intra prediction- ¶0137) based on the types of the predicted samples (i.e. The target sub-blocks of the three patterns are further partitioned depending on whether the prediction type applied to the partitioned area A and the partitioned area B is inter prediction or intra prediction- ¶0134) comprises: in response to the target subblock comprising an intra-coded region and a further region (i.e. partition line L- fig. 4-7… third pattern includes target sub-blocks belonging to both the partitioned areas A/B- ¶0133), the intra-coded region comprising at least one intra-coded sample, obtaining intra mode information of the at least one intra-coded sample in the intra-coded region as the intra mode information of the target subblock (i.e. FIG. 5 illustrates an example of a method of applying an intra prediction mode to a GPM according to the present embodiment- ¶0022… FIG. 6 illustrates an example of a method of applying an intra prediction mode to the GPM according to the present embodiment- ¶0023… Furthermore, in the GPM illustrated in FIGS. 5 and 6, a method of specifying the application possibility of the GPM to which the intra prediction mode is additionally applied in the block to be decoded and the prediction mode type in each of the partitioned areas A/B when the GPM is applied is defined- ¶0094-0103),
wherein in response to the further region comprising an inter-coded region and the inter-coded region comprising at least one inter predicted samples, the method further comprises: determining, based on coded information for the target video block (i.e. Specifically, FIG. 5 illustrates a configuration example of the GPM in a case where the intra prediction (modeX) and the inter prediction are applied to each partitioned area A/B- ¶0100), whether the intra mode information for the target subblock is equal to the intra mode information of the at least one intra predicted sample (i.e. in the GPM of the case illustrated in FIG. 5, either the normal merge mode or the intra prediction mode can be applied to each partitioned area A/B, and the type of the intra prediction mode is limited according to the partition shape (partition line) of the target block- ¶0101);
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Zhang with the teachings of Kidani to improve prediction accuracy by defining a method of applying the OBMC to the GPM (Kidani- ¶0010).
However, Zhang and Kidani do not teach explicitly:
storing the intra mode information for the target subblock based on the determination.
In the same field of endeavor, Lien-Fei teaches:
storing the intra mode information for the target subblock based on the determination (i.e. when a geometric partition of a current block based on GPM is generated with intra prediction, an encoded intra mode (or an intra mode that is applied to predict the geometric partition) can be stored in corresponding N×N units (e.g., 4×4 units) of the geometric partition. Accordingly, for the N×N units positioned in the geometric partition that is generated with the intra prediction, the encoded intra mode can be stored for each of the N×N units positioned in the geometric partition. The encoded intra mode can be DC mode, PLANAR mode, or an angular mode. In yet another example, for N×N units along (or across) a geometric boundary of the current block based on GPM, an encoded intra mode used for a geometric partition with the intra prediction in the current block can be stored for the N×N units along the geometric boundary. Thus, for a N×N unit across the geometric boundary of the current block in which a first partition is IBC coded and a second partition is intra coded, an intra mode that is applied to predict the second partition can be stored- ¶0154).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Zhang and Kidani with the teachings of Lien-Fei to improve the coding efficiency, since intra modes within a CU with inter prediction (e.g., MODE_INTER prediction mode) can be stored and propagated as neighboring intra information for MPM derivation of a neighboring block of the CU (Lien-Fei- ¶0151).
Regarding claim 2, Zhang, Kidani, and Lien-Fei teach all the limitations of claim 1, and Zhang further teaches:
further comprising:
if the target subblock comprises an intra predicted sample and an inter predicted sample (i.e. The two partitions split in TPM can be coded with different modes. a. In one example, one is intra-coded and the other is inter-coded. b. In another example, one is merge-coded, the other is AMVP coded.- page 54, lines 3-5), using intra coded information of the target subblock in at least one of: a coding of a subsequent video block (i.e. In one example, if it is disabled for one color component, the associated prediction block may be derived based on motion information or prediction mode of one partition, for example, the first partition- page 55, line 4-6), or an in-loop filtering process,
wherein obtaining the prediction information of the target subblock based on the types of the predicted samples comprises: if the target subblock comprises an intra predicted sample in an intra-predicted region and an inter predicted sample in the further region, obtaining motion information of the target subblock (i.e. The two partitions split in TPM can be coded with different modes. a. In one example, one is intra-coded and the other is inter-coded. b. In another example, one is merge-coded, the other is AMVP coded.- page 54, lines 3-5… In one example, if it is disabled for one color component, the associated prediction block may be derived based on motion information or prediction mode of one partition, for example, the first partition- page 55, line 4-6),
wherein the motion information is considered as unavailable (i.e. then the neighboring motion vector precision information is considered to be unavailable.- page 9, line 31), or
wherein the motion information comprises a zero vector with a corresponding reference index (i.e. If numCurrMergeCand is less than 5, zero motion vector candidates are added.- ¶0186), the corresponding reference index indicating no reference picture being for the target subblock (i.e. The first available motion vector as well as its associated reference index are set to be the temporal vector and the index to the motion source picture- page 19 line 31, page 20 line 1), or
wherein obtaining the motion information comprises: obtaining the motion information of the inter predicted sample of the target subblock (i.e. if( mvd_l1_zero_flag && inter_pred_idc[ x0 ][ y0 ] == PRED_BI )- page 26).
Regarding claim 3, Zhang, Kidani, and Lien-Fei teach all the limitations of claim 2, and Zhang further teaches:
further comprising:
applying an adaptive or selective motion information storage to the target subblock (i.e. FIG. 16 shows an example of motion vector storage- ¶0059);
determining, based on coded information for the target video block, whether the motion information for the target subblock is perceived as unavailable (i.e. if( mvd_l1_zero_flag && inter_pred_idc[ x0 ][ y0 ] == PRED_BI )- page 26) or equal to motion information of the inter predicted sample (i.e. else { /*MODE_INTER*/ }- page 25); and
storing the motion information for the target subblock based on the determination, wherein the coded information for the target video block comprises at least one of splitting information, a weight index, a location of a GPM block or GPM subblock, or a dimension of the GPM block or GPM subblock, wherein the splitting information comprises at least one of: a GPM partition mode, a GPM partition angle, or a GPM partition direction, wherein performing the conversion comprises: using the motion information for a succeeding process in the conversion, or wherein the succeeding process comprises a deblocking process (i.e. Figure 16 shows an example of motion vector storage. The motion vectors (Mv1 and Mv2 in Figure 16) of the triangular prediction units are stored in 4x4 grids. For each 4x4 grid, either uni-prediction or bi-prediction motion vector is stored depending on the position of the 4x4 grid in the CU. As shown in Figure 16, uni-prediction motion vector, either Mv1 or Mv2, is stored for the 4x4 grid located in the non-weighted area (that is, not located at the diagonal edge). On the other hand, a bi-prediction motion vector is stored for the 4x4 grid located in the weighted area. The bi-prediction motion vector is derived from Mv1 and Mv2 according to the following rules- ¶00193).
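For illustration only, the motion vector storage rule quoted from Zhang (Figure 16, ¶00193) may be paraphrased as the following sketch (Python). The function and parameter names are invented, and the choice of which uni-prediction vector to keep is simplified here; Zhang selects Mv1 or Mv2 based on the position of the 4x4 grid within the CU:

```python
def stored_motion(mv1, mv2, in_weighted_area):
    """Select the motion information stored for one 4x4 grid of a
    triangular-partitioned CU, paraphrasing Zhang, Figure 16."""
    if in_weighted_area:
        # Grids on the diagonal (weighted) edge store a bi-prediction
        # motion vector derived from both Mv1 and Mv2.
        return ("bi", mv1, mv2)
    # Grids in the non-weighted area store a single uni-prediction
    # vector (simplified here to Mv1; Zhang picks Mv1 or Mv2 by grid
    # position).
    return ("uni", mv1)
```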
Regarding claim 4, Zhang, Kidani, and Lien-Fei teach all the limitations of claim 2, and Zhang further teaches:
wherein the obtained motion information is used as temporal motion information for coding or predicting a further block, the further block being within succeeding coded pictures of the video in a coding order, or wherein the obtained motion information is used as spatial motion information for coding or predicting a further block, the further block being within a current picture of the video, or wherein performing the conversion comprises: using the motion information for a loop-filtering in the conversion, wherein the loop-filtering comprises a deblocking filtering (i.e. In the JEM with QTBT, each CU can have at most one set of motion parameters for each prediction direction. Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU. Alternative temporal motion vector prediction (ATMVP) method, which is also referred to as sub-block temporal motion vector prediction (SbTMVP), allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the collocated reference picture. In spatial-temporal motion vector prediction (STMVP) method motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatial neighbouring motion vector- ¶00147).
Regarding claim 5, Zhang, Kidani, and Lien-Fei teach all the limitations of claim 1, and Zhang further teaches:
further comprising:
generating an inter predicted sample of an inter-coded region of the target video block based on a predetermined rule, wherein the predetermined rule comprises at least one of:
a rule indicating to uni-directionally predict the inter-coded region, or a rule indicating to bi-directionally predict the inter-coded region, wherein generating the inter predicted sample comprises:
if at least one subblock of the target video block comprises at least one of an inter predicted sample or an intra predicted sample, the at least one subblock comprising the target subblock,
generating the inter predicted sample based on the predetermined rule (i.e. The uni-prediction candidate list consists of five uni-prediction motion vector candidates. It is derived from seven neighboring blocks including five spatial neighboring blocks (1 to 5) and two temporal co-located blocks (6 to 7), as shown in Figure 14. Figure 14 shows an example of a position of neighboring blocks. The motion vectors of the seven neighboring blocks are collected and put into the uni-prediction candidate list according in the order of uni-prediction motion vectors, L0 motion vector of bi-prediction motion vectors, L1 motion vector of bi-prediction motion vectors, and averaged motion vector of the L0 and L1 motion vectors of bi-prediction motion vectors. If the number of candidates is less than five, zero motion vector is added to the list- ¶00172).
Regarding claim 7, Zhang, Kidani, and Lien-Fei teach all the limitations of claim 1.
However, Zhang does not teach explicitly:
wherein whether the intra mode information for the target subblock is perceived as unavailable or equal to the intra mode information of the at least one intra predicted sample is determined based on the coded information for the target video block, wherein the coded information for the target video block comprises at least one of: splitting information, a weight index, a location of a GPM block or GPM subblock, or a dimension of the GPM block or GPM subblock, wherein the splitting information comprises at least one of: a GPM partition mode, a GPM partition angle, or a GPM partition direction.
In the same field of endeavor, Kidani teaches:
wherein whether the intra mode information for the target subblock is perceived as unavailable or equal to the intra mode information of the at least one intra predicted sample is determined based on the coded information for the target video block (i.e. Specifically, FIG. 5 illustrates a configuration example of the GPM in a case where the intra prediction (modeX) and the inter prediction are applied to each partitioned area A/B- ¶0100… in the GPM of the case illustrated in FIG. 5, either the normal merge mode or the intra prediction mode can be applied to each partitioned area A/B, and the type of the intra prediction mode is limited according to the partition shape (partition line) of the target block- ¶0101), wherein the coded information for the target video block comprises at least one of: splitting information, a weight index, a location of a GPM block or GPM subblock, or a dimension of the GPM block or GPM subblock, wherein the splitting information comprises at least one of: a GPM partition mode, a GPM partition angle, or a GPM partition direction (i.e. The GPM diagonally divides a rectangular block into two and performs motion compensation on each of the two blocks. Specifically, in the GPM, each of the two partitioned areas are motion-compensated by a motion vector in a merge mode, and are blended by weighted averaging. As the oblique partitioning pattern, sixty-four patterns are prepared according to the angle and the displacement- ¶0004… FIG. 8 is a diagram illustrating an example of angleIdx that defines an angle of a partition line of the GPM.- ¶0025).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Zhang with the teachings of Kidani to improve prediction accuracy by defining a method of applying the OBMC to the GPM (Kidani- ¶0010).
Regarding claim 9, Zhang, Kidani, and Lien-Fei teach all the limitations of claim 1, and Zhang further teaches:
wherein obtaining the prediction information of the target subblock based on the types of the predicted samples comprises: if the predicted samples comprise at least two inter predicted samples (i.e. TPM is disabled when block width*height is smaller than 64- ¶00363), obtaining motion information for the target subblock, wherein obtaining motion information comprises: obtaining motion information in an inter-coded region, an inter predicted sample being coded in the inter-coded region, wherein the inter-coded region is one of two inter-coded regions for the target subblock, wherein the motion information comprises at least one of: a uni-directional prediction (i.e. The uni-prediction candidate list consists of five uni-prediction motion vector candidates. It is derived from seven neighboring blocks including five spatial neighboring blocks (1 to 5) and two temporal co-located blocks (6 to 7), as shown in Figure 14. Figure 14 shows an example of a position of neighboring blocks. The motion vectors of the seven neighboring blocks are collected and put into the uni-prediction candidate list according in the order of uni-prediction motion vectors, L0 motion vector of bi-prediction motion vectors, L1 motion vector of bi-prediction motion vectors, and averaged motion vector of the L0 and L1 motion vectors of bi-prediction motion vectors.
If the number of candidates is less than five, zero motion vector is added to the list.- ¶00172), or a bi-directional prediction, wherein in case at least one inter-coded region of the target subblock being bi-directional predicted, the motion information comprises a bi-directional prediction, or wherein obtaining the motion information comprises: obtaining two types of motion information for the target subblock, each of the two types of motion information associated with a respective inter-coded region of the target subblock, wherein a third type of motion information is absent from the motion information for the target subblock, or wherein the third type of motion information comprises combining or constructing motion information from first motion information in a first inter-coded region and second motion information in a second inter-coded region.
Regarding claim 10, Zhang, Kidani, and Lien-Fei teach all the limitations of claim 9, and Zhang further teaches:
further comprising: determining, based on coded information for the target subblock, whether to store first motion information of a first inter-coded region of the target subblock or second motion information of a second inter-coded region of the target subblock; and storing respective motion information based on the determination, wherein combined motion information of the first and second motion information is absent from the stored motion information, wherein the coded information for the target video block comprises at least one of: splitting information, a weight index, a location of a GPM block or GPM subblock, or a dimension of the GPM block or GPM subblock, wherein the splitting information comprises at least one of: a GPM partition mode, a GPM partition angle, or a GPM partition direction (i.e. Figure 16 shows an example of motion vector storage. The motion vectors (Mv1 and Mv2 in Figure 16) of the triangular prediction units are stored in 4x4 grids. For each 4x4 grid, either uni-prediction or bi-prediction motion vector is stored depending on the position of the 4x4 grid in the CU. As shown in Figure 16, uni-prediction motion vector, either Mv1 or Mv2, is stored for the 4x4 grid located in the non-weighted area (that is, not located at the diagonal edge). On the other hand, a bi-prediction motion vector is stored for the 4x4 grid located in the weighted area. The bi-prediction motion vector is derived from Mv1 and Mv2 according to the following rules- ¶00193).
Regarding claim 11, Zhang, Kidani, and Lien-Fei teach all the limitations of claim 1.
However, Zhang does not teach explicitly:
wherein the further region comprises a further intra-coded region, the further intra-coded region comprises at least one further intra predicted sample.
In the same field of endeavor, Kidani teaches:
wherein the further region comprises a further intra-coded region, the further intra-coded region comprises at least one further intra predicted sample (i.e. On the other hand, the target sub-block of the third pattern is partitioned into a total of three cases of two different inter prediction cases, an inter prediction case and an intra prediction case, and two different intra prediction cases- ¶0136).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Zhang with the teachings of Kidani to improve prediction accuracy by defining a method of applying the OBMC to the GPM (Kidani- ¶0010).
Regarding claim 12, Zhang, Kidani, and Lien-Fei teach all the limitations of claim 11, and Zhang further teaches:
further comprising: determining, based on coded information for the target subblock, whether to store first intra mode information of the intra-coded region of the target subblock or second intra mode information of the further intra-coded region of the target subblock; and storing respective intra mode information based on the determination, wherein combined intra mode information of the first and second intra mode information is absent from the stored intra mode information, wherein the coded information for the target video block comprises at least one of: splitting information, a weight index, a location of a GPM block or GPM subblock, or a dimension of the GPM block or GPM subblock, wherein the splitting information comprises at least one of: a GPM partition mode, a GPM partition angle, or a GPM partition direction (i.e. Figure 16 shows an example of motion vector storage. The motion vectors (Mv1 and Mv2 in Figure 16) of the triangular prediction units are stored in 4x4 grids. For each 4x4 grid, either uni-prediction or bi-prediction motion vector is stored depending on the position of the 4x4 grid in the CU. As shown in Figure 16, uni-prediction motion vector, either Mv1 or Mv2, is stored for the 4x4 grid located in the non-weighted area (that is, not located at the diagonal edge). On the other hand, a bi-prediction motion vector is stored for the 4x4 grid located in the weighted area. The bi-prediction motion vector is derived from Mv1 and Mv2 according to the following rules- ¶00193).
Regarding claim 13, Zhang, Kidani and Lien-Fei teach all the limitations of claim 1, and Zhang further teaches:
wherein obtaining the intra mode information comprises: obtaining at least one of the following for the target subblock: a constructed intra mode, a converted intra mode, or a mapped intra mode (i.e. MHintra (sometimes also called combined inter-intra mode, or CIIP)- ¶0035… an inter-intra (MHintra) mode, a subblock merge mode or a merge with motion vector differencing (MMVD) mode; wherein, in the inter-intra coding mode, a prediction block of the video block is derived from an intra prediction signal and an inter prediction signal; wherein, in the sub-block merge mode, the conversion uses derived motion information for each sub-block within the block; wherein, in the MMVD mode, a combined merge and motion vector differencing (MVD) coding mode is used; and wherein the merge mode enables inheriting motion information from a merge candidate in a merge candidate list without MVD for whole of the video block- ¶0672), or wherein obtaining the intra mode information comprises: obtaining intra mode information of more than one intra-coded region of the target subblock, wherein the more than one intra-coded region comprises the intra-coded region and the further intra-coded region (i.e. 17. The two partitions split in TPM can be coded with different modes. a. In one example, one is intra-coded and the other is inter-coded. b. 
In another example, one is merge-coded, the other is AMVP coded.… MHintra (sometimes also called combined inter-intra mode, or CIIP)- ¶0035… an inter-intra (MHintra) mode, a subblock merge mode or a merge with motion vector differencing (MMVD) mode; wherein, in the inter-intra coding mode, a prediction block of the video block is derived from an intra prediction signal and an inter prediction signal; wherein, in the sub-block merge mode, the conversion uses derived motion information for each sub-block within the block; wherein, in the MMVD mode, a combined merge and motion vector differencing (MVD) coding mode is used; and wherein the merge mode enables inheriting motion information from a merge candidate in a merge candidate list without MVD for whole of the video block- ¶0672).
Regarding claim 14, Zhang, Kidani and Lien-Fei teach all the limitations of claim 1, and Zhang further teaches:
wherein the target video block comprises one of: a geometric partition mode (GPM) coded block without motion refinement (i.e. The disclosed techniques may be used by video or image decoder or encoder embodiments in which geometry partitions may be used for video coding or decoding- ¶004), a geometric partition mode (GPM) coded block with motion refinement (i.e. The disclosed techniques may be used by video or image decoder or encoder embodiments in which geometry partitions may be used for video coding or decoding- ¶004), a GPM block with motion vector difference (GPM MMVD), or a GPM block with template matching based motion refinement (GPM TM).
Regarding claim 15, Zhang, Kidani and Lien-Fei teach all the limitations of claim 1, and Zhang further teaches:
wherein the geometric partitioning tool comprises at least one of: a geometric merge mode (GEO), a geometric partition mode (GPM) (i.e. With reference to the above-listed solution sets, in some embodiments, the geometry partition mode (also called geometric partition mode in the present document)- ¶00731), a wedge prediction mode, a triangular prediction mode (i.e. triangular partition mode TPM- ¶0036), a GPM with motion vector difference (GPM MMVD) (i.e. merge with motion vector difference (MMVD) followed by sub-block merge list followed by triangular partition mode TPM- ¶0036), a GPM block with template matching based motion refinement (GPM TM), a GPM with inter and intra, or a variant coding tool based on GPM.
Regarding claim 16, Zhang, Kidani and Lien-Fei teach all the limitations of claim 1, and Zhang further teaches:
wherein the conversion includes encoding the target video block into the bitstream (i.e. The disclosed techniques may be used by video or image decoder or encoder embodiments in which geometry partitions may be used for video coding or decoding- ¶004… The method includes determining, for a conversion between a video block of a video region of a video and a bitstream representation of the video, a relationship between (1) a splitting pattern used to split the video block into prediction partitions such that at least one prediction partition is a nonrectangular and non-square partition, and (2) indexes to merge candidates of the partitions used for the conversion, and a format of the bitstream representation permits changing the relationship at the video region level; and performing the conversion based on the determining- ¶0011).
Regarding claim 17, Zhang, Kidani and Lien-Fei teach all the limitations of claim 1, and Zhang further teaches:
wherein the conversion includes decoding the target video block from the bitstream (i.e. The disclosed techniques may be used by video or image decoder or encoder embodiments in which geometry partitions may be used for video coding or decoding- ¶004… The method includes determining, for a conversion between a video block of a video region of a video and a bitstream representation of the video, a relationship between (1) a splitting pattern used to split the video block into prediction partitions such that at least one prediction partition is a nonrectangular and non-square partition, and (2) indexes to merge candidates of the partitions used for the conversion, and a format of the bitstream representation permits changing the relationship at the video region level; and performing the conversion based on the determining- ¶0011).
Regarding claim 18, apparatus claim 18 is drawn to the apparatus using/performing the same method as claimed in claim 1. Therefore, apparatus claim 18 corresponds to method claim 1, and is rejected for the same reasons of obviousness as used above.
Regarding claim 19, computer-readable medium storing instructions claim 19 corresponds to the same method as claimed in claim 1, and therefore is also rejected for the same reasons of obviousness as used above.
Regarding claim 20, Zhang teaches:
20. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus (i.e. a method of video processing is disclosed- ¶005), wherein the method comprises:
obtaining prediction information of a target subblock in a target video block of the video based on types of predicted samples in the target subblock, the target video block being coded by a geometric partitioning tool (i.e. making a determination that a conversion between a video block of a video region of a video and a coded representation of the video uses a geometry partitioning mode in which the video block is partitioned into multiple prediction partitions including at least a first prediction partition;- ¶005); and
generating the bitstream based on the prediction information of the target subblock (i.e. performing the conversion based on the motion candidates for the multiple prediction partitions- ¶005… method includes determining, for a conversion between a video block of a video region of a video and a bitstream representation of the video, a relationship between (1) a splitting pattern used to split the video block into prediction partitions such that at least one prediction partition is a nonrectangular and non-square partition, and (2) indexes to merge candidates of the partitions used for the conversion, and a format of the bitstream representation permits changing the relationship at the video region level; and performing the conversion based on the determining- ¶0011), wherein the prediction information of the target subblock comprises intra mode information (i.e. The methods may include the first prediction portion is a triangular shape, and the first prediction portion is combined with intra-prediction- ¶00411).
However, Zhang does not teach explicitly:
obtaining the prediction information based on the types of the predicted samples comprises: in response to the target subblock comprising an intra-coded region and a further region, the intra-coded region comprising at least one intra-coded sample, obtaining intra mode information of the at least one intra-coded sample in the intra-coded region as the intra mode information of the target subblock, wherein in response to the further region comprising an inter-coded region and the inter- coded region comprising at least one inter predicted samples, the method further comprises: determining, based on coded information for the target video block, whether the intra mode information for the target subblock is equal to the intra mode information of the at least one intra predicted sample.
In the same field of endeavor, Kidani teaches:
obtaining the prediction information (i.e. The prediction type including two different intra predictions is treated as the intra prediction- ¶0137) based on the types of the predicted samples (i.e. The target sub-blocks of the three patterns are further partitioned depending on whether the prediction type applied to the partitioned area A and the partitioned area B is inter prediction or intra prediction- ¶0134) comprises: in response to the target subblock comprising an intra-coded region and a further region (i.e. partition line L- fig. 4-7… third pattern includes target sub-blocks belonging to both the partitioned areas A/B- ¶0133), the intra-coded region comprising at least one intra-coded sample, obtaining intra mode information of the at least one intra-coded sample in the intra-coded region as the intra mode information of the target subblock (i.e. FIG. 5 illustrates an example of a method of applying an intra prediction mode to a GPM according to the present embodiment- ¶0022… FIG. 6 illustrates an example of a method of applying an intra prediction mode to the GPM according to the present embodiment- ¶0023… Furthermore, in the GPM illustrated in FIGS. 5 and 6, a method of specifying the application possibility of the GPM to which the intra prediction mode is additionally applied in the block to be decoded and the prediction mode type in each of the partitioned areas A/B when the GPM is applied is defined- ¶0094-0103),
wherein in response to the further region comprising an inter-coded region and the inter- coded region comprising at least one inter predicted samples, the method further comprises: determining, based on coded information for the target video block (i.e. Specifically, FIG. 5 illustrates a configuration example of the GPM in a case where the intra prediction (modeX) and the inter prediction are applied to each partitioned area A/B- ¶0100), whether the intra mode information for the target subblock is equal to the intra mode information of the at least one intra predicted sample (i.e. in the GPM of the case illustrated in FIG. 5, either the normal merge mode or the intra prediction mode can be applied to each partitioned area A/B, and the type of the intra prediction mode is limited according to the partition shape (partition line) of the target block- ¶0101);
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Zhang with the teachings of Kidani to improve prediction accuracy by defining a method of applying the OBMC to the GPM (Kidani- ¶0010).
However, Zhang and Kidani do not teach explicitly:
storing the intra mode information for the target subblock based on the determination.
In the same field of endeavor, Lien-Fei teaches:
storing the intra mode information for the target subblock based on the determination (i.e. when a geometric partition of a current block based on GPM is generated with intra prediction, an encoded intra mode (or an intra mode that is applied to predict the geometric partition) can be stored in corresponding N×N units (e.g., 4×4 units) of the geometric partition. Accordingly, for the N×N units positioned in the geometric partition that is generated with the intra prediction, the encoded intra mode can be stored for each of the N×N units positioned in the geometric partition. The encoded intra mode can be DC mode, PLANAR mode, or an angular mode. In yet another example, for N×N units along (or across) a geometric boundary of the current block based on GPM, an encoded intra mode used for a geometric partition with the intra prediction in the current block can be stored for the N×N units along the geometric boundary. Thus, for a N×N unit across the geometric boundary of the current block in which a first partition is IBC coded and a second partition is intra coded, an intra mode that is applied to predict the second partition can be stored- ¶0154).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Zhang and Kidani with the teachings of Lien-Fei to improve the coding efficiency, since intra modes within a CU with inter prediction (e.g., MODE INTER prediction mode) can be stored and propagated as neighboring intra information for MPM derivation of a neighboring block of the CU (Lien-Fei- ¶0151).
Allowable Subject Matter
Claim 8 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLIFFORD HILAIRE whose telephone number is (571)272-8397. The examiner can normally be reached 5:30-14:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH V PERUNGAVOOR can be reached at (571)272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CLIFFORD HILAIRE
Primary Examiner
Art Unit 2488
/CLIFFORD HILAIRE/Primary Examiner, Art Unit 2488