DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
2. This Office Action is in response to the application filed on 10/08/2024. Claims 1-46 are cancelled. Accordingly, claims 47-66 have been examined.
Priority
3. Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
IDS and Request for Information Disclosure
4. Examiner notes there have been no information disclosure statements (IDS) filed with this application. Notably, the Background section of Applicant's Specification does not describe any specific known technologies that are pertinent to the application. The Background merely provides general descriptions of components such as sensors, microphones, cameras, and accelerometers, without identifying any particular prior art or references. According to 37 CFR 1.98(b), a proper information disclosure statement requires a list of all patents, publications, or any other information submitted for consideration by the Office. As stated in MPEP § 609.04(a), "the list may not be incorporated into the specification but must be submitted in a separate paper."
Claim Rejections - 35 USC § 103
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all
obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention
is not identically disclosed as set forth in section 102, if the differences between the claimed
invention and the prior art are such that the claimed invention as a whole would have been obvious
before the effective filing date of the claimed invention to a person having ordinary skill in the art
to which the claimed invention pertains. Patentability shall not be negated by the manner in which
the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35
U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
6. Claims 47-49, 52, 55-56, 59, 62-63, and 66 are rejected under 35 U.S.C. 103 as being unpatentable over Gisquet et al. (US 20210144385 A1) hereinafter “Gisquet” in view of Furht et al. (US-20200396476-A1) hereinafter “Furht”.
Regarding Claim 47 Gisquet-Furht
Gisquet discloses 47. (New) A video decoding device, (Gisquet, Fig. 4, 410 “MV decoding”; [0126] “FIG. 4 illustrates a block diagram of a decoder according to an embodiment…”) comprising: a processor (Gisquet, Fig. 2, “CPU 1111”; [0100] “a central processing unit 1111, such as a microprocessor, denoted CPU;”) configured to:
obtain a plurality of motion information candidates associated with a block; (Gisquet, Fig. 6, step S608, second set of N2 motion vector candidates; [0180] “A second set of motion vector predictor candidates L1′ is obtained in step S608.” See also [0181])
determine motion information associated with the block; (Gisquet, Fig. 6, step 612; [0188] “In an alternative embodiment, the importance value can be computed as a function of the distance to a representative vector of the set of vectors considered, such as the average value of the vectors of the set or the median of the vectors of the set. Then, the importance may be computed as the inverse of the distance of a given vector of the set Vn to the representative vector: the closer a vector Vn is to the representative vector of the set, the higher the importance of Vn.”)
determine an order associated with the plurality of motion information candidates based on comparing the respective motion information candidates with the determined motion information, (Gisquet, Fig. 6, step 614; [0189] “…the N2 remaining motion vector predictor candidates are ordered in step S614 according to an order of decreasing importance value. If several motion vector predictors have the same importance value, they can be ordered according to the increasing order of their indexes.”) wherein a higher rank is associated with a higher consistency (Gisquet, [0192] “…the virtual motion vector predictors are computed from the remaining motion vector predictors, ordered according to their importance…”); and
decode the block based on the determined order associated with the plurality of motion information candidates. (Gisquet, [0129] “The module 410 applies the motion vector decoding for each current block encoded by motion prediction, comprising determining the number N.sub.max of motion vector predictors used and retrieving the motion vector predictor index…”)
Gisquet does not explicitly disclose
, . . . associated with a motion model;
However, in the same field of endeavor Furht discloses more explicitly the following:
. . . (Furht, [0014] “FIG. 4 illustrates three example motion models that can be utilized for global motion ….”);
Therefore, it would have been obvious to a person having ordinary skill in the art before
the effective filing date of the application to modify the teachings of Gisquet in view of Furht such that motion information is “associated with a motion model.” Furht teaches the use of defined motion models for processing motion information, including global motion models. Accordingly, a person of ordinary skill in the art would have been motivated to associate motion information candidates with a motion model as taught by Furht in order to improve overall coding efficiency. (Furht, [0038])
Note: The motivation that was utilized in the rejection of claim 47 applies equally to claims 48-49, 52, 55-56, and 62-63.
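For clarity of the record as to the ordering mechanism relied upon above, Gisquet's importance-based ordering ([0188]-[0189]) can be sketched as follows. This is an illustrative approximation only, not Gisquet's actual implementation; the function name, the choice of the mean as the representative vector, and the L1 distance measure are assumptions made for illustration.

```python
# Illustrative sketch of Gisquet [0188]-[0189]: importance is computed as the
# inverse of the distance to a representative vector (here, the mean of the
# set), and candidates are ordered by decreasing importance, with ties broken
# by increasing index.

def order_candidates(candidates):
    """candidates: list of (mvx, mvy) motion vector predictors.
    Returns candidate indexes ordered by decreasing importance."""
    n = len(candidates)
    # Representative vector: component-wise average of the set ([0188]).
    rep = (sum(c[0] for c in candidates) / n, sum(c[1] for c in candidates) / n)

    def importance(mv):
        dist = abs(mv[0] - rep[0]) + abs(mv[1] - rep[1])  # L1 distance (assumed)
        return 1.0 / (1.0 + dist)  # closer to representative => higher importance

    # Sort by decreasing importance; equal importance keeps increasing index
    # order because Python's sort is stable ([0189]).
    return sorted(range(n), key=lambda i: -importance(candidates[i]))

# Example: the middle vector is closest to the set average, so it ranks first.
print(order_candidates([(0, 0), (4, 4), (16, 0)]))  # -> [1, 0, 2]
```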
Regarding Claim 48 Gisquet-Furht
Gisquet-Furht discloses 48. (New) The video decoding device of claim 47,
wherein the order associated with the plurality of motion information candidates is associated with a ranking of the plurality of motion information candidates, (Gisquet, [0188] “In an alternative embodiment, the importance value can be computed as a function of the distance to a representative vector of the set of vectors considered, such as the average value of the vectors of the set or the median of the vectors of the set. …”)
wherein the order associated with the plurality of motion information candidates is ordered from a higher rank to a lower rank. (Gisquet, [0189] “…, the N2 remaining motion vector predictor candidates are ordered in step S614 according to an order of decreasing importance value. If several motion vector predictors have the same importance value, they can be ordered according to the increasing order of their indexes….”)
Regarding Claim 49 Gisquet-Furht
Gisquet-Furht discloses 49. (New) The video decoding device of claim 47,
wherein the processor is further configured to:
obtain an indication that indicates whether a template-based coding tool is enabled for the block, wherein the determination of the order associated with the plurality of motion information candidates is performed based on the indication indicating that the template-based coding tool is not enabled for the block and is performed independent from reconstructed samples. (Furht, [0038] “In some approaches to video coding, and still referring to FIG. 2, in order to improve coding efficiency, after merge candidate list is constructed (with a processing order of spatial candidate locations being A1, B1, B0, A0, B2), an order of each merge candidate is adjusted according to a template matching cost. Template matching cost may be measured by a Sum of Absolute Difference (SAD) between the neighboring samples of a current coding unit (CU) and their corresponding reference samples. For instance, and without limitation, merge candidates may be ordered in an increasing order of SAD computed with that merge candidate. A number of merge candidates selected using a template matching cost may be limited. For example, a set of four lowest cost candidates among five originally generated and/or provided candidates may be selected”)
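For clarity of the record, the template-matching cost reordering quoted from Furht [0038] can be sketched as follows. This is an illustrative approximation only, not Furht's actual implementation; the function names and the toy reference-fetch interface are hypothetical.

```python
# Illustrative sketch of Furht [0038]: merge candidates are ordered by
# increasing Sum of Absolute Differences (SAD) between the current CU's
# neighboring (template) samples and the reference samples addressed by each
# candidate, and the lowest-cost subset is kept (e.g., 4 of 5 candidates).

def sad(template, reference):
    """Sum of Absolute Differences between two equal-length sample lists."""
    return sum(abs(t - r) for t, r in zip(template, reference))

def reorder_by_template_cost(candidates, template, fetch_reference, keep=4):
    """candidates: merge candidates; fetch_reference(c) returns the reference
    template samples addressed by candidate c (hypothetical interface).
    Orders candidates by increasing SAD and keeps the `keep` cheapest."""
    ranked = sorted(candidates, key=lambda c: sad(template, fetch_reference(c)))
    return ranked[:keep]

# Toy example: each candidate indexes a stored reference template.
# Candidate "b" matches the template exactly, so it is ranked first.
refs = {"a": [12, 25, 30], "b": [10, 20, 30], "c": [0, 0, 0]}
print(reorder_by_template_cost(["a", "b", "c"], [10, 20, 30],
                               refs.__getitem__, keep=2))  # -> ['b', 'a']
```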
Regarding Claim 52 Gisquet-Furht
Gisquet-Furht discloses 52. (New) The video decoding device of claim 47, wherein determining the order associated with the plurality of motion information candidates further comprises:
determining, for a first motion information candidate of the plurality of motion information candidates, a first consistency value relative to the obtained motion information; (Gisquet, [0010] “… a plurality of possible motion vector predictors. This method, called motion vector competition, consists in determining between several motion vector predictors or candidates which motion vector predictor…” [0188] “In an alternative embodiment, the importance value can be computed as a function of the distance to a representative vector of the set of vectors considered, such as the average value of the vectors of the set or the median of the vectors of the set.”)
determining, for a second motion information candidate of the plurality of motion information candidates, a second consistency value relative to the obtained motion information; (Gisquet, [0010], “… a plurality of possible motion vector predictors. This method, called motion vector competition, consists in determining between several motion vector predictors or candidates which motion vector predictor...” [0188] “In an alternative embodiment, the importance value can be computed as a function of the distance to a representative vector of the set of vectors considered, such as the average value of the vectors of the set or the median of the vectors of the set. …”)
and
based on the first motion information candidate being associated with higher consistency value than the second motion information candidate, ranking the first motion information candidate higher than the second motion information candidate.
(Gisquet, [0188] “…the importance value can be computed as a function of the distance to a representative vector of the set of vectors considered, such as the average value of the vectors of the set or the median of the vectors of the set.” [0189] “…the N2 remaining motion vector predictor candidates are ordered in step S614 according to an order of decreasing importance value.”)
Regarding Claim 55,62 Gisquet-Furht
The independent claims 55 and 62 recite limitations that are substantially similar to those of independent claim 47, except that claims 55 and 62 are directed to an encoder device and method, rather than a decoder device. It is well established in the art that video compression systems comprise complementary components, namely an encoder (compressor) and a decoder (decompressor), which perform reciprocal operations. Specifically, the encoder compresses source data to reduce the bit rate for transmission or storage, while the decoder reconstructs the data from the compressed bitstream by performing a corresponding inverse process. Furthermore, for the encoder device and method, Gisquet expressly discloses in paragraph [0003] that “The invention relates to a method and device for encoding a sequence of digital images and a method and device for decoding a corresponding bitstream.”
Regarding Claim 56,63 Gisquet-Furht
Gisquet-Furht discloses 56. (New) The video encoding device of claim 55, wherein the processor is further configured to:
determine whether to perform motion information-based candidate ordering; (Gisquet, [0153] “…starting from the initial set of motion vector predictors L1, it is possible to add other candidates as motion vector predictors, in a predetermined order, to form a modified set of motion vector predictors L2.” ); and
based on a determination to perform motion information-based candidate ordering for the block, include an indication that indicates to enable motion information-based candidate ordering, (Gisquet, [0192] “…. motion vector predictor candidates are added to the re-ordered set… motion vector predictors are computed from the remaining motion vector predictors, ordered according to their importance. The motion vector predictor of index n of the re-ordered set…”) wherein the indication further indicates to decode the encoded block based on motion information associated with neighboring blocks of the block. (Furht, [0032]”…some blocks may share the same motion vector information,…some approaches to motion compensation may utilize a merge mode, in which neighboring blocks may share a motion vector allowing motion information to be encoded in a bitstream…a second block may inherit motion information from the first block….During decoding,…an index…may be used to indicate which block a current block will inherit motion information from.”)
Regarding Claim 59, 66 Gisquet-Furht
Gisquet-Furht discloses 59. (New) The video encoding device of claim 56,
wherein the indication further indicates to determine a respective consistency value associated with each of the motion information candidates and the neighboring blocks, and wherein the indication further indicates to determine the order of motion information candidates based on the consistency values, wherein a higher consistency value is ranked higher than a lower consistency value. (Furht, [0038] “…after merge candidate list is constructed…an order of each merge candidate is adjusted according to a template matching cost. Template matching cost may be measured by a Sum of Absolute Difference (SAD) between the neighboring samples of a current coding unit (CU) and their corresponding reference samples. … merge candidates may be ordered in an increasing order of SAD computed with that merge candidate.”)
Claim Rejections - 35 USC § 103
7. Claim 50 is rejected under 35 U.S.C. 103 as being unpatentable over Gisquet-Furht in view of Mao et al. (US-20200120344-A1) hereinafter “Mao”.
Regarding Claim 50 Gisquet-Furht-Mao
Gisquet-Furht discloses 50. (New) The video decoding device of claim 47,
wherein the motion model is a global motion model (Furht, [0054] “At step 710, a candidate list may be determined. Candidate list may be based on global motion for a current block.”), and
wherein the processor is further configured to:
. . . , wherein the global motion information is associated with the global motion model, . . . (Furht, [0047] “At step 510, and further referring to FIG. 5, a motion vector candidate list including a motion vector candidate having motion information that characterizes a global motion vector may be constructed for a current block. Global motion vector may be characterized by a header of bitstream, the header including a picture parameter set (PPS) and/or a sequence parameter set (SPS).”)
Gisquet-Furht do not explicitly disclose
obtain global motion information associated with a slice that comprises the block, wherein the motion information associated with the block comprises the global motion information associated with the slice, . . ., wherein the order associated with the plurality of motion information candidates is further determined based on comparing the respective motion information candidates with the global motion information associated with the slice, and wherein the global motion information associated with the slice is obtained via a slice header.
However, in the same field of endeavor Mao discloses more explicitly the following:
obtain global motion information associated with a slice that comprises the block, wherein the motion information associated with the block comprises the global motion information associated with the slice, . . ., wherein the order associated with the plurality of motion information candidates is further determined based on comparing the respective motion information candidates with the global motion information associated with the slice, and wherein the global motion information associated with the slice is obtained via a slice header. (Mao, [0124] “…a process of determining the second motion vector may be divided into two steps. Step 1. MVs in a ‘candidate motion vector set’ are compared to determine a start point of a motion search. The candidate motion vector set includes at least one of the following two types of MVs a zero vector (0, 0) and a global motion vector. Motion compensation is performed in the reference frame of the MV of the current block using the pixel block that is spatially adjacent to the current block and that is within the preset range as a template and using each MV in the candidate motion vector set as a motion vector,…” [0125] “The global motion vector is identified in a bitstream (for example, identified in a slice header of the current frame), and indicates an MV pointing from the current frame to a long-term reference frame. …one or two motion vectors with a highest occurrence frequency in motion vectors in all areas are selected as global motion vectors. A decoding device obtains the global motion vector of the long-term reference frame of the current frame by parsing the bitstream (for example, the slice header)….”)
Therefore, it would have been obvious to a person having ordinary skill in the art before
the effective filing date of the application to modify the teachings of Gisquet-Furht with Mao to create the system of Gisquet-Furht as outlined above in order to obtain global motion information associated with a slice that comprises the block, wherein the motion information associated with the block comprises the global motion information associated with the slice. Further, the order associated with the plurality of motion information candidates is further determined based on comparing the respective motion information candidates with the global motion information associated with the slice, and the global motion information associated with the slice is obtained via a slice header, as suggested by Mao.
The reasoning is that “the MVP list is enriched and motion vector prediction efficiency is improved.” (Mao, [0134])
Claim Rejections - 35 USC § 103
8. Claims 51, 58, and 65 are rejected under 35 U.S.C. 103 as being unpatentable over Gisquet-Furht in view of Seregin et al. (US-20130272413-A1) hereinafter “Seregin”.
Regarding Claim 51 Gisquet-Furht-Seregin
Gisquet-Furht discloses 51. (New) The video decoding device of claim 47,
Gisquet-Furht do not disclose
wherein the motion model is a local motion model, and wherein the processor is further configured to:
derive local motion information associated with the local motion model based on motion information candidates associated with neighboring blocks of the block, wherein the motion information associated with the block comprises the derived local motion information.
However, in the same field of endeavor Seregin discloses more explicitly the following:
wherein the motion model is a local motion model (Seregin, Fig. 6, 212 “Identify Local candidate Blocks”; [0006] “…motion information of a set of a set of local spatial candidate blocks…”), and wherein the processor is further configured to:
derive local motion information associated with the local motion model based on motion information candidates associated with neighboring blocks of the block, wherein the motion information associated with the block comprises the derived local motion information. (Seregin, [0023]” …. a video coder may derive the motion vector and/or other motion information for a current video block from a reference block. The reference blocks from which the motion information may be derived generally include a plurality of pre-defined spatially-neighboring blocks, and one or more co-located or neighboring blocks from another picture. A video coder, e.g., a video encoder or video decoder, may construct a motion information candidate list including spatial and temporal candidates based on the motion information of these reference blocks, which may be referred to as local motion information candidate blocks.” [0070] “AMVP mode is similar to merge mode in that video encoder 20 and video decoder 30 implement a common, pre-defined process to evaluate the motion information local neighboring blocks and one or more temporal neighboring blocks, and construct a motion information candidate list for a video block based on the evaluated motion information. …”)
Therefore, it would have been obvious to a person having ordinary skill in the art before
the effective filing date of the application to modify the teachings of Gisquet-Furht with Seregin to create the system of Gisquet-Furht as outlined above in order to “derive local motion information associated with the local motion model based on motion information candidates associated with neighboring blocks of the block, wherein the motion information associated with the block comprises the derived local motion information.” as suggested by Seregin.
The reasoning is that such a modification “may provide greater video coding fidelity for the video block, by explicitly signaling more motion information for the video block, at the cost of reduced bit stream efficiency relative to merge mode.” (Seregin, [0064])
Note: The motivation that was utilized in the rejection of claim 51, applies equally as well to claims 58 and 65.
Regarding Claim 58,65 Gisquet-Furht-Seregin
Gisquet-Furht discloses 58. (New) The video encoding device of claim 56,
Gisquet-Furht do not explicitly disclose
wherein the motion model is a local motion model, wherein the motion information for the block is local motion information, and wherein the indication further indicates to derive the local motion information based on motion information candidates associated with neighboring blocks, wherein the local motion information is associated with the local motion model.
However, in the same field of endeavor Seregin discloses more explicitly the following:
wherein the motion information for the block is local motion information (Seregin, Fig. 6, 212 “Identify Local candidate Blocks” [0006]“…motion information of a set of a set of local spatial candidate blocks…”) , and wherein the indication further indicates to derive the local motion information based on motion information candidates associated with neighboring blocks, wherein the local motion information is associated with the local motion model. (Seregin, [0023]” …. a video coder may derive the motion vector and/or other motion information for a current video block from a reference block. The reference blocks from which the motion information may be derived generally include a plurality of pre-defined spatially-neighboring blocks, and one or more co-located or neighboring blocks from another picture. A video coder, e.g., a video encoder or video decoder, may construct a motion information candidate list including spatial and temporal candidates based on the motion information of these reference blocks, which may be referred to as local motion information candidate blocks.” [0070] “AMVP mode is similar to merge mode in that video encoder 20 and video decoder 30 implement a common, pre-defined process to evaluate the motion information local neighboring blocks and one or more temporal neighboring blocks, and construct a motion information candidate list for a video block based on the evaluated motion information. …”)
Claim Rejections - 35 USC § 103
9. Claim 53 is rejected under 35 U.S.C. 103 as being unpatentable over Gisquet-Furht in view of Hinz et al. (US-20150195566-A1) hereinafter “Hinz”.
Regarding Claim 53 Gisquet-Furht-Hinz
Gisquet-Furht discloses 53. (New) The video decoding device of claim 47, wherein determining the order associated with the plurality of motion information candidates further comprises:
determining, for a first motion information candidate of the plurality of motion information candidates, a first consistency value relative to the obtained motion information; (Gisquet, [0010] “… a plurality of possible motion vector predictors. This method, called motion vector competition, consists in determining between several motion vector predictors or candidates which motion vector predictor…” [0188] “In an alternative embodiment, the importance value can be computed as a function of the distance to a representative vector of the set of vectors considered, such as the average value of the vectors of the set or the median of the vectors of the set.”)
determining, for a second motion information candidate of the plurality of motion information candidates, a second consistency value relative to the obtained motion information; (Gisquet, [0010] “… a plurality of possible motion vector predictors. This method, called motion vector competition, consists in determining between several motion vector predictors or candidates which motion vector predictor…” [0188] “In an alternative embodiment, the importance value can be computed as a function of the distance to a representative vector of the set of vectors considered, such as the average value of the vectors of the set or the median of the vectors of the set.”) and
Gisquet-Furht do not explicitly disclose
based on the first motion information candidate being associated with higher consistency value than the second motion information candidate, ranking the first motion information candidate lower than the second motion information candidate.
However, in the same field of endeavor Hinz discloses more explicitly the following:
based on the first motion information candidate being associated with higher consistency value than the second motion information candidate, ranking the first motion information candidate lower than the second motion information candidate. (Hinz, [0164] “However, the list 528/512 is ordered 544 depending on base layer motion parameters such as, for example, the motion parameter, represented by the motion vector 523, of the co-located base layer block 108. For example, the rank of the members, i.e. motion parameter candidates, 532 or 514 of list 528/512 is determined based on a deviation of each of same to the potentially scaled version of motion parameter 523. The greater the deviation is, the lower the respective member's rank in the ordered list 528/512….”)
Therefore, it would have been obvious to a person having ordinary skill in the art before
the effective filing date of the application to modify the teachings of Gisquet-Furht with Hinz, to create the system of Gisquet-Furht as outlined above in order to rank motion information candidates based on consistency, such that “based on the first motion information candidate being associated with higher consistency value than the second motion information candidate, ranking the first motion information candidate lower than the second motion information candidate,” as suggested by Hinz.
The reasoning is that such ranking serves to “increase the coding efficiency.” (Hinz, [0005])
Claim Rejections - 35 USC § 103
10. Claims 54 and 60 are rejected under 35 U.S.C. 103 as being unpatentable over Gisquet-Furht in view of Lee et al. (US-20230328228-A1) hereinafter “Lee’228”.
Regarding Claim 54 Gisquet-Furht-Lee’228
Gisquet-Furht discloses 54. (New) The video decoding device of claim 47, wherein determining the order associated with the plurality of motion information candidates further comprises:
Gisquet-Furht do not explicitly disclose
obtaining a difference between a first motion information candidate and a second motion information candidate of the plurality of motion information candidates; and
based on a difference between the first motion information candidate and the second motion information candidate is below a threshold, removing the second motion information candidate.
However, in the same field of endeavor Lee’228 discloses more explicitly the following:
obtaining a difference between a first motion information candidate and a second motion information candidate of the plurality of motion information candidates; (Lee’228, [0247] “…check may be performed only for N motion information candidates … included in a motion information table. In an example, a redundancy check may be performed only for motion information candidates with an index that the number and difference of motion information candidates included in a motion information table… “) and
based on a difference between the first motion information candidate and the second motion information candidate is below a threshold, removing the second motion information candidate. (Lee’228, [0247] “…a redundancy check may be performed only for motion information candidates with an index that the number and difference of motion information candidates included in a motion information table are below the threshold.”)
Therefore, it would have been obvious to a person having ordinary skill in the art before
the effective filing date of the application to modify the teachings of Gisquet-Furht in view of Lee’228 to remove the second motion information candidate when the difference between the first motion information candidate and the second motion information candidate is below a threshold. One of ordinary skill in the art would have been motivated to incorporate Lee’228's redundancy-checking technique into the system of Gisquet-Furht in order to reduce redundant motion information candidates, thereby improving compression efficiency without causing large visual distortion. (Lee’228, [0384])
Note: The motivation that was utilized in the rejection of claim 54, applies equally as well to claim 60.
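For clarity of the record, the threshold-based removal recited in claim 54, read together with Lee’228's redundancy check, can be sketched as follows. This is an illustrative approximation only, not Lee’228's actual implementation; the function name and the L1 difference measure are assumptions made for illustration.

```python
# Illustrative sketch of threshold-based pruning: when the difference between
# two motion information candidates falls below a threshold, the later
# (second) candidate is treated as redundant and removed from the list.

def prune_candidates(candidates, threshold):
    """candidates: list of (mvx, mvy) motion information candidates.
    Removes any candidate whose L1 difference from an already-kept
    candidate is below `threshold`."""
    kept = []
    for cand in candidates:
        redundant = any(
            abs(cand[0] - k[0]) + abs(cand[1] - k[1]) < threshold for k in kept
        )
        if not redundant:
            kept.append(cand)
    return kept

# (4, 5) differs from (4, 4) by 1, which is below the threshold of 2, so it
# is removed as redundant; (10, 0) is sufficiently different and is kept.
print(prune_candidates([(4, 4), (4, 5), (10, 0)], threshold=2))
```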
Regarding Claim 60 Gisquet-Furht-Lee’228
Gisquet-Furht discloses 60. (New) The video encoding device of claim 56,
Gisquet-Furht do not disclose
wherein the indication further indicates to determine a first order of motion information candidates and a second order of motion information candidates, wherein the indication further indicates to determine the second order of motion information candidates based on the first order of motion information candidates and a threshold, and wherein the second order of motion information candidates is a refinement of the first order of motion information candidates.
However, in the same field of endeavor Lee’228 discloses more explicitly the following:
wherein the indication further indicates to determine a first order of motion information candidates and a second order of motion information candidates, wherein the indication further indicates to determine the second order of motion information candidates based on the first order of motion information candidates and a threshold, and wherein the second order of motion information candidates is a refinement of the first order of motion information candidates. (Lee’228, [0247] “A redundancy check with a merge candidate may be performed only for a part of motion information candidates. In an example, a redundancy check may be performed only for N motion information candidates with a large or a small index among motion information candidates included in a motion information table. In an example, a redundancy check may be performed only for motion information candidates with an index that the number and difference of motion information candidates included in a motion information table are below the threshold. When the threshold is 2, a redundancy check may be performed only for 3 motion information candidates with the largest index value among motion information candidates included in a motion information table. A redundancy check may be omitted for motion information candidates except for the above 3 motion information candidates. When a redundancy check is omitted, a motion information candidate may be added to a merge candidate list regardless of whether the same motion information as a merge candidate is exist or not.”)
Claim Rejections - 35 USC § 103
11. Claims 57 and 64 are rejected under 35 U.S.C. 103 as being unpatentable over Gisquet-Furht in view of Okada et al. (US-20070025444-A1), hereinafter “Okada”, and further in view of Chen et al. (US-20180098063-A1), hereinafter “Chen”.
Regarding Claims 57 and 64, Gisquet-Furht-Okada-Chen
Gisquet-Furht discloses 57. (New) The video encoding device of claim 56, wherein the motion model is a global motion model, wherein the motion information for the block comprises global motion information, (Furht, [0054] “At step 710, a candidate list may be determined. Candidate list may be based on global motion for a current block.”) and wherein the processor is further configured to:
Gisquet-Furht do not explicitly disclose
compute global motion information associated with a slice that comprises the block, wherein the motion information for the block comprises the computed global motion; and include an indication of the global motion information in video data, wherein the global motion information is associated with the global motion model, wherein the indication further indicates to derive global motion information based on rescaling temporal motion information candidates associated with a temporal reference frame, and wherein the indication further indicates to rescale based on temporal distance.
However, in the same field of endeavor, Okada discloses more explicitly the following:
compute global motion information associated with a slice that comprises the block, wherein the motion information for the block comprises the computed global motion; (Okada, [0357] “An arrangement may be made in which the header of each coded data, the header of each picture, or the header of each region in the picture includes the information with respect to the global motion vector and the information with respect to the index for identifying the global motion vector. Furthermore, the header of each picture or the header of each region defined in the picture, for which the global motion vector has been calculated, may include the index information which specifies the global motion used for prediction coding of the target region. With such an arrangement, the information with respect to the global motion vector and the index may be stored in the head defined for each stream or picture, or for each smaller coding unit such as the slice, macro block, etc.”) and
Therefore, it would have been obvious to a person having ordinary skill in the art before
the effective filing date of the application to modify the teachings of Gisquet-Furht with those of Okada, creating the system of Gisquet-Furht as outlined above, in order to “compute global motion information associated with a slice that comprises the block, wherein the motion information for the block comprises the computed global motion;” as suggested by Okada.
The reasoning is that the combination serves “to provide a coding technique and a decoding technique for a moving image which offer high coding efficiency and high-precision motion prediction.” (Okada, [0129]).
Gisquet-Furht-Okada do not explicitly disclose
include an indication of the global motion information in video data, wherein the global motion information is associated with the global motion model, wherein the indication further indicates to derive global motion information based on rescaling temporal motion information candidates associated with a temporal reference frame, and wherein the indication further indicates to rescale based on temporal distance.
However, in the same field of endeavor, Chen discloses more explicitly the following:
include an indication of the global motion information in video data, wherein the global motion information is associated with the global motion model, wherein the indication further indicates to derive global motion information based on rescaling temporal motion information candidates associated with a temporal reference frame, and wherein the indication further indicates to rescale based on temporal distance. (Chen, [0096] “A motion vector for a TMVP candidate is derived from the co-located PU of a so-called “co-located picture.” The co-located picture may be indicated in a slice level (e.g., using a collocated_ref_idx syntax element). The motion vector for the co-located PU is called a collocated MV. Similar to temporal direct mode in H.264/AVC, to derive the TMVP candidate motion vector, the co-located MV may be scaled to compensate the temporal distance differences…the video coder generates a TMVP by scaling motion vector based on a difference between a collocated temporal distance and a current temporal distance.”)
Therefore, it would have been obvious to a person having ordinary skill in the art before
the effective filing date of the application to modify the teachings of Gisquet-Furht-Okada with those of Chen, creating the system of Gisquet-Furht-Okada as outlined above, in order to “include an indication of the global motion information in video data, wherein the global motion information is associated with the global motion model, wherein the indication further indicates to derive global motion information based on rescaling temporal motion information candidates associated with a temporal reference frame, and wherein the indication further indicates to rescale based on temporal distance” as suggested by Chen.
The reasoning is that such techniques may “potentially improve coding efficiency.” (Chen, [0122]).
Claim Rejections - 35 USC § 103
12. Claim 61 is rejected under 35 U.S.C. 103 as being unpatentable over Gisquet-Furht in view of Lee et al. (US-20170332099-A1) hereinafter “Lee’099”.
Regarding Claim 61, Gisquet-Furht-Lee’099
Gisquet-Furht discloses 61. (New) The video encoding device of claim 55, wherein the processor is further configured to:
Gisquet-Furht do not explicitly disclose
select a motion predictor for the block from the plurality of motion information candidates;
determine an index for the selected motion predictor based on the order associated with the plurality of motion information candidates; and
include an indication that indicates the index in video data.
However, in the same field of endeavor, Lee’099 discloses more explicitly the following:
select a motion predictor for the block from the plurality of motion information candidates; (Lee’099, [0124] “In either AMVP or merge mode, video encoder 20 and video decoder 30 are configured to construct a motion vector (MV) candidate list for multiple motion vector predictors.”)
determine an index for selected motion predictor based on the order associated with the plurality of motion information candidates; (Lee’099, [0270] “…Video encoder 20 may be configured to signal a merge index for the current block and video decoder 30 may be configured to perform the same procedure to derive merge candidates as does video encoder 20.”) and
include an indication that indicates the index in video data. (Lee’099, [0097] “…The reference index indicates the picture that contains the predictor block. In some descriptions, for simplicity, the term “motion vector” may be used interchangeably with motion information, to indicate both the motion vector and its associated reference index.”)
Therefore, it would have been obvious to a person having ordinary skill in the art before
the effective filing date of the application to modify the teachings of Gisquet-Furht with those of Lee’099, creating the system of Gisquet-Furht as outlined above, in order to “select a motion predictor for the block from the plurality of motion information candidates; determine an index for selected motion predictor based on the order associated with the plurality of motion information candidates; and include an indication that indicates the index in video data.” as suggested by Lee’099.
The reasoning is that such an approach is “to improve the efficiency of merge-based motion vector prediction.” (Lee’099, [0032])
Pertinent Prior Art
13. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Xu et al. (US 20190342572 A1)
Pace (US 20100008424 A1)
Conclusion
14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASTEWAYE GETTU ZEWEDE, whose telephone number is (703)756-1441. The examiner can normally be reached Mon-Fri, 8:30 am to 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached at (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASTEWAYE GETTU ZEWEDE/Examiner, Art Unit 2481 /WILLIAM C VAUGHN JR/Supervisory Patent Examiner, Art Unit 2481