DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Amendment
The Examiner acknowledges the claim amendments filed on 12/05/2025 and enters them for consideration. Claims 15-20 were previously cancelled, and claim 2 is currently cancelled. The amendments are in response to the Non-Final Office Action mailed on 09/05/2025. Claims 1, 3-14, and 21-26 remain pending in the current Application.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 26 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 26 recites “A non-transitory computer readable storage medium storing a video bitstream of a video that is generated by a video encoding method, the video encoding method comprising:..”. Claim 26 is directed to a non-transitory storage medium storing a bitstream of a video, wherein the remaining clauses appear to describe how the bitstream is generated. These elements or steps are not performed by an intended computer, and the bitstream is not a form of programming that causes functions to be performed by an intended computer. This shows that the computer-readable medium merely serves as support for storing the bitstream and provides no functional relationship between the steps/elements that describe the generation of the bitstream and the intended computer system. Therefore, those claim elements are not given patentable weight. Patentable weight is given to data stored on a computer-readable medium when there exists a functional relationship between the data and its associated substrate. See MPEP § 2111.05(III). For example, if a claim is drawn to a computer-readable medium containing programming, a functional relationship exists if the programming “performs some function with respect to the computer with which it is associated.” However, if the claim recites that the computer-readable medium merely serves as storage for information or data that is not meant to be executed, no functional relationship exists and the information or data is not given patentable weight. The Examiner suggests that the claim be amended so that it is directed to a functional relationship. For example, in this particular case, the claim should instead be recited as “A method of storing a bitstream of a video block into a non-transitory computer-readable storage medium, wherein the bitstream is generated by a video encoding method comprising:”, followed by a functional step to store the generated bitstream into the non-transitory computer-readable storage medium.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claim 26 is rejected under AIA 35 U.S.C. 102(a)(1) as being anticipated by Zhang et al. (US PGPub 2016/0277762 A1).
Claim 26’s recitation of “A non-transitory computer readable storage medium storing a video bitstream of a video that is generated by a video encoding method, the video encoding method comprising:..” is a product-by-process claim limitation, where the product is the bitstream and the process is the method steps to generate the bitstream. MPEP § 2113 recites “Product-by-Process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps”. Thus, the scope of the claim is the storage medium storing the bitstream (with the structure implied by the method steps). The structure includes the information and samples manipulated by the steps. “To be given patentable weight, the printed matter and associated product must be in a functional relationship. A functional relationship can be found where the printed matter performs some function with respect to the product to which it is associated”. MPEP § 2111.05(I)(A). When a claimed “computer-readable medium merely serves as a support for information or data, no functional relationship exists.” MPEP § 2111.05(III). The storage medium storing the claimed bitstream in claim 26 merely serves as a support for the storage of the bitstream and provides no functional relationship between the stored bitstream and the storage medium. Therefore, the structure of the bitstream, whose scope is implied by the method steps, is non-functional descriptive material and is given no patentable weight. MPEP § 2111.05(III). Thus, the claim scope is just a storage medium storing data and is anticipated by Zhang et al., which recites a storage medium storing a bitstream ([0151]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 7, 9-12, 14, 21-26 are rejected under 35 U.S.C. 103 as being unpatentable over Chubach et al. (WO 2023/093863 A1) (Disclosed in IDS) in view of Chen et al. (US PGPub 2024/0214553 A1).
Regarding claim 1 (Currently Amended), Chubach et al. teach a method for decoding a current block of a current frame in a coded video bitstream (P25, L10-15; Fig. 8), the method comprising:
receiving, by a device comprising a memory storing instructions and a processor in communication with the memory (Fig. 11 shows the processing unit 1110 in communication with memory 1135), the coded video bitstream (P25, L10-15; Fig. 8; It teaches that the video decoder 800 is an image-decoding or video-decoding circuit that receives a bitstream 895 and decodes the content of the bitstream into pixel data of video frames for display 855);
identifying, by the device from the coded video bitstream, a motion vector corresponding to a reference block associated with the current block of the current frame (P26, L4-8; Fig. 8; it teaches generating the predicted MVs based on reference MVs...retrieving the reference MVs of previous video frames from the MV buffer 865);
obtaining, by the device, a scaling factor based on a first syntax explicitly signaled in the coded video bitstream (P26, L27-31; P27, L21-26; Fig. 9; It teaches that the video decoder 800 retrieves an entry from the history-based table 950 to obtain the scale and offset parameter values);
determining, by the device, a template used to derive an offset value (Fig. 1; P3, L12-16; The parameters of the function can be denoted by a scale and an offset, which forms a linear equation, that is, α*p[x]+β to compensate illumination changes, where p[x] is a reference sample (template) pointed to by the MV at a location x on the reference picture, wherein α and β can be derived based on the current block template and the reference block template) by:
obtaining a second syntax explicitly signaled in the coded video bitstream for indicating the template used to derive the offset value (P3, L12-23; P26, L14-15; Fig. 2A shows the syntax where the position of the template [x0, y0] is obtained to determine the offset value lic_offset_idx[x0][y0]) to comprise at least one of above samples, above-right samples, left samples, bottom-left samples, above region, or left region, and
determining, based on the second syntax, the template used to derive the offset value;
deriving, by the device, the offset value based on the template (Fig. 1; P3, L12-16);
generating, by the device, a predicted block based on the reference block according to a linear equation, the linear equation being associated with the scaling factor and the offset value (P26, L19-23; Fig. 9; It teaches that the quantized LIC scale and offset parameters 925 are used by a LIC linear model 910 to compute a LIC prediction block 960); and
reconstructing, by the device, the current block based on the predicted block (P27, L27-30; It teaches that the decoder decodes (at block 1040) the current block by using the prediction block to reconstruct the current block).
Although Chubach et al. teach the syntax where the position of the template [x0, y0] is obtained to determine the offset value lic_offset_idx[x0][y0], as described in P3, L12-23 and P26, L14-15 and shown in Fig. 2A, they do not explicitly teach the template position to be specifically one of above samples, above-right samples, left samples, bottom-left samples, above region, or left region.
However, Chen et al., in the same field of endeavor (Abstract), teach a video decoding method that shows a LIC (Local Illumination Compensation) model parameter calculation including both the scaling and offset values (Figs. 2, 6, 8 show the offset values being calculated from the templates at positions above, left, etc.) based on syntax elements which indicate whether the video encoding/decoding uses LIC or not ([0009]; [0157]) and also the template position to be selected for model parameter calculations including offset values ([0012]-[0013]; [0157]).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chubach et al.'s invention of local illumination compensation with coded parameters to include Chen et al.'s usage of syntax elements to obtain the template position for calculating offset values, because it reduces the complexity of the spatial LIC derivation (Chen et al.; [0083]; it teaches that complexity reduction is obtained by one improved spatial LIC algorithm for the case when both above and left spatial neighboring blocks are available).
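For illustration only (not part of the claim mapping or the record), the LIC linear model recited in claim 1, α*p[x]+β, can be sketched as below. The function name and the use of floating-point parameters are hypothetical; actual codecs use integer scale/shift arithmetic.

```python
def lic_predict(ref_samples, alpha, beta):
    """Apply the LIC linear model pred[x] = alpha * p[x] + beta
    to each reference sample p[x] of the reference block."""
    return [alpha * p + beta for p in ref_samples]

# Example: scale three reference samples by 1.25 and add an offset of 3.
predicted = lic_predict([100, 120, 140], 1.25, 3)
```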
Regarding claim 3 (Currently Amended), Chubach et al. and Chen et al. teach the method according to claim 1, wherein the determining, based on the second syntax, the template used to derive the offset value comprises:
the second syntax comprises an integer between 0 and 2 (Chubach et al.; P5, L26-39; P20, L18 - P21, L6; it teaches a syntax element called MaxNumLicParams{Y,Cb,Cr}, which is the number of samples in the subset of samples and represents the positive integer N, where N could be any integer value between 0 and 2 inclusive), inclusive;
in response to the second syntax being 0, determining both the above and left regions of the current block as the template used to derive the offset value (Chubach et al.; Fig. 2B shows that for block A, both the above and left regions are used as shown in A’);
in response to the second syntax being 1, determining only the above region of the current block as the template used to derive the offset value (Chubach et al.; Fig. 2B shows that for blocks B, C, D, only the above regions are used as shown in B’, C’, D’); and
in response to the second syntax being 2, determining only the left region of the current block as the template used to derive the offset value (Chubach et al.; Fig. 2B shows that for blocks E, F, G, only the left regions are used as shown in E’, F’, G’).
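For illustration only, the second-syntax-to-template mapping described in the claim 3 limitations above (0: both regions, 1: above only, 2: left only) can be sketched as follows; the function name and tuple representation are hypothetical.

```python
def select_template_regions(second_syntax):
    """Map the second syntax value to the template region(s) used to
    derive the LIC offset value: 0 -> above and left, 1 -> above only,
    2 -> left only."""
    mapping = {0: ("above", "left"), 1: ("above",), 2: ("left",)}
    if second_syntax not in mapping:
        raise ValueError("second syntax must be an integer between 0 and 2, inclusive")
    return mapping[second_syntax]
```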
Regarding claim 4 (Currently Amended), Chubach et al. and Chen et al. teach the method according to claim 1, wherein the determining, based on the second syntax, the template used to derive the offset value comprises:
the second syntax comprises 0 or 1 (Chubach et al.; P5, L26-39; P20, L18 - P21, L6; it teaches a syntax element called MaxNumLicParams{Y,Cb,Cr}, which is the number of samples in the subset of samples and represents the positive integer N, where N could be any integer value e.g. 0 and 1);
in response to the second syntax being 0, determining only the above region of the current block as the template used to derive the offset value (Chubach et al.; Fig. 2B shows that for blocks B, C, D, only the above regions are used as shown in B’, C’, D’); and
in response to the second syntax being 1, determining only the left region of the current block as the template used to derive the offset value (Chubach et al.; Fig. 2B shows that for blocks E, F, G, only the left regions are used as shown in E’, F’, G’).
Regarding claim 5 (Currently Amended), Chubach et al. and Chen et al. teach the method according to claim 3, wherein:
the above region of the current block comprises both the above samples of the current block and the above-right samples of the current block (Chen et al.; Fig. 13 shows both above and above right samples in the above region).
Regarding claim 7 (Original), Chubach et al. and Chen et al. teach the method according to claim 3, wherein:
the left region of the current block comprises both left samples of the current block and bottom-left samples of the current block (Chen et al.; Fig. 13 shows both left and bottom-left samples in the left region).
Regarding claim 9 (Currently Amended), Chubach et al. and Chen et al. teach the method according to claim 1, wherein:
the second syntax is entropy coded according to a context based on at least one of the following: a block shape of the current block, a block aspect ratio of the current block, a block size of the current block, or a location of the current block within a current tile (Chubach et al.; P3, L12-23; P26, L14-15; Fig. 2A shows the syntax where the position of the template [x0, y0] is obtained), a slice, a subpicture, or a picture.
Regarding claim 10 (Currently Amended), Chubach et al. and Chen et al. teach the method according to claim 1, wherein:
the second syntax indicates whether an adjacent reference line or a non-adjacent reference line as the template used to derive the offset value (Chubach et al.; Fig. 2A, 2B show only an adjacent reference line being used to derive the offset value).
Regarding claim 11 (Currently Amended), Chubach et al. and Chen et al. teach the method according to claim 1, wherein the determining the template used to derive the offset value further comprises:
determining a subset of samples in the above and left samples of the current block as the template used to derive the offset value (Chubach et al.; Figs. 1, 2A, 2B).
Regarding claim 12 (Original), Chubach et al. and Chen et al. teach the method according to claim 11, wherein the determining the subset of samples in the above and left samples of the current block as the template used to derive the offset value comprises:
determining only the above samples of the current block as the template used to derive the offset value (Chubach et al.; Fig. 2B shows that for blocks B, C, D, only the above samples are used as shown in B’, C’, D’); or
determining only the left samples of the current block as the template used to derive the offset value (Chubach et al.; Fig. 2B shows that for blocks E, F, G, only the left samples are used as shown in E’, F’, G’).
Regarding claim 14 (Original), Chubach et al. and Chen et al. teach the method according to claim 11, wherein the determining the subset of samples in the above and left samples of the current block as the template used to derive the offset value comprises:
in response to the current block being located at a top boundary of a tile, a slice, a subpicture, or a picture, determining only the left samples of the current block as the template used to derive the offset value (Chubach et al.; Fig. 2B shows the current block comprising subblocks A-G. Assuming the current block is located at the top boundary of a tile, a slice, a subpicture, or a picture, the samples above the current block belong to a different tile, slice, subpicture, or picture and are not available for prediction. In that situation the current block will only use the left samples of the current block); or
in response to the current block being located at a left boundary of a tile, a slice, a subpicture, or a picture, determining only the above samples of the current block as the template used to derive the offset value (Chubach et al.; Fig. 2B shows the current block comprising subblocks A-G. Assuming the current block is located at a left boundary of a tile, a slice, a subpicture, or a picture, the samples to the left of the current block belong to a different tile, slice, subpicture, or picture and are not available for prediction. In that situation the current block will only use the above samples of the current block).
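For illustration only, the sample-availability logic underlying the claim 14 limitations above can be sketched as follows: a template region is dropped when the corresponding neighboring samples lie across a tile, slice, subpicture, or picture boundary. The function name and boolean-flag interface are hypothetical.

```python
def available_template_samples(at_top_boundary, at_left_boundary):
    """Restrict the template to samples available for prediction:
    above-neighboring samples are unavailable when the block is at a
    top boundary, and left-neighboring samples are unavailable when
    the block is at a left boundary."""
    regions = []
    if not at_top_boundary:
        regions.append("above")
    if not at_left_boundary:
        regions.append("left")
    return regions
```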
Regarding claim 21 (Previously Presented), Chubach et al. and Chen et al. teach the method according to claim 11, wherein the determining the subset of samples in the above and left samples of the current block as the template used to derive the offset value comprises:
in response to the current block being located at a right boundary of a tile, a slice, a subpicture, or a picture, excluding top-right samples of the current block as the template used to derive the offset value (Chubach et al.; Fig. 3B; It shows that for the subblocks D, H, L, P on the right boundary of the current block, it excludes top-right samples, which extend to the right side of the top row); or
in response to the current block being located at a bottom boundary of a tile, a slice, a subpicture, or a picture, excluding bottom-left samples of the current block as the template used to derive the offset value (Chubach et al.; Fig. 3B; It shows that for the subblocks M, N, O, P on the bottom boundary of the current block, it excludes bottom-left samples, which extend to the bottom side of the left column).
Regarding claim 22 (Previously Presented), Chubach et al. and Chen et al. teach the method according to claim 11, wherein:
a number of samples in the subset of samples is less than N, wherein N is a positive integer (Chubach et al.; P5, L26-39; P20, L18 - P21, L6; it teaches a syntax element called MaxNumLicParams{Y,Cb,Cr}, which is the number of samples in the subset of samples and represents the positive integer N).
Regarding claim 23 (Previously Presented), Chubach et al. and Chen et al. teach the method according to claim 22, wherein:
N is determined based on a minimum or a maximum value of a block width and a block height of the current block; or
N is determined irrespective to a block size of the current block (Chubach et al.; P5, L26-39; P20, L18 - P21, L6; it teaches a syntax element called MaxNumLicParams{Y,Cb,Cr}, which represents the number of samples in the subset of samples N and it is dependent on the number of LIC parameters of Y, Cb, Cr components, irrespective of the block size).
Regarding claim 24 (Previously Presented), Chubach et al. and Chen et al. teach the method according to claim 11, wherein:
the subset of samples comprises subsampling of the above and left samples of the current block (Chubach et al.; Figs. 1, 2A, 2B).
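For illustration only, the subsampling of claim 24 combined with the claim 22 cap (fewer than N samples) can be sketched as below. The function name, the uniform-stride strategy, and the assumption N ≥ 2 are hypothetical; the claims do not specify a particular subsampling pattern.

```python
def subsample_template(above_samples, left_samples, n):
    """Form a subset of the above and left neighboring samples by
    uniform subsampling so that fewer than n samples are used
    (assumes n >= 2)."""
    samples = list(above_samples) + list(left_samples)
    if len(samples) < n:
        return samples
    step = -(-len(samples) // (n - 1))  # ceiling division
    return samples[::step]
```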
Regarding claim 25 (Currently Amended), Chubach et al. teach an apparatus for encoding a current block of a current frame into a video bitstream (P25, L10-15; Fig. 8), the apparatus comprising:
a memory storing instructions (Fig. 11, reference numerals 1120, 1130, 1135); and
a processor in communication with the memory (Fig. 11, reference numerals 1110), wherein, when the processor executes the instructions (P28, L16-20), the processor is configured to cause the apparatus to:
obtain a video data, the video data comprising a current block of a current frame (P25, L10-15; Fig. 8; It teaches that the video decoder 800 is an image-decoding or video-decoding circuit that receives a bitstream 895 and decodes the content of the bitstream into pixel data of video frames for display 855);
determine, based on the video data, a motion vector corresponding to a reference block associated with the current block of the current frame (P26, L4-8; Fig. 8; it teaches generating the predicted MVs based on reference MVs...retrieving the reference MVs of previous video frames from the MV buffer 865);
determine a scaling factor and encode a first syntax explicitly into the video bitstream (P26, L27-31; P27, L21-26; Fig. 9; It teaches that the video decoder 800 retrieves an entry from the history-based table 950 to obtain the scale and offset parameter values);
determine a template used to derive an offset value (Fig. 1; P3, L12-16; The parameters of the function can be denoted by a scale and an offset, which forms a linear equation, that is, α*p[x]+β to compensate illumination changes, where p[x] is a reference sample (template) pointed to by the MV at a location x on the reference picture, wherein α and β can be derived based on the current block template and the reference block template) by:
signaling a second syntax explicitly in the video bitstream for indicating the template used to derive the offset value (P3, L12-23; P26, L14-15; Fig. 2A shows the syntax where the position of the template [x0, y0] is obtained to determine the offset value lic_offset_idx[x0][y0]) to comprise at least one of above samples, above- right samples, left samples, bottom-left samples, above region, or left region, and
determining, based on the second syntax, the template used to derive the offset value;
derive the offset value based on the template (Fig. 1; P3, L12-16);
generate a predicted block based on the reference block according to a linear equation, the linear equation being associated with the scaling factor and the offset value (P26, L19-23; Fig. 9; It teaches that the quantized LIC scale and offset parameters 925 are used by a LIC linear model 910 to compute a LIC prediction block 960); and
encode the current block based on the predicted block into the video bitstream (P27, L27-30; It teaches that the decoder decodes (at block 1040) the current block by using the prediction block to reconstruct the current block).
Although Chubach et al. teach the syntax where the position of the template [x0, y0] is obtained to determine the offset value lic_offset_idx[x0][y0], as described in P3, L12-23 and P26, L14-15 and shown in Fig. 2A, they do not explicitly teach the template position to be specifically one of above samples, above-right samples, left samples, bottom-left samples, above region, or left region.
However, Chen et al., in the same field of endeavor (Abstract), teach a video decoding method that shows a LIC (Local Illumination Compensation) model parameter calculation including both the scaling and offset values (Figs. 2, 6, 8 show the offset values being calculated from the templates at positions above, left, etc.) based on syntax elements which indicate whether the video encoding/decoding uses LIC or not ([0009]; [0157]) and also the template position to be selected for model parameter calculations including offset values ([0012]-[0013]; [0157]).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chubach et al.'s invention of local illumination compensation with coded parameters to include Chen et al.'s usage of syntax elements to obtain the template position for calculating offset values, because it reduces the complexity of the spatial LIC derivation (Chen et al.; [0083]; it teaches that complexity reduction is obtained by one improved spatial LIC algorithm for the case when both above and left spatial neighboring blocks are available).
Regarding claim 26 (Currently Amended), Chubach et al. teach a non-transitory computer readable storage medium (Fig. 11, reference numerals 1120, 1130, 1135) storing a video bitstream of a video that is generated by a video encoding method, the video encoding method (P23, L10-11) comprising:
obtain a video data, the video data comprising a current block of a current frame (P25, L10-15; Fig. 8; It teaches that the video decoder 800 is an image-decoding or video-decoding circuit that receives a bitstream 895 and decodes the content of the bitstream into pixel data of video frames for display 855);
determine, based on the video data, a motion vector corresponding to a reference block associated with a current block of a current frame (P26, L4-8; Fig. 8; it teaches generating the predicted MVs based on reference MVs...retrieving the reference MVs of previous video frames from the MV buffer 865);
determine a scaling factor and signal a first syntax explicitly in the video bitstream (P26, L27-31; P27, L21-26; Fig. 9; It teaches that the video decoder 800 retrieves an entry from the history-based table 950 to obtain the scale and offset parameter values);
determine a template used to derive an offset value (Fig. 1; P3, L12-16; The parameters of the function can be denoted by a scale and an offset, which forms a linear equation, that is, α*p[x]+β to compensate illumination changes, where p[x] is a reference sample (template) pointed to by the MV at a location x on the reference picture, wherein α and β can be derived based on the current block template and the reference block template) by:
obtaining a second syntax explicitly signaled in the coded video bitstream for indicating the template used to derive the offset value (P3, L12-23; P26, L14-15; Fig. 2A shows the syntax where the position of the template [x0, y0] is obtained to determine the offset value lic_offset_idx[x0][y0]) to comprise at least one of above samples, above-right samples, left samples, bottom-left samples, above region, or left region, and
determining, based on the second syntax, the template used to derive the offset value;
derive the offset value based on the template (Fig. 1; P3, L12-16);
generate a predicted block based on the reference block according to a linear equation, the linear equation being associated with the scaling factor and the offset value (P26, L19-23; Fig. 9; It teaches that the quantized LIC scale and offset parameters 925 are used by a LIC linear model 910 to compute a LIC prediction block 960); and
encode the current block based on the predicted block into the video bitstream (P27, L27-30; It teaches that the decoder decodes (at block 1040) the current block by using the prediction block to reconstruct the current block).
Although Chubach et al. teach the syntax where the position of the template [x0, y0] is obtained to determine the offset value lic_offset_idx[x0][y0], as described in P3, L12-23 and P26, L14-15 and shown in Fig. 2A, they do not explicitly teach the template position to be specifically one of above samples, above-right samples, left samples, bottom-left samples, above region, or left region.
However, Chen et al., in the same field of endeavor (Abstract), teach a video decoding method that shows a LIC (Local Illumination Compensation) model parameter calculation including both the scaling and offset values (Figs. 2, 6, 8 show the offset values being calculated from the templates at positions above, left, etc.) based on syntax elements which indicate whether the video encoding/decoding uses LIC or not ([0009]; [0157]) and also the template position to be selected for model parameter calculations including offset values ([0012]-[0013]; [0157]).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chubach et al.'s invention of local illumination compensation with coded parameters to include Chen et al.'s usage of syntax elements to obtain the template position for calculating offset values, because it reduces the complexity of the spatial LIC derivation (Chen et al.; [0083]; it teaches that complexity reduction is obtained by one improved spatial LIC algorithm for the case when both above and left spatial neighboring blocks are available).
Claims 6, 8, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Chubach et al. (WO 2023/093863 A1) (Disclosed in IDS) in view of Chen et al. (US PGPub 2024/0214553 A1), and further in view of Koo et al. (US PGPub 2021/0160490 A1).
Regarding claim 6 (Original), Chubach et al. and Chen et al. teach the method according to claim 5.
Although Chen et al. in Fig. 13 show both above and above-right samples, they do not show that the above samples of the current block and the above-right samples of the current block have the same width.
However, Koo et al., in the same field of endeavor (Abstract), teach a video decoding system in which the above samples of the current block and the above-right samples of the current block have the same width (Koo et al.; Figs. 24, 25 show that both the above samples and the above-right samples T have the same width).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chubach et al.'s invention of local illumination compensation with coded parameters and Chen et al.'s usage of syntax elements to obtain the template position for calculating offset values, to include Koo et al.'s usage of either above or left samples for prediction calculation, because when width is greater than height, prediction performed in a bottom-left direction may be more accurate than prediction performed in a top-right direction, and when height is greater than width, prediction performed in a top-right direction may be more accurate than prediction performed in a bottom-left direction. Accordingly, transforming the index of the aforementioned intra prediction mode may be more advantageous (Koo et al.; [0297]).
Regarding claim 8 (Original), Chubach et al. and Chen et al. teach the method according to claim 7.
Although Chen et al. in Fig. 13 show both left and bottom-left samples, they do not show that the left samples of the current block and the bottom-left samples of the current block have the same height.
However, Koo et al., in the same field of endeavor (Abstract), teach a video decoding system in which the left samples of the current block and the bottom-left samples of the current block have the same height (Koo et al.; Figs. 24, 25 show that both the left samples and the bottom-left samples L have the same height).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chubach et al.'s invention of local illumination compensation with coded parameters and Chen et al.'s usage of syntax elements to obtain the template position for calculating offset values, to include Koo et al.'s usage of either above or left samples for prediction calculation, because when width is greater than height, prediction performed in a bottom-left direction may be more accurate than prediction performed in a top-right direction, and when height is greater than width, prediction performed in a top-right direction may be more accurate than prediction performed in a bottom-left direction. Accordingly, transforming the index of the aforementioned intra prediction mode may be more advantageous (Koo et al.; [0297]).
Regarding claim 13 (Original), Chubach et al. and Chen et al. teach the method according to claim 11, wherein the subset of samples in the above and left samples of the current block is determined as the template used to derive the offset value.
However, Chubach et al. and Chen et al. do not explicitly teach that derivation of the offset value comprises:
in response to a block width of the current block is greater than a block height of the current block, determining only the above samples of the current block as the template used to derive the offset value;
in response to a block height of the current block is greater than a block width of the current block, determining only the left samples of the current block as the template used to derive the offset value; or
in response to a block height of the current block is equal to a block width of the current block, determining both the above samples and the left samples of the current block as the template used to derive the offset value.
However, Koo et al., in the same field of endeavor (Abstract), teaches a video decoding system wherein, in response to a block width of the current block being greater than a block height of the current block, only the above samples of the current block are determined as the template used to derive the offset value (Fig. 24; [0297]; teaching that when the width of a block is greater than its height, in general, reference samples located on the upper side are closer to the locations within the block to be predicted than reference samples located on the left, such that only the above samples of the current block serve as the template used to derive the offset value);
in response to a block height of the current block being greater than a block width of the current block, only the left samples of the current block are determined as the template used to derive the offset value (Fig. 25; [0297]; teaching that when the height of a block is greater than its width, in general, the left reference samples are closer to the locations within the block to be predicted than the upper reference samples, such that only the left samples of the current block serve as the template used to derive the offset value); or
in response to a block height of the current block being equal to a block width of the current block, both the above samples and the left samples of the current block are determined as the template used to derive the offset value.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chubach et al.'s invention of local illumination compensation with coded parameters with Koo et al.'s usage of either above or left samples for the prediction calculation, because when the width is greater than the height, prediction performed in a bottom-left direction may be more accurate than prediction performed in a top-right direction, and when the height is greater than the width, prediction performed in a top-right direction may be more accurate than prediction performed in a bottom-left direction. Accordingly, transforming the index of the aforementioned intra prediction mode may be more advantageous (Koo et al.; [0297]).
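For illustration only (not part of the record of this Office action), the conditional template-selection logic recited in claim 13 and mapped to Koo et al. [0297] above can be sketched as the following hypothetical helper; the function name and return labels are the Examiner's illustrative choices, not terms from the claims or the references.

```python
def select_template(block_width: int, block_height: int) -> str:
    """Illustrative sketch of the claimed template selection for
    deriving the offset value: choose which neighboring samples of
    the current block form the template."""
    if block_width > block_height:
        # wider block: only the above samples form the template
        return "above"
    if block_height > block_width:
        # taller block: only the left samples form the template
        return "left"
    # square block: both the above and the left samples form the template
    return "above+left"
```

This sketch merely restates the three mutually exclusive conditions of claim 13 as code; it does not reflect any implementation disclosed by Chubach et al., Chen et al., or Koo et al.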
Response to Arguments
Applicant’s arguments with respect to the independent claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
The amendments made to claims 1 and 25-26 have successfully addressed the minor claim informalities, and the corresponding objections are therefore withdrawn.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
“METHOD FOR PROCESSING VIDEO SIGNAL BY USING LOCAL ILLUMINATION COMPENSATION (LIC) MODE, AND APPARATUS THEREFOR” – Kim et al., US PGPub 2024/0406410 A1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAINUL HASAN whose telephone number is (571)272-0422. The examiner can normally be reached on MON-FRI: 10AM-6PM, Alternate FRIDAYS, EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JAY PATEL can be reached on (571)272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Mainul Hasan/
Primary Examiner, Art Unit 2485