DETAILED ACTION
1. This Office action is in response to U.S. Patent Application No. 18/461,759, filed on 12/31/2025, with an effective filing date of 4/27/2023. Claims 1-20 are pending.
Claim Rejections - 35 USC § 103
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
4. Claim(s) 1-5, 12-16 & 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Karczewicz et al. US 2022/0103816 A1 in view of Pu et al. US 2015/0016512 A1.
Per claims 1, 19 & 20, Karczewicz et al. discloses a method for processing video data in a decoder, the method comprising: receiving a video bitstream comprising a current block and a reference block, the reference block being used for predicting the current block and being identified by a motion vector associated with the current block (para: 172, e.g. motion estimation unit 222 may form one or more motion vectors (MVs) that defines the positions of the reference blocks in the reference pictures relative to the position of the current block in a current picture. Motion estimation unit 222 may then provide the motion vectors to motion compensation unit 224); a scale factor being stored in a lookup table among two or more lookup tables maintained by the decoder for storing candidate scale factors or candidate scale factor differences (para: 120, e.g. video encoder 200 and video decoder 300 may be configured with a lookup table that maps index values to scaling factors, and video encoder 200 may encode a value representing an index value corresponding to the selected scaling factor),
Karczewicz et al. fails to explicitly disclose the remaining claim limitations.
Pu et al., however, in the same field of endeavor, teaches receiving, from the video bitstream, a first syntax element indicating the scale factor (α) that is used for predicting the current block (para: 96 & 99, e.g. Video encoder 20 may also determine the scale factor predictor as the scale factor alpha value of a left-neighboring block relative to a current block); the candidate scale factor differences being differences between the candidate scale factors and a threshold value (para: 131, e.g. the statistics may be a number of previously-coded blocks, e.g., in a slice, picture, or set of pictures, having non-zero scale factors, for example. If the number of previously-coded blocks having non-zero scale factors is less than a pre-defined threshold, video encoder 20 may set the flag value to zero for the current slice); selecting the lookup table that stores the scale factor (para: 127, e.g. video encoder 20 may also select a range of alpha values rather than selecting a map of alpha values. If video encoder 20 selects a range of scale factors rather than a map, video encoder 20 may select a uniform or non-uniform set of scale factor values within the range of scale factors); determining the scale factor based on a value of the first syntax element and the selected lookup table (para: 127-130, e.g. video encoder 20 may select a scale factor from a range of [0, 7] so that video encoder 20 may directly encode the scale factor using a fixed length code. Selecting a range of [0, 7] may enable video encoder 20 to signal a selected scale factor without coding a map between indices and scale factor values, which may improve the efficiency of the coded video bitstream); predicting the current block based on the reference block, the scale factor, and an offset (para: 162-163, e.g. motion compensation unit 72 and/or intra-prediction unit 74 may be configured to determine the scale factor for one or more blocks based on a scale factor of a neighboring block.
For example, motion compensation unit 72 and/or intra-prediction unit 74 may determine a scale factor predictor from a neighboring block relative to a current block); and reconstructing the current block based on the predicted current block (para: 165, e.g. once video decoder 30 generates reconstructed video, video decoder 30 may output the reconstructed video blocks as decoded video (e.g., for display or storage)).
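For clarity of the mapping above, the cited index-to-scale-factor lookup and weighted prediction may be sketched as follows. This is a minimal illustration only: the table contents, function names, and sample values are hypothetical and are not taken from either Karczewicz et al. or Pu et al.

```python
# Hypothetical lookup tables mapping a signaled index value to a scale
# factor, per the cited passages (table contents are illustrative only).
LUT_A = [1.0, 1.125, 1.25, 1.5]
LUT_B = [0.5, 0.75, 0.875, 1.0]

def predict_block(ref_samples, table_idx, scale_idx, offset):
    """Select a lookup table (analogous to the second syntax element),
    read the scale factor from it (analogous to the first syntax element),
    and form the prediction pred = alpha * ref + offset per sample."""
    lut = (LUT_A, LUT_B)[table_idx]
    alpha = lut[scale_idx]
    return [alpha * s + offset for s in ref_samples]

# Example: table A, index 2 (alpha = 1.25), offset -3.
print(predict_block([100, 104, 108], 0, 2, -3))  # → [122.0, 127.0, 132.0]
```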
Therefore, in view of the disclosures of Pu et al., it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pu et al. and Karczewicz et al. in order to determine luma residual samples for a block of video data, determine predictive chroma residual samples for the block, scale the luma residual samples with a scale factor to produce scaled luma residual samples, and determine updated chroma residual samples based on the predictive chroma residual samples and the scaled luma residual samples.
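The residual-scaling operation in the stated motivation to combine may be sketched as follows. This is an illustrative model only; the function name and sample values are hypothetical and are not drawn from either reference.

```python
def update_chroma_residuals(luma_res, pred_chroma_res, scale):
    """Scale the luma residual samples by the scale factor, then add them
    to the predictive chroma residuals to obtain updated chroma residuals."""
    scaled = [scale * r for r in luma_res]
    return [c + s for c, s in zip(pred_chroma_res, scaled)]

# Example: luma residuals [4, -2], predictive chroma residuals [10, 10],
# scale factor 0.5.
print(update_chroma_residuals([4, -2], [10, 10], 0.5))  # → [12.0, 9.0]
```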
Per claim 2, Pu et al. further teaches the method of claim 1, wherein determining the lookup table comprises determining the lookup table based on decoded information associated with the current block or the reference block, the decoded information comprising at least one of: a block size; an inter prediction mode; a reference frame for the current block; a distance between a reference frame associated with the reference block and a current frame to which the current block belongs; a temporal level of the current frame in a Group of Pictures (GOP) structure; a temporal level of the reference frame in the GOP structure; an index identifying a scale factor stored in one of the two or more lookup tables, the scale factor being used for decoding a neighboring block of the current block; or a value of a scale factor selected for decoding the neighboring block of the current block (para: 125-130, 160-163).
Per claim 3, Pu et al. further teaches the method of claim 1, wherein determining the lookup table comprises: receiving, from the video bitstream, a second syntax element indicating the lookup table among the two or more lookup tables maintained by the decoder; and determining the lookup table based on a value of the second syntax element (para: 123, 125-130).
Per claim 4, Pu et al. further teaches the method of claim 3, wherein a context used for entropy encoding the second syntax element is based on decoded information associated with the current block or the reference block, the decoded information comprising at least one of: a block size; an inter prediction mode; a reference frame for the current block; a distance between a reference frame associated with the reference block and a current frame to which the current block belongs; a temporal level of the current frame in a GOP structure; a temporal level of the reference frame in the GOP structure; an index identifying a scale factor stored in one of the two or more lookup tables, the scale factor being used for decoding a neighboring block of the current block; or a value of a scale factor selected for decoding the neighboring block of the current block (para: 102, 125-130, & 132).
Per claim 5, Karczewicz et al. further teaches the method of claim 1, wherein the candidate scale factor differences in each of the two or more lookup tables are sorted (para: 184-186).
Per claim 12, Karczewicz et al. further teaches the method of claim 1, further comprising deriving the offset using the following equation: β = cur_template_mean − α * ref_template_mean, wherein cur_template_mean is an average of samples in a template of the current block and ref_template_mean is an average of samples in a template of the reference block (para: 137-138, e.g. video encoder 200 may signal the values of the scaling factors and the offset as syntax elements in a slice header, a picture header, a picture parameter set (PPS), an adaptive parameter set (APS), or any other high level syntax element body).
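The offset derivation recited in claim 12 may be sketched as follows; the names mirror the equation above, and the sample values are illustrative only.

```python
def derive_offset(cur_template, ref_template, alpha):
    """beta = cur_template_mean - alpha * ref_template_mean, computed from
    template samples of the current block and of the reference block."""
    cur_mean = sum(cur_template) / len(cur_template)
    ref_mean = sum(ref_template) / len(ref_template)
    return cur_mean - alpha * ref_mean

# Example: current-template mean 20, reference-template mean 10, alpha 2.0.
print(derive_offset([10, 20, 30], [5, 10, 15], 2.0))  # → 0.0
```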
Per claim 13, Karczewicz et al. further teaches the method of claim 1, further comprising: deriving the offset from neighboring reconstructed samples of the current block and neighboring reconstructed samples of the reference block (para: 229).
Per claim 14, Karczewicz et al. further teaches the method of claim 1, wherein a precision of the offset is higher than a precision of the scale factor (para: 120).
Per claim 15, Karczewicz et al. further teaches the method of claim 1, further comprising: determining whether to derive the offset or obtain the offset using a third syntax element carried in the video bitstream based on one of: an explicitly signaled flag; a coding mode of the current block; a coding mode of a neighboring block of the current block; or a number of neighboring blocks of the current block which are coded in BAWP mode (para: 137-138, e.g. video encoder 200 may signal the values of the scaling factors and the offset as syntax elements in a slice header, a picture header, a picture parameter set (PPS), an adaptive parameter set (APS), or any other high level syntax element body); and upon a determination to obtain the offset using the third syntax element, determining the offset based on a value of the third syntax element (para: 225, e.g. video encoder 200 may encode data directly representing the scaling factor(s) and/or offset(s), while in other examples, video encoder 200 may encode index value(s) corresponding to the selected scaling factor(s) and/or offset(s) in corresponding look-up tables).
Per claim 16, the method of claim 1, wherein the current block is coded in one of the following modes: a Block Adaptive Weighted Prediction (BAWP) mode; or a Local Illumination Compensation (LIC) mode.
Allowable Subject Matter
5. Claims 6-11 & 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
6. Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
The Examiner respectfully suggests that the independent claim limitations be further clarified.
Conclusion
7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Seregin et al., US 2020/0228796 A1, e.g. an example method of coding video data includes selecting, by one or more processors, a sub-set of a plurality of neighboring samples of a current block in a current picture, wherein the plurality of neighboring samples includes a row of samples adjacent to a top row of the current block in the current picture and a column of samples adjacent to a left column of the current block in the current picture.
Park et al., US 2024/0357130 A1, e.g. an image decoding method performed by a decoding device, according to the present disclosure, comprises the steps of: deriving an intra block copy (IBC) mode as a prediction mode of a current block; and deriving one or more block vectors and/or scaling information for the IBC mode.
8. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRFAN HABIB whose telephone number is (571)270-7325. The examiner can normally be reached Mon-Th 9AM-7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Irfan Habib/Examiner, Art Unit 2485