DETAILED ACTION
This office action is responsive to the Applicant’s reply filed on 02/25/2026.
Response to Arguments
Applicant’s arguments, filed on 02/25/2026, with respect to claim objections have been fully considered and are persuasive. The objection of claim 2 has been withdrawn.
Applicant’s arguments, filed on 02/25/2026, with respect to claim rejections under 35 U.S.C. 112 have been fully considered and are persuasive. The rejection of claim 3 under 35 U.S.C. 112(b) has been withdrawn.
Applicant's arguments, filed on 02/25/2026, with respect to claim rejections under 35 U.S.C. 103 have been fully considered but they are not persuasive.
(1) Applicant’s argument: “1) First Information (Number of Sub-areas): According to the disclosure of Chong, the frame is always fixedly divided into 16 regions when in the region adaptive mode. Because the number of regions is predetermined and static (i.e., fixed at 16), there is absolutely no need or motivation to signal or transmit separate information ‘specifying the number of sub-areas.’ The number constitutes a fixed standard rather than variable information to be specified. Therefore, Chong neither teaches nor suggests the ‘first information’ as claimed.”
Examiner’s response: As discussed in the previous office action, in paragraph 0059 and Fig. 2, Chong teaches that an image frame is divided into 16 regions, i.e., 16 sub-areas, and that each region is represented by an index number. Chong then states: “a video encoder may signal, in the encoded video bitstream, the index number of the set of filter coefficients used by the video encoder for a particular region”. Therefore, if one index number is signaled, the number of regions is one; likewise, if two index numbers are signaled, the number of regions is two. The number of signaled index numbers thus specifies the number of regions, i.e., the number of sub-areas.
(2) Applicant’s argument: “2) Second Information (Position of Sub-areas): Furthermore, regarding the indices described in Chong, the Applicant notes that these indices are employed to identify specific sets of linear filter coefficients applied to each region, not to specify the spatial positions of the regions themselves. The passage in Chong clarifies that the indices relate to the signaling of filter coefficients (e.g., ‘each region can have one set of linear filter coefficients’). In other words, the index in Chong is logically decoupled from the spatial coordinates or position of the region. Consequently, Chong fails to teach or suggest the ‘second information specifying a position of each of the one or more sub-areas.’”
Examiner’s response: In paragraph 0059 and Fig. 2, Chong teaches that an image frame is divided into 16 regions, i.e., 16 sub-areas. Chong then states: “Each of these 16 regions is represented by a number (0-15), … The numbers (0-15) may be index numbers…”. That is, each region is identified by its index number. Moreover, Fig. 2 of Chong shows that each index number corresponds to a unique position on the image frame. That is, the decoder obtains the position of the sub-area on the image according to the received index number.
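For purposes of illustration only, the relationship the response describes, from a region index number to a unique position in a fixed grid of 16 regions, can be sketched as follows. The 4×4 row-major layout and the function name are assumptions for this hypothetical sketch; Chong’s Fig. 2 shows 16 indexed regions, but the exact ordering used here is not taken from the reference.

```python
# Hypothetical sketch: recovering a region's grid position from its index,
# assuming a 4x4 row-major layout of 16 regions (indices 0-15). The layout
# and ordering are illustrative assumptions, not Chong's disclosed mapping.
def region_position(index, regions_per_row=4):
    """Return (row, col) of the region within the frame grid."""
    if not 0 <= index <= 15:
        raise ValueError("index must be in 0-15 for a 16-region frame")
    return index // regions_per_row, index % regions_per_row

# Example: index 5 falls in the second row, second column.
print(region_position(5))   # (1, 1)
print(region_position(15))  # (3, 3)
```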
(3) Applicant’s argument: “Equation 2 in Drugeon is clearly directed to calculating ‘the number of padding rows N’, i.e., the size of the padding area.
In stark contrast, Feature 2 of the subject application employs an equation using the image width or height to explicitly determine ‘the sample inside the reconstructed image’ itself (e.g., identifying which specific sample coordinate within the reconstructed image is to be used for padding).
Merely using a width (‘w’) in a formula does not equate to the claimed invention. The equation in Drugeon calculates a dimension (size), whereas the equation in the subject application identifies a specific reference sample. Since the targets of the calculations are distinct and unrelated, Drugeon fails to disclose Feature 2.”
Examiner’s response:
In paragraph 0099, Chong discloses padding an image slice using mirrored pixels. Chong explicitly states: “Mirrored pixels reflect the pixel values on the inside of the slice”. That is, the pixels inside the image slice are used to pad the image slice.
As acknowledged in the previous office action, Chong does not explicitly disclose using a width of the image slice to determine the size of the padding area.
Drugeon teaches that a width (‘w’) may be used to determine the size of a padding area.
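For purposes of illustration only, a width-based padding-size calculation of the kind attributed to Drugeon can be sketched as below. Drugeon’s actual Equation 2 is not reproduced here; this hypothetical variant simply computes how many padding columns bring the width w up to the next multiple of a block size b, and the function name and block size are assumptions.

```python
# Hypothetical sketch of a width-based padding-size calculation (not
# Drugeon's actual Equation 2): number of padding columns needed so the
# padded width is a multiple of a block size b.
def num_padding_columns(w, b=16):
    """Return how many columns of padding make w a multiple of b."""
    return (-w) % b

print(num_padding_columns(1920))  # 0  (1920 is already a multiple of 16)
print(num_padding_columns(1919))  # 1
```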
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Chong et al. (US 2013/0101016 A1) in view of Drugeon et al. (US 2012/0320970 A1).
Consider claim 1:
Chong discloses a method of decoding an image with a decoding apparatus (see Fig. 12 and paragraph 0127, where Chong describes a video decoder 30 which decodes an encoded video sequence), comprising:
obtaining information on one or more sub-areas included in the image from a bitstream (see Fig. 2 and paragraph 0059, where Chong describes that an image frame 120 is divided into 16 regions, each region is indicated by an index number, and the video decoder obtains the index number from an encoded video bitstream received from a video encoder);
obtaining the one or more sub-areas included in the image based on the information on the one or more sub-areas (see paragraph 0059, where Chong describes that the video decoder may perform decoding process for that region based on the index);
obtaining a reconstructed image by decoding each of the one or more sub-areas (see Fig. 12 and paragraph 0136, where Chong describes that the video decoder 30 includes a summer 80 which generates reconstructed blocks to form decoded blocks of the image frame; see paragraph 0059, where Chong describes that a decoding process is performed for each region);
performing in-loop filtering on the reconstructed image (see Fig. 12 and paragraph 0137, where Chong describes a loop filter unit 79 which performs loop filtering on the reconstructed blocks);
obtaining image processing information from the bitstream (see paragraph 0059, where Chong describes that the video decoder obtains the index number of a region of an image frame from the received video bitstream); and
performing an image processing on the in-loop filtered reconstructed image based on the image processing information (see Fig. 12 and paragraphs 0139-0140, where Chong describes that the output of the loop filter unit 79 is used to generate decoded video; see paragraph 0059, where Chong describes that the decoded video is generated in decoding process using the index number of each region),
wherein the information on the one or more sub-areas comprises first information specifying the number of sub-areas included in the image and second information specifying a position of each of the one or more sub-areas (see paragraph 0059, where Chong describes that the video decoder obtains information specifying a total of 16 regions and an index indicating each region);
wherein the reconstructed image is obtained by generating a residual block for a block included in the image (see Fig. 12 and paragraph 0136, where Chong describes that the summer 80 generates the reconstructed blocks based on received residual blocks);
wherein the in-loop filtering comprises a deblocking filtering (see paragraph 0137, where Chong describes that the loop filtering may include deblocking),
wherein the image processing comprises padding at least one region to the reconstructed image (see paragraphs 0099-0100, where Chong describes that the loop filter may use padded data on the reconstructed block),
wherein the padding uses a sample inside the reconstructed image for the region to be padded (see paragraph 0099, where Chong describes that the padded data takes the data value on the inside of the image boundary), and
wherein the sample inside the reconstructed image is determined using at least one equation (see paragraph 0100, where Chong describes that the padding is determined using an equation a+b=1, where the values “a” and “b” are predefined values based on training).
Chong does not specifically disclose: the equation using a width of the image or a height of the image.
Drugeon teaches: the equation using a width of the image or a height of the image (see Fig. 7 and paragraphs 0140-0141, where Drugeon describes an image decoding apparatus which includes a padding region calculation unit 402 that calculates a number of padding pixels based on original image size; see paragraphs 0154-0156, where Drugeon describes that the padding pixels can be calculated using Equation 2 that includes the width w of the original image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Chong to include the equation using a width of the image or a height of the image, as taught by Drugeon, in order to improve coding efficiency, as discussed by Drugeon (see paragraph 0154).
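For purposes of illustration only, the combined teachings relied upon above, Chong’s mirrored padding using samples inside the reconstructed image and an equation involving the image width, can be sketched as follows. The mirror formula, function name, and sample values are hypothetical illustrations, not the disclosed equations of either reference.

```python
# Hypothetical sketch: pad a row of reconstructed samples beyond the right
# image boundary by mirroring samples from inside the image, where the
# source column is computed with an equation using the image width w
# (reflection about the last column: src = 2*w - 2 - x). The formula is an
# illustrative assumption, not Chong's or Drugeon's disclosed equation.
def mirrored_sample_index(x, w):
    """Map an out-of-bounds column x (>= w) to an in-image source column."""
    if x < w:
        return x              # already inside the image
    return 2 * w - 2 - x      # reflect about column w-1

row = [10, 20, 30, 40]        # reconstructed row of samples, w = 4
w = len(row)
padded = [row[mirrored_sample_index(x, w)] for x in range(w + 2)]
print(padded)  # [10, 20, 30, 40, 30, 20]
```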
Consider claims 2 and 4:
Chong discloses a method of encoding an image with an encoding apparatus (see Fig. 11 and paragraph 0106, where Chong describes a video encoder 20; see paragraph 0057, where Chong describes that the video encoder may be implemented using a non-transitory computer-readable medium; see Fig. 11 and paragraph 0041, where Chong describes that the video encoder generates a bitstream that includes encoded video data; see Fig. 12 and paragraph 0128, where Chong describes that a video decoder 30 performs a decoding process on the encoded bitstream received from the video encoder), comprising:
obtaining one or more sub-areas included in the image, information on the one or more sub-areas included in the image being encoded into a bitstream (see Fig. 2 and paragraph 0059, where Chong describes that an image frame 120 is divided into 16 regions, each region is indicated by an index number, and the video encoder signals the index number of each region in the encoded video bitstream);
encoding the image into the bitstream by encoding each of the one or more sub-areas (see Fig. 11 and paragraph 0107, where Chong describes that the video encoder 20 encodes the image frame which includes 16 regions; see paragraph 0041, where Chong describes that the encoder generates a bitstream that includes the encoded video data); and
encoding image processing information into the bitstream (see paragraph 0059, where Chong describes that the index of each region is encoded into a video bitstream),
wherein the information on the one or more sub-areas comprises first information specifying the number of sub-areas included in the image and second information specifying a position of each of the one or more sub-areas (see paragraph 0059, where Chong describes that the video decoder obtains information specifying a total of 16 regions and an index indicating each region);
wherein the image is encoded by generating a residual block for a block included in the image (see Fig. 11 and paragraph 0114, where Chong describes that the video encoder 20 includes a summer 50 which generates residual data);
wherein the reconstructed image is obtained by decoding each of the one or more sub-areas (see Fig. 12 and paragraph 0136, where Chong describes a video decoder 30 which includes a summer 80 that generates reconstructed blocks to form decoded blocks of an image frame; see paragraph 0059, where Chong describes that a decoding process is performed for each region),
wherein in-loop filtering is performed on the reconstructed image (see Fig. 12 and paragraph 0137, where Chong describes a loop filter unit 79 which performs loop filtering on the reconstructed blocks),
wherein an image processing is performed on the in-loop filtered reconstructed image based on the image processing information (see Fig. 12 and paragraphs 0139-0140, where Chong describes that the output of the loop filter unit 79 is used to generate decoded video; see paragraph 0059, where Chong describes that the decoded video is generated in decoding process using the index number of each region),
wherein the in-loop filtering comprises a deblocking filtering (see paragraph 0137, where Chong describes that the loop filtering may include deblocking),
wherein the image processing comprises padding at least one region to the reconstructed image (see paragraphs 0099-0100, where Chong describes that the loop filter may use padded data on the reconstructed block),
wherein the padding uses a sample inside the reconstructed image for the region to be padded (see paragraph 0099, where Chong describes that the padded data takes the data value on the inside of the image boundary), and
wherein the sample inside the reconstructed image is determined using at least one equation (see paragraph 0100, where Chong describes that the padding is determined using an equation a+b=1, where the values “a” and “b” are predefined values based on training).
Chong does not specifically disclose: the equation using a width of the image or a height of the image.
Drugeon teaches: the equation using a width of the image or a height of the image (see Fig. 7 and paragraph 0140, where Drugeon describes an image decoding apparatus which includes a padding region calculation unit 402 that calculates a number of padding pixels based on original image size; see paragraphs 0154-0156, where Drugeon describes that the padding pixels can be calculated using Equation 2 that includes the width w of the original image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Chong to include the equation using a width of the image or a height of the image, as taught by Drugeon, in order to improve coding efficiency, as discussed by Drugeon (see paragraph 0154).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LIHONG YU whose telephone number is (571)270-5147. The examiner can normally be reached 10:00 am-6:00 pm EST Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hannah S. Wang can be reached at (571)272-9018. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LIHONG YU/Primary Examiner, Art Unit 2631