Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office Action is in response to application 19/104,993, filed on 02/19/2025.
Claims 1–6, 9–15, and 30–35 have been examined and are pending in this application.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/19/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1–6, 9–15, and 30–35 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (US 2019/0082193 A1) in view of Fabrice et al. (“EE2-2.2: Motion compensated picture boundary padding”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, 27th Meeting, by teleconference, 13–22 July 2022).
Regarding claim 1, Sun discloses: “a method [see para: 0016; FIG. 5 illustrates an example of motion compensated boundary pixel padding (MCP) in accordance with one or more techniques of this disclosure], the method comprising:
based on the first motion vector, determining a position of a first reference block, wherein the first reference block is located within a first reference picture [see para: 0087; FIG. 6 illustrates details of MCP in accordance with one or more techniques of this disclosure. In the example of FIG. 6, Pi is a padded pixel in a padded block 600, denoted PBlkj, while the block width of padded block 600 is MX and height of padded block 600 is MY. A video coder (e.g., video encoder 20 or video decoder 30) derives a padding motion vector 602, denoted PMVj, which points to a reference padded block 604, denoted RPBlkj. RPBlkj contains a reference padded pixel, Ri. To derive PMVj, the video coder can use motion vectors in a boundary block 606, denoted BBlkj];
determining a first distance, the first distance being a distance from a boundary of the first reference block to a corresponding boundary of the first reference picture [see para: 0089; The video coder may determine the size of padded block 600 in various ways. One example of size setting is that MX is 4, MY is 32, NX is 4, NY is 4. Alternatively, the video coder may derive MY from PMVj. One example is that MY is a vertical distance that contains a maximum count of all corresponding pixels, Ri, are inside the boundary of Frame M];
selecting a candidate dimension from a set of two or more candidate dimensions, the set of two or more candidate dimensions including the first candidate dimension and the second candidate dimension [see para: 0087; FIG. 6 illustrates details of MCP in accordance with one or more techniques of this disclosure. In the example of FIG. 6, Pi is a padded pixel in a padded block 600, denoted PBlkj, while the block width of padded block 600 is MX and height of padded block 600 is MY. A video coder (e.g., video encoder 20 or video decoder 30) derives a padding motion vector 602, denoted PMVj, which points to a reference padded block 604, denoted RPBlkj. RPBlkj contains a reference padded pixel, Ri. To derive PMVj, the video coder can use motion vectors in a boundary block 606, denoted BBlkj. A block width of boundary block 606 is denoted Nx and a block height of boundary block 606 is denoted NY. After obtaining Ri, the video coder uses Ri to derive a padded value for Pi. That is, for each value i from 0 to j−1, where j is the number of samples in padded block 600, the video coder may use Ri to derive a padded value for Pi. In some examples, boundary block 606 (i.e., BBlKj) is a coded block];
determining whether the selected candidate dimension is greater than zero; and as a result of determining that the selected candidate dimension is greater than zero, determining at least one sample for the picture padding block based on a motion vector associated with the selected candidate dimension [see para: 0094; After deriving a padding motion vector, PMVj, the video coder can obtain reference padded pixels, Ri, and use Ri to derive a padding value for Pi. In other words, the video coder may use a reference pixel Ri in a first picture (e.g., reference frame 610) to pad a pixel Pi outside a padding boundary (e.g., picture boundary, tile boundary, slice boundary, etc.) of a second picture (e.g., frame 614). In one example, Pi=Ri. Thus, the video coder may directly use Ri to pad Pi. In other words, for each value i from 0 to j−1, where j is the number of samples in padded block 600, the video coder may set Pi equal to Ri].
Sun does not explicitly disclose: “based on the first distance, determining a first candidate dimension for a picture padding block within the extended picture area;
based on the second motion vector, determining a position of a second reference block;
determining a second distance, the second distance being a distance from a boundary of the second reference block to a corresponding boundary of a reference picture in which the second reference block is located;
based on the second distance, determining a second candidate dimension for the picture padding block”.
However, Fabrice, from the same or similar field of endeavor teaches: “based on the first distance, determining a first candidate dimension for a picture padding block within the extended picture area [see page: 1; That is, in ECM-5.0, pictures are extended by an area surrounding the picture with a size of (maxCUwidth + 16) in each direction of the picture boundary. The pixel in the extended area is derived by repetitive boundary padding. When a reference block used for uni-prediction locates partially or completely out of the picture boundary (OOB), the repetitive padded pixel is used for motion compensation (MC)];
based on the second motion vector, determining a position of a second reference block [see Fig. 1; page: 2; For motion compensation padding, MV of a 4×4 boundary block is utilized to derive a M×4 or 4×M padding block. The value M is derived as the distance of the reference block to the picture boundary as shown on Figure 2. Moreover, M is set at least equal to 4 as soon as the motion vector points to a position internal to the reference picture bounds. If boundary block is intra coded, then MV is not available, and M is set equal to 0. If M is less than 64, the rest of the padded area is filled with the repetitive padded samples];
determining a second distance, the second distance being a distance from a boundary of the second reference block to a corresponding boundary of a reference picture in which the second reference block is located [see Fig. 2; page: 2; In case of bi-directional inter prediction, only one prediction direction, which has a motion vector pointing to the pixel position farther away from the picture boundary in the reference picture in terms of the padding direction, is used in MC boundary padding];
based on the second distance, determining a second candidate dimension for the picture padding block [see page: 1; That is, in ECM-5.0, pictures are extended by an area surrounding the picture with a size of (maxCUwidth + 16) in each direction of the picture boundary. The pixel in the extended area is derived by repetitive boundary padding. When a reference block used for uni-prediction locates partially or completely out of the picture boundary (OOB), the repetitive padded pixel is used for motion compensation (MC)]”.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the video compression techniques disclosed by Sun with the teachings of Fabrice as set forth above, in order to improve the extended picture area: pictures are extended by an area surrounding the picture with a size of (maxCUwidth + 16) in each direction of the picture boundary, and the parameter M is derived as the distance of the reference block to the picture boundary as shown in Figure 2 [Fabrice, see Figs. 1–2, pages 1–2].
Regarding claim 2, Sun and Fabrice disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Furthermore, Sun discloses: “wherein the method further comprises: setting a first dimension of the padding block equal to the selected candidate dimension; and setting a second dimension of the padding block [see para: 0089; The video coder may determine the size of padded block 600 in various ways. One example of size setting is that MX is 4, MY is 32, NX is 4, NY is 4. Alternatively, the video coder may derive MY from PMVj. One example is that MY is a vertical distance that contains a maximum count of all corresponding pixels, Ri, are inside the boundary of Frame M. To simplify implementation complexity, there can be a maximum value, MAX_MY. When MY is larger than MAX_MY, MY can be set as MAX_MY. MAX_MY can be 16, 32, 64, 128, or signaled. MAX_MY can be one or multiple of CTU size. Another constraint can be that NX equals MX or NX equals the minimum block size of the motion vector buffer].
Regarding claim 3, Sun and Fabrice disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Furthermore, Sun discloses: “wherein determining the first candidate dimension comprises setting the first candidate dimension to 0 as a result of determining that the resolution of the current picture is different than the resolution of the first reference picture [see para: 0135; In this example, the second picture is a different picture from the first picture (e.g., the first and second reference pictures may be in different access units or different layers) and the padded pixels are in a padding area surrounding the second picture. Inter-prediction processing unit 1020 and other components of video encoder 20 may then encode one or more blocks of the video data based on the padded pixels].
Regarding claim 4, Sun and Fabrice disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Furthermore, Sun discloses: “wherein determining the first candidate dimension comprises setting the first candidate dimension to 0 as a result of i) determining that the width of the current picture is different than the width of the first reference picture and ii) determining that the picture boundary block has its left or right boundary colliding with a boundary of the current picture [see para: 0087; FIG. 6 illustrates details of MCP in accordance with one or more techniques of this disclosure. In the example of FIG. 6, Pi is a padded pixel in a padded block 600, denoted PBlkj, while the block width of padded block 600 is MX and height of padded block 600 is MY. A video coder (e.g., video encoder 20 or video decoder 30) derives a padding motion vector 602, denoted PMVj, which points to a reference padded block 604, denoted RPBlkj. RPBlkj contains a reference padded pixel, Ri. To derive PMVj, the video coder can use motion vectors in a boundary block 606, denoted BBlkj. A block width of boundary block 606 is denoted Nx and a block height of boundary block 606 is denoted NY. After obtaining Ri, the video coder uses Ri to derive a padded value for Pi. That is, for each value i from 0 to j−1, where j is the number of samples in padded block 600, the video coder may use Ri to derive a padded value for Pi. In some examples, boundary block 606 (i.e., BBlKj) is a coded block].
Regarding claim 5, Sun and Fabrice disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Furthermore, Sun discloses: “wherein determining the first candidate dimension comprises setting the first candidate dimension to 0 as a result of i) determining that the height of the current picture is different than the height of the first reference picture and ii) determining that the picture boundary block has its top or bottom boundary colliding with a boundary of the current picture [see para: 0087; FIG. 6 illustrates details of MCP in accordance with one or more techniques of this disclosure. In the example of FIG. 6, Pi is a padded pixel in a padded block 600, denoted PBlkj, while the block width of padded block 600 is MX and height of padded block 600 is MY. A video coder (e.g., video encoder 20 or video decoder 30) derives a padding motion vector 602, denoted PMVj, which points to a reference padded block 604, denoted RPBlkj. RPBlkj contains a reference padded pixel, Ri. To derive PMVj, the video coder can use motion vectors in a boundary block 606, denoted BBlkj. A block width of boundary block 606 is denoted Nx and a block height of boundary block 606 is denoted NY. After obtaining Ri, the video coder uses Ri to derive a padded value for Pi. That is, for each value i from 0 to j−1, where j is the number of samples in padded block 600, the video coder may use Ri to derive a padded value for Pi. In some examples, boundary block 606 (i.e., BBlKj) is a coded block].
Regarding claim 6, Sun and Fabrice disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Sun does not explicitly disclose: “wherein determining the first candidate dimension comprises setting the first candidate dimension to a value derived using the first distance, a dimension of the current picture and a dimension of the first reference picture”.
However, Fabrice, from the same or similar field of endeavor teaches: “wherein determining the first candidate dimension comprises setting the first candidate dimension to a value derived using the first distance, a dimension of the current picture and a dimension of the first reference picture [see page: 1; That is, in ECM-5.0, pictures are extended by an area surrounding the picture with a size of (maxCUwidth + 16) in each direction of the picture boundary. The pixel in the extended area is derived by repetitive boundary padding. When a reference block used for uni-prediction locates partially or completely out of the picture boundary (OOB), the repetitive padded pixel is used for motion compensation (MC)].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the video compression techniques disclosed by Sun with the teachings of Fabrice as set forth above, in order to extend the picture area: values are set for the distance and for the dimensions of the current and reference pictures so that the width of the needed extension area can be determined [Fabrice, see page 1].
Claims 7–8 (Cancelled).
Regarding claim 9, Sun and Fabrice disclose all the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Furthermore, Sun discloses: “wherein selecting the candidate dimension from a set of two or more candidate dimensions comprises:
comparing the first candidate dimension to the second candidate dimension; and
i) as a result of determining that the first candidate dimension is larger than the second candidate dimension, selecting the first candidate dimension; ii) as a result of determining that the second candidate dimension is larger than the first candidate dimension, selecting the second candidate dimension; or iii) as a result of determining that the first candidate dimension is equal to the second candidate dimension, selecting either the first candidate dimension or the second candidate dimension [see para: 0092; If the derived PMVj is a bi-prediction motion vector (i.e., the derived PMVj contains two motion vectors pointing to two positions), the video coder may use the vector pointing to the position which is inside the boundary and furthest to the boundary. Alternatively, the video coder may first scale the two motion vectors to the same reference picture, if needed, and the select one of them. And see para: 0104; for comparing data].
Regarding claim 10, claim 10 is rejected under the same art and evidentiary limitations as determined for the method of claim 1.
Regarding claim 11, claim 11 is rejected under the same art and evidentiary limitations as determined for the method of claim 2.
Regarding claims 12 and 33, claims 12 and 33 are rejected under the same art and evidentiary limitations as determined for the method of claim 3.
Regarding claims 13 and 35, claims 13 and 35 are rejected under the same art and evidentiary limitations as determined for the method of claim 4.
Regarding claim 14, claim 14 is rejected under the same art and evidentiary limitations as determined for the method of claim 5.
Regarding claim 15, claim 15 is rejected under the same art and evidentiary limitations as determined for the method of claim 6.
Claims 16–29 (Cancelled).
Regarding claims 30–31, claims 30–31 are rejected under the same art and evidentiary limitations as determined for the method of claim 1, but directed to a non-transitory computer-readable medium.
Regarding claims 32 and 34, claims 32 and 34 are rejected under the same art and evidentiary limitations as determined for the method of claim 1, but directed to an apparatus executing instructions stored on a non-transitory computer-readable medium.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20210029353 A1
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Masum Billah, whose telephone number is (571) 270-0701. The examiner can normally be reached Monday through Friday, 9 AM to 5 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie J. Atala can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MASUM BILLAH/Primary Patent Examiner, Art Unit 2486