DETAILED ACTION
1. This is a first Office action in response to application No. 18/923,192, filed on October 22, 2024, in which claims 2-22 are presented for examination. The Applicant filed a preliminary amendment on January 27, 2025, canceling claim 1 and adding claims 2-22.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
3. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
4. Claims 2, 4, 8, 10, 14, 16 and 20-22 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sun et al. (US Patent Application Publication No. 2019/0082193).
Regarding claim 2, Sun discloses a video decoder for decoding a picture from a data stream, the video decoder comprising at least one processor, the at least one processor (See Sun [0008]) configured to: for an inter-predicted block of the picture, identify, using a motion vector (See Sun [0004], [0027]), a first referenced area that is located beyond a border of a reference picture (See Sun [0008] “outside a picture boundary”), and a second reference area that is located within the border of the reference picture (See Sun [0027] “within the picture boundary”); select a padding mode for padding the first referenced area, based on an indication in the data stream; generate padding for the first referenced area, using the selected padding mode (See Sun [0007]-[0008] “use reference pixels in the first picture to pad pixels outside”); and predict the inter-predicted block of the picture using the padded reference area and the second reference area (See Sun [0086]).
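For illustration only (not part of the record), the out-of-boundary reference handling recited in claim 2 and described in Sun can be sketched as follows. The helper name and signature are hypothetical, and sample clamping (repetitive padding) is shown as just one possible padding mode:

```python
import numpy as np

def fetch_reference_block(ref_picture, mv_x, mv_y, x, y, w, h):
    """Fetch a w x h prediction block at (x + mv_x, y + mv_y).

    Sample positions falling outside the reference picture (the "first
    referenced area") are padded; positions inside (the "second
    reference area") are read directly.  Repetitive padding, i.e.
    clamping to the nearest border sample, is used here as one
    possible padding mode.
    """
    height, width = ref_picture.shape
    block = np.empty((h, w), dtype=ref_picture.dtype)
    for dy in range(h):
        for dx in range(w):
            rx = x + mv_x + dx
            ry = y + mv_y + dy
            # Clamp out-of-boundary coordinates to the picture border.
            rx = min(max(rx, 0), width - 1)
            ry = min(max(ry, 0), height - 1)
            block[dy, dx] = ref_picture[ry, rx]
    return block
```

In this sketch a motion vector pointing partly off the left edge yields a block whose out-of-boundary columns replicate the border column, while the in-boundary columns are taken from the reference picture unchanged.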
As per claim 8, Sun discloses a method for decoding a picture from a data stream, the method comprising: for an inter-predicted block of the picture, identifying, using a motion vector (See Sun [0004], [0027]), a first referenced area that is located beyond a border of a reference picture (See Sun [0008] “outside a picture boundary”) and a second reference area that is located within the border of the reference picture (See Sun [0027] “within the picture boundary”); selecting a padding mode for padding the first referenced area, based on an indication in the data stream; generating padding for the first referenced area, using the selected padding mode (See Sun [0007]-[0008] “use reference pixels in the first picture to pad pixels outside”); and predicting the inter-predicted block of the picture using the padded reference area and the second reference area (See Sun [0086]).
As per claim 14, Sun discloses a non-transitory computer readable medium containing instructions that when executed cause at least one processor (See Sun [0038] “non-transitory storage media” and [0008]) to: for an inter-predicted block of a picture, identify, using a motion vector (See Sun [0004], [0027]), a first referenced area that is located beyond a border of a reference picture (See Sun [0008] “outside a picture boundary”) and a second reference area that is located within the border of the reference picture (See Sun [0027] “within the picture boundary”); select a padding mode for padding the first referenced area, based on an indication in a data stream containing the picture; generate padding for the first referenced area, using the selected padding mode (See Sun [0007]-[0008] “use reference pixels in the first picture to pad pixels outside”); and predict the inter-predicted block of the picture using the padded reference area and the second reference area (See Sun [0086]).
As per claim 20, Sun discloses a video encoder for encoding a picture from a data stream, the video encoder comprising at least one processor (See Sun’s Abstract, [0008]), the at least one processor (See Sun [0008]) configured to: encode an indication of a padding mode in the data stream; for an inter-predicted block of the picture (See Sun [0084]), identify, using a motion vector, a first referenced area that is located beyond a border of a reference picture and a second reference area that is located within the border of the reference picture (See Sun [0080] “In the case where the motion vector points to a block outside the frame boundary”); generate padding for the first referenced area, using the padding mode (See Sun [0007]-[0008] “use reference pixels in the first picture to pad pixels outside”); and predict the inter-predicted block of the picture using the padded reference area and the second reference area (See Sun [0086]).
As per claim 21, Sun discloses a method for encoding a picture from a data stream (See Sun’s Abstract), the method comprising: encoding an indication of a padding mode in the data stream; for an inter-predicted block of the picture (See Sun [0084]), identifying, using a motion vector, a first referenced area that is located beyond a border of a reference picture and a second reference area that is located within the border of the reference picture (See Sun [0080] “In the case where the motion vector points to a block outside the frame boundary”); generating padding for the first referenced area, using the padding mode (See Sun [0007]-[0008] “use reference pixels in the first picture to pad pixels outside”); and predicting the inter-predicted block of the picture using the padded reference area and the second reference area (See Sun [0086]).
As per claim 22, Sun discloses a non-transitory computer readable medium containing instructions that when executed cause at least one processor (See Sun [0038] “non-transitory storage media” and [0008]) to: encode an indication of a padding mode in a data stream (See Sun [0084]); for an inter-predicted block of a picture in the data stream, identify, using a motion vector, a first referenced area that is located beyond a border of a reference picture and a second reference area that is located within the border of the reference picture (See Sun [0080] “In the case where the motion vector points to a block outside the frame boundary”); generate padding for the first referenced area, using the padding mode (See Sun [0007]-[0008] “use reference pixels in the first picture to pad pixels outside”); and predict the inter-predicted block of the picture using the padded reference area and the second reference area (See Sun [0086]).
As per claims 4, 10 and 16, most of the limitations of these claims have been noted in the above rejections of claims 2, 8 and 14. In addition, Sun further teaches decoding a signal of the data stream and determining the indication based on the decoded signal (See Sun [0045] and [0062]).
Claim Rejections - 35 USC § 103
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claims 3, 5-6, 9, 11-12, 15, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (US Patent Application Publication No. 2019/0082193) in view of He et al. (US Patent Application Publication No. 2019/0215532).
Regarding claims 3, 9 and 15, most of the limitations of these claims have been noted in the above rejections of claims 2, 8 and 14.
It is noted that Sun is silent about wherein the padding mode is either an omnidirectional mode or a perpendicular mode.
However, He teaches wherein the padding mode is either an omnidirectional mode or a perpendicular mode (See He [0155] “The repetitive padding may be employed as an example in the hybrid padding. One or more other padding methods (e.g., the face-based padding in FIG. 14, the perpendicular-extrapolation based and/or the diagonal-extrapolation based padding in FIG. 15, and/or the like) may be applied in the hybrid padding to pad the samples that may be outside the valid region of padded samples for the unicube projection format.”).
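For illustration only (not part of the record), the mode-selection concept from He can be sketched as follows. The function names and the string mode identifiers are hypothetical; perpendicular-extrapolation padding is approximated by replicating border samples perpendicular to the boundary, and the geometry-based (omnidirectional) mode is left unimplemented since it depends on the projection format:

```python
import numpy as np

def pad_left_perpendicular(picture, pad):
    """Extend the left border by replicating each row's first sample
    perpendicular to the boundary (perpendicular-extrapolation style)."""
    left = np.repeat(picture[:, :1], pad, axis=1)
    return np.concatenate([left, picture], axis=1)

def pad_left(picture, pad, mode):
    # 'mode' stands in for the padding-mode indication decoded
    # from the data stream.
    if mode == "perpendicular":
        return pad_left_perpendicular(picture, pad)
    if mode == "omnidirectional":
        # Geometry padding would derive each sample from the
        # geometry structure of the projection (e.g., a spherically
        # neighboring face); omitted in this sketch.
        raise NotImplementedError("geometry padding not sketched here")
    raise ValueError(f"unknown padding mode: {mode}")
```

The point of the sketch is only that the decoder dispatches on a signaled mode, which is the combination the rejection attributes to Sun (signaled indication) in view of He (mode alternatives).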
Therefore, it is considered obvious that one skilled in the art, before the effective filing date of the claimed invention, would recognize the advantage of modifying Sun’s padding mode to incorporate He’s teachings wherein the padding mode is either an omnidirectional mode or a perpendicular mode. The motivation for performing such a modification in Sun is to apply geometry padding to derive the corresponding sample value based on any geometry structure represented in a video, as taught by He (See He [0155]).
As per claims 5, 11, and 17, most of the limitations of these claims have been noted in the above rejection of claims 4, 10 and 16.
It is noted that Sun is silent about wherein the signal is in at least one of a picture parameter set and a sequence parameter set.
However, He teaches wherein the signal is in at least one of a picture parameter set and a sequence parameter set (See He [0144]).
Therefore, it is considered obvious that one skilled in the art, before the effective filing date of the claimed invention, would recognize the advantage of modifying Sun to incorporate He’s teachings wherein the signal is in at least one of a picture parameter set and a sequence parameter set. The motivation for performing such a modification in Sun is to be able to pad any region of any size, as taught by He (See He [0158]).
As per claims 6, 12 and 18, most of the limitations of these claims have been noted in the above rejection of claims 2, 8 and 14.
It is noted that Sun is silent about wherein the border is an external picture border or an inner picture border that separates portions of different picture content.
However, He teaches wherein the border is an external picture border or an inner picture border that separates portions of different picture content (See He [0005]-[0006] and [0009]).
Therefore, it is considered obvious that one skilled in the art, before the effective filing date of the claimed invention, would recognize the advantage of modifying Sun to incorporate He’s teachings wherein the border is an external picture border or an inner picture border that separates portions of different picture content. The motivation for performing such a modification in Sun is to determine the proper location of the padding sample.
8. Claims 7, 13 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The claims are allowable over the prior art of record since the references, taken individually or in combination, fail to teach or suggest, in addition to the limitations of independent claims 2, 8 and 14, “divide a picture plane of a video that includes the picture and the reference picture into spatial segments in a manner static over at least two pictures of the video, wherein the spatial segments are independently coded with respect to entropy coding, wherein the border of the reference picture is a segment border of a predetermined spatial segment in the reference picture, within which the inter-predicted block is located.”
9. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
See the Notice of References Cited (PTO-892).
10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to GIMS S PHILIPPE whose telephone number is (571)272-7336. The examiner can normally be reached on Maxi Flex.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin Bruckart, can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GIMS S PHILIPPE/Primary Examiner, Art Unit 2424