DETAILED ACTION
This Office action is in response to an application filed 2/7/2025, wherein claims 1-20 are pending and being examined. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 2/7/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
In regard to claim 20, the claim recites a “non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by an apparatus for video processing, wherein the method comprises…” Significantly, the claimed non-transitory computer readable medium is NOT implementing any encoding method; no instructions/steps are being executed. Instead, the claimed storage medium merely stores the data output from and/or generated by an encoding method. In other words, these claims are directed to a mere machine-readable medium storing data content (a bitstream generated by an encoding method).
Applicant seeks to patent the storage of a bitstream in the abstract. In other words, the claims seek to patent the content of the information (bitstream with encoded video content). Moreover, this stored bitstream does not impose any definitive physical organization on the data, as there is no functional relationship between the bitstream and the storage medium. In conclusion, claim 20 is directed to mere data content (bitstream generated by the recited encoding method) stored as a bitstream on a decoder-readable storage medium. Under MPEP 2111.05(III), such claims are directed to mere machine-readable media storing data content. Furthermore, the Examiner found and continues to find that there is no disclosed or claimed functional relationship between the stored data and the medium. Instead, the medium is merely a support or carrier for the data being stored. Therefore, the data stored and the way such data is generated should not be given patentable weight. See MPEP 2111.05 applying In re Lowry, 32 F.3d 1579, 1583-84, 32 USPQ2d 1031, 1035 (Fed. Cir. 1994) and In re Ngai, 367 F.3d 1336, 70 USPQ2d 1862 (Fed. Cir. 2004). As such, claim 20 is subject to a prior art rejection based on any non-transitory computer readable medium known before the earliest effective filing date of the present application. Therefore, claim 20 has been rejected based on prior art that discloses a method of generating and storing an encoded data stream (bitstream) as noted below.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 7, 8, and 10-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhang et al. (US 2018/0278942) (hereinafter Zhang).
In regard to claim 1, Zhang discloses a method of video processing [¶0034; techniques for encoding and decoding video data], comprising:
determining, for a conversion between a video unit of a video and a bitstream of the video [¶0052; generate an encoded representation of a picture (e.g., an encoded video bitstream)], whether the video unit is non-intra coded or intra coded [¶0084; prediction information for a video block of the current video… determine a prediction mode (e.g., intra- or inter-prediction)];
in accordance with a determination that the video unit is non-intra coded, deriving one or more intra prediction modes (IPMs) for the video unit [¶0099-¶0101; determine an intra-prediction mode for an inter-predicted current block from the intra-prediction mode of a collocated block… current block 152 in picture 150 is coded using inter-prediction… determine an intra-prediction mode for current block 152 by copying intra-prediction mode 156 from collocated block 162 in picture 160. ¶0108-¶0111];
in accordance with a determination that the video unit is intra coded, obtaining a prediction of the video unit without applying an IPM [¶0068; Intra-mode (I mode) may refer to any of several spatial based coding modes. ¶0090-¶0092; an intra-frame or intra-slice, video encoder 20 and video decoder 30 may only code a block as an intra-block (i.e., with a particular intra-prediction mode) based on samples within the same picture as the intra-block… Video encoder 20 and video decoder 30 may apply one of the intra-prediction modes shown in FIG. 5 to code an intra-block. Of course, other intra-prediction modes may be used. ¶0094; To code an intra-prediction mode for current block 146, intra-prediction modes of neighboring blocks may be used as prediction modes for the current block 146. That is, in some examples, video encoder 20 and video decoder 30 may use the intra-prediction modes of neighboring blocks to determine the intra-prediction mode for block 146. ¶0110, ¶0120]; and
performing the conversion based on the one or more IPMs or the obtained prediction [¶0128-¶0130; an inter-coded block may conduct intra-prediction with its derived intra-prediction mode. ¶0076, ¶0110].
Zhang discloses a method and system for video encoding and decoding. Current blocks are encoded or decoded according to various prediction modes. When it is determined that a block is inter coded, an intra prediction mode may be propagated from a nearby block and used when encoding or decoding the current block. An intra block may be coded by selecting a prediction mode from a list of candidate modes, wherein one or more candidate modes are not selected when performing the prediction. Thus, the prediction is “without applying an IPM,” as non-selected candidate intra modes are not applied to the current block during the intra prediction process. Zhang therefore anticipates the claim.
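For illustration only, the dispositive branching described above (derive an IPM for a non-intra unit; obtain a prediction for an intra unit without applying an IPM) can be sketched as follows. The function and parameter names, the fallback mode, and the block-copy stand-in are hypothetical simplifications, not Zhang's actual implementation or the claimed method.

```python
# Hypothetical sketch of the claim-1-style branching: names and the
# fallback/prediction mechanics are illustrative assumptions.

PLANAR = 0  # assumed fallback intra prediction mode index

def process_video_unit(unit, collocated_ipm, neighbor_samples):
    """Return (derived_ipms, prediction) per the intra/non-intra determination."""
    if not unit["is_intra"]:
        # Non-intra (e.g., inter-coded) unit: propagate an IPM from a
        # collocated/reference block rather than signaling one.
        ipm = collocated_ipm if collocated_ipm is not None else PLANAR
        return [ipm], None
    # Intra-coded unit: obtain a prediction without applying an IPM,
    # e.g., by copying matched reconstructed samples directly.
    prediction = list(neighbor_samples)  # stand-in for a block-copy predictor
    return [], prediction
```

The conversion (encoding or decoding) would then proceed with either the derived IPM list or the obtained prediction, matching the final "performing the conversion" step.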
In regard to claim 2, Zhang discloses the method of claim 1. Zhang further discloses, wherein in accordance with the determination that the video unit is non-intra coded, the video unit is coded with one of the following:
an inter mode [¶0067-¶0068, ¶0084],
an intra block copy (IBC) mode,
a palette coding (PLT) mode, or
another coding mode that does not belong to an intra mode.
In regard to claim 3, Zhang discloses the method of claim 1. Zhang further discloses, wherein in accordance with the determination that the video unit is intra coded, the video unit is coded with one of the following:
a matrix weighted intra prediction (MIP) mode,
an intra template matching prediction (intraTMP) mode, or
another intra prediction mode which obtains the prediction without using the IPM [¶0068, ¶0090-¶0094, ¶0110, ¶0120].
In regard to claim 4, Zhang discloses the method of claim 1. Zhang further discloses,
wherein a displacement vector (DV) is used in the deriving of the one or more IPMs [¶0105-¶0106, Fig.10].
In regard to claim 5, Zhang discloses the method of claim 4. Zhang further discloses,
wherein the DV comprises at least one of a motion vector (MV) used in an inter prediction or a block vector (BV) used in IBC [¶0105-¶0106, Fig.10], and/or
wherein the DV is derived from adjacent or non-adjacent neighboring blocks, or the DV is derived from adjacent or non-adjacent neighboring samples [¶0115-¶0116, Fig.13], and/or
wherein the DV is derived during coding of the video unit, and/or
wherein the one or more IPMs are set equal to the IPM of a reference video unit indicated by the DV when the reference video unit is intra coded, and/or
wherein the one or more IPMs are set equal to a propagated IPM of a reference video unit indicated by the DV when the video unit is non-intra coded, and/or
wherein the one or more IPMs are set equal to the IPM of a reference video unit at a position in the reference video unit.
In regard to claim 7, Zhang discloses the method of claim 1. Zhang further discloses,
wherein one or more neighboring samples are used in the deriving of the one or more intra prediction modes [¶0105-¶0106, ¶0115-¶0116, Fig.10, Fig.13].
In regard to claim 8, Zhang discloses the method of claim 7. Zhang further discloses,
wherein the one or more neighboring samples are adjacent and/or non-adjacent [¶0105-¶0106, ¶0115-¶0116, Fig.10, Fig.13], and/or
wherein gradients of the one or more neighboring samples are calculated and used to derive the one or more intra prediction modes, and/or
wherein a template matching based approach is used to derive the one or more intra prediction modes.
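The gradient-based alternative recited above (gradients of neighboring samples used to derive an IPM, in the manner of decoder-side intra mode derivation) can be sketched as follows. This is a hypothetical, simplified illustration: the gradient operator, mode count, and angle-to-mode mapping are assumptions, not the claimed or cited implementation.

```python
import math

# Hypothetical sketch of gradient-based intra mode derivation: simple
# central-difference gradients over a template of reconstructed neighboring
# samples cast magnitude-weighted votes for an angular mode.

def derive_ipm_from_gradients(template, num_angular_modes=65):
    """template: 2-D list of reconstructed neighboring samples."""
    histogram = [0.0] * num_angular_modes
    rows, cols = len(template), len(template[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = template[y][x + 1] - template[y][x - 1]  # horizontal gradient
            gy = template[y + 1][x] - template[y - 1][x]  # vertical gradient
            if gx == 0 and gy == 0:
                continue  # flat sample contributes no orientation vote
            angle = math.atan2(gy, gx) % math.pi          # orientation in [0, pi)
            mode = int(angle / math.pi * (num_angular_modes - 1))
            histogram[mode] += math.hypot(gx, gy)          # magnitude-weighted vote
    return max(range(num_angular_modes), key=lambda m: histogram[m])
```

A template varying only along x yields the mode for orientation 0; one varying only along y yields the mode near the midpoint of the angular range.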
In regard to claim 10, Zhang discloses the method of claim 1. Zhang further discloses,
wherein the derived one or more IPMs are used for a subsequent video unit of the video unit [Fig.16, ¶0066, ¶0070, ¶0076].
In regard to claim 11, Zhang discloses the method of claim 10. Zhang further discloses,
wherein the one or more IPMs are used in a most probable mode (MPM) list construction for the subsequent video unit [¶0130-¶0131], and/or
wherein the derived one or more IPMs are used in a derivation of the IPM for the subsequent video unit, and
wherein a derived IPM of a first block is used to derive an IPM of a second block, wherein a DV of the second block points to the first block [Fig.10 through Fig.14].
In regard to claim 12, Zhang discloses the method of claim 11. Zhang further discloses,
wherein an order of the one or more IPMs added into the MPM list depends on coded information [¶0131-¶0137].
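The MPM-list behavior recited in claims 11 and 12 (derived IPMs inserted into a most probable mode list, with the insertion order depending on coded information) can be sketched as follows. This is a hypothetical illustration; the ordering criterion, list size, and planar-first convention are assumptions, not the cited disclosure.

```python
# Hypothetical sketch: build an MPM list from neighbors' IPMs, ordering
# entries by coded information (here, whether the neighbor was itself
# intra coded, so propagated modes rank behind genuinely intra modes).

def build_mpm_list(neighbor_ipms, size=6, planar=0):
    """neighbor_ipms: list of (ipm, was_intra_coded) from neighboring units."""
    ordered = sorted(neighbor_ipms, key=lambda e: not e[1])
    mpm = []
    for ipm, _ in ordered:
        if ipm not in mpm:  # keep each candidate mode once
            mpm.append(ipm)
    if planar not in mpm:
        mpm.insert(0, planar)  # assumed convention: planar heads the list
    return mpm[:size]
```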
In regard to claim 13, Zhang discloses the method of claim 1. Zhang further discloses,
wherein whether to and/or how to derive the one or more IPMs of the video unit is conducted individually for two color components [¶0062, ¶0075, ¶0126].
In regard to claim 14, Zhang discloses the method of claim 1. Zhang further discloses,
wherein the one or more IPMs for the video unit are used to generate a final or medium prediction value of the video unit [¶0035, ¶0128, ¶0144, ¶0153], and/or
wherein the one or more IPMs are used for a chroma intra prediction [¶0075, ¶0126], and/or
wherein the one or more IPMs are stored in a buffer and used in an IPM propagation for a subsequent video unit of the video unit [¶0106-¶0107], and/or
wherein the one or more IPMs are used for a determination of whether to and/or how to apply a transform for the video unit.
In regard to claim 15, Zhang discloses the method of claim 14. Zhang further discloses,
wherein the one or more IPMs for the video unit are used to generate an intra-prediction value for the video unit [¶0099-¶0101, ¶0108-¶0111], and/or
wherein the final prediction value is set as a weighted sum of the generated intra-prediction value and a second prediction value [¶0035, ¶0128, ¶0144, ¶0153].
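The weighted-sum limitation above (final prediction set as a weighted sum of the generated intra-prediction value and a second prediction value) amounts to a per-sample blend. A minimal sketch, with assumed integer weights and rounding (not the cited disclosure's actual weights):

```python
# Hypothetical per-sample weighted blend of an intra prediction and a
# second (e.g., inter) prediction, with integer rounding as codecs use.

def blend_predictions(intra_pred, second_pred, w_intra=3, w_second=1):
    total = w_intra + w_second  # divide by the weight sum; +total//2 rounds
    return [(w_intra * a + w_second * b + total // 2) // total
            for a, b in zip(intra_pred, second_pred)]
```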
In regard to claim 16, Zhang discloses the method of claim 14. Zhang further discloses,
wherein the one or more IPMs are used in a construction of a chroma intra prediction mode list [¶0062, ¶0075, ¶0124-¶0126], and/or
wherein the one or more IPMs are used as chroma direct copy modes, and/or
wherein the one or more IPMs are used for a derivation of IPM for a chroma video unit [¶0062, ¶0075, ¶0126], and/or
wherein the one or more IPMs are stored at subblock level [¶0116-¶0119], and/or
wherein when accessing one or more propagated IPMs in the buffer, a position is aligned with a subblock grid, and/or
wherein when multiple transform sets are used for the video unit, the one or more IPMs are used to select a transform core or a transform set, and/or
wherein when a secondary transform is used for the video unit, the one or more IPMs are used to select an index of the secondary transform which indicates which transformation matrix is used.
In regard to claim 17, Zhang discloses the method of claim 1. Zhang further discloses,
wherein the conversion includes encoding the video unit into the bitstream [¶0068-¶0070], or
wherein the conversion includes decoding the video unit from the bitstream [¶0082-¶0083].
In regard to claim 18, this claim is drawn to an apparatus for video processing comprising a processor and a memory storing instructions, wherein, when the instructions are executed by the processor, the apparatus performs the method of claim 1. Therefore claim 18 is rejected for the same reasons as claim 1. Additionally, Zhang discloses implementing the system via a processor coupled with memory in at least ¶0011-¶0013.
In regard to claim 19, this claim is drawn to a non-transitory computer-readable storage medium with instructions that when executed perform the method of claim 1. Therefore claim 19 is rejected for the same reasons as claim 1. Additionally, Zhang discloses implementing the system via a processor coupled with memory in at least ¶0011-¶0013.
In regard to claim 20, Zhang discloses a non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method [¶0011-¶0013, ¶0044] performed by an apparatus for video processing, wherein the method comprises: determining whether a video unit of the video is non-intra coded or intra coded; in accordance with a determination that the video unit is non-intra coded, derive one or more intra prediction modes (IPMs) for the video unit; in accordance with a determination that the video unit is intra coded, obtain a prediction of the video unit without applying an IPM; and generating the bitstream based on the one or more IPMs or the obtained prediction [see the rejection of claim 1].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 6 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 2018/0278942) in view of Li et al. (US 2023/0113104) (hereinafter Li).
In regard to claim 6, Zhang discloses the method of claim 5. Zhang does not explicitly disclose, wherein the DV is used to obtain a prediction in intra prediction with a template matching based approach, and/or
wherein the template matching based approach is an intra TMP. However, Li discloses,
wherein the DV is used to obtain a prediction in intra prediction with a template matching based approach [¶0119-¶0122; propagated intra prediction mode can be derived using the motion vector and reference picture… Intra template matching prediction (also referred to as Intra TMP) is a special intra prediction mode that copies a best prediction block from the reconstructed part of the current frame, whose L-shaped template matches the current template], and/or
wherein the template matching based approach is an intra TMP [¶0119-¶0122].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the method disclosed by Zhang with the template matching disclosed by Li in order to improve encoding and decoding efficiency with improved intra mode propagation [Li ¶0017-¶0020, ¶0135].
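Li's intra template matching, as characterized above, copies a best prediction block from the reconstructed part of the current frame whose template matches the current template. A hypothetical one-dimensional sketch of that search (the SAD cost, 1-D layout, and index conventions are illustrative assumptions, not Li's implementation):

```python
# Hypothetical 1-D intra template matching: find the candidate whose
# template best matches the current template (minimum SAD) within the
# already-reconstructed samples, then copy the adjacent block.

def intra_tmp_search(reconstructed, cur_pos, tmpl_len, block_len):
    """reconstructed: 1-D samples; cur_pos: start of the block to predict."""
    cur_template = reconstructed[cur_pos - tmpl_len:cur_pos]
    best_cost, best_start = None, None
    for start in range(tmpl_len, cur_pos - block_len + 1):
        cand_template = reconstructed[start - tmpl_len:start]
        cost = sum(abs(a - b) for a, b in zip(cur_template, cand_template))
        if best_cost is None or cost < best_cost:
            best_cost, best_start = cost, start
    return reconstructed[best_start:best_start + block_len]  # copied prediction
```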
In regard to claim 9, Zhang discloses the method of claim 8. Zhang does not explicitly disclose, wherein a decoder-side intra mode derivation (DIMD) approach is used, and/or wherein a template-based intra mode derivation (TIMD) is used. However, Li discloses,
wherein a decoder-side intra mode derivation (DIMD) approach is used [¶0118-¶0122; a decode side intra mode derivation (DIMD) is applied… propagated intra prediction mode can be derived using the motion vector and reference picture], and/or
wherein a template-based intra mode derivation (TIMD) is used [¶0118-¶0122; an intra coded CU where a template-based intra mode derivation (TIMD) is applied… propagated intra prediction mode can be derived using the motion vector and reference picture].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the method disclosed by Zhang with the TIMD and DIMD modes disclosed by Li in order to improve encoding and decoding efficiency with improved intra mode propagation [Li ¶0017-¶0020, ¶0135].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REBECCA A VOLENTINE whose telephone number is (571)270-7261. The examiner can normally be reached Monday-Friday 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joe Ustaris can be reached at (571)272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/REBECCA A VOLENTINE/Primary Examiner, Art Unit 2483 February 5, 2026