DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments, filed 03/16/2026, have been entered and made of record. Claims 1-13 and 15-21 are pending. Claim 14 was previously cancelled. In view of applicant's amendment to claim 15, the rejection of claim 15 under 35 U.S.C. § 101 is hereby withdrawn.
Response to Arguments
Applicant's arguments with respect to claims 1-13 and 15-21 have been considered but are moot because of the new ground of rejection set forth below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-13 and 15-21 are rejected under 35 U.S.C. 103 as being unpatentable over LEE et al. (US PG PUB 2021/0297658, hereinafter referred to as Lee) in view of Zhang et al. (US Pat. No. 11,653,002, hereinafter referred to as Zhang).
Regarding claim 1, Lee discloses a method, comprising:
extracting motion information for a video block (see paragraph 0129, the motion prediction unit may retrieve a region that best matches with an input block from a reference image when performing motion prediction, and deduce a motion vector by using the retrieved region; see paragraph 0069, video or moving picture);
obtaining motion compensated reference samples from the motion information for the video block (see paragraph 0130, the motion compensation unit generates a prediction block by performing motion compensation for the current block using a motion vector; see paragraph 0101, reference picture may mean a reference picture which is referred to by a specific block for the purposes of inter prediction or motion compensation of the specific block, or the reference picture may be a picture including a reference block referred to by a current block for inter prediction or motion compensation);
determining an intra prediction for the video block (see paragraph 0129, when encoding/decoding of the reference frame is performed, it may be stored in the reference picture buffer; see paragraph 0128, the intra-prediction mode uses a sample of a block that has been already encoded/decoded; see also figure 1 and paragraphs 0094-0097); and
encoding at least a portion of the video block using the intra prediction (see paragraph 0005 encoding using intra prediction; see also paragraphs 0019 and 0022).
Claim 1 differs from Lee in that the claim requires that the determining of the intra prediction be done based on the motion compensated reference samples derived using the extracted motion information.
In the same field of endeavor, Zhang discloses determining an intra prediction for the video block based on the motion compensated samples derived using the extracted motion information (see col. 2 lines 22-36, sub-block intra copy is coded based on reference samples from the video region and performing the conversion, wherein the conversion includes determining an initialization motion vector (initMV) for a given sub-block, identifying a reference block from the initMV, and deriving motion vector information for the given sub-block using MV information for the reference block; see also col. 32 lines 36-60).
Therefore, in light of the teaching in Zhang, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lee by adding the feature of determining the intra prediction based on the motion compensated reference samples derived using the extracted motion information, in order to remove redundancy between repeating patterns and improve coding for screen content.
Regarding claim 2, the limitations of claim 2 can be found in claim 1 above. Therefore, claim 2 is analyzed and rejected for the same reasons as discussed in claim 1 above. See also paragraph 0674 of Lee.
Regarding claim 3, the limitations of claim 3 can be found in claim 1 above. Therefore, claim 3 is analyzed and rejected for the same reasons as discussed in claim 1 above. It is also noted that Lee discloses decoding at least a portion of the video block using the intra prediction (see paragraph 0005, decoding using intra prediction; see also paragraphs 0008 and 0011-0012).
Regarding claim 4, the limitations of claim 4 can be found in claim 3 above. Therefore, claim 4 is analyzed and rejected for the same reasons as discussed in claim 3 above. See also paragraph 0674 of Lee.
Regarding claim 5, Lee discloses that the extracted motion information comprises a motion vector (see paragraphs 0015, 0024 and 0102), and Zhang discloses the motion vector corresponding to a reference frame used to obtain motion compensated reference samples for generating the intra prediction (see rejection of claim 1 above).
Regarding claim 6, Lee discloses that said motion compensated reference samples are obtained from a reference frame (see paragraphs 0097, 0101 and 0147), and Zhang discloses using extracted motion information to identify reference samples used to determine the intra prediction for the video block (see col. 2 lines 22-36). The motivation to combine the references is discussed in claim 1 above.
Regarding claim 7, Lee discloses that said reference frame is a frame collocated with the video block in a temporal domain (see paragraphs 0023, 0088, 0109 and 0220).
Regarding claim 8, Lee discloses that said motion information is determined by a motion model that represents motion associated with the video block and is used to obtain the motion compensated reference samples for intra prediction (see paragraphs 0128-0130, 0223, 0611 and 0635). See also Zhang at col. 2 lines 45-54 and col. 32 lines 37-60, and the claim 1 rejection above.
Regarding claim 9, Lee discloses that said motion model is computed in an encoder and sent to a decoder as header information for use in obtaining the motion compensated reference samples used to generate the intra prediction (see paragraphs 0091, 0139 and 0223).
Regarding claim 10, Lee discloses that reference samples from a reference frame comprise samples surrounding a block in the reference frame that are motion compensated (see paragraphs 0128, 0139 and 0200), and Zhang discloses using the motion information to generate the intra prediction (see rejection of claim 1 above). The motivation to combine the references is discussed in claim 1 above.
Regarding claim 11, Lee discloses that a reference collocated frame to use, or an index to a reference frame to use, is signaled for obtaining the motion compensated reference samples used for intra prediction of the video block (see paragraphs 0204, 0206, 0224 and 0553; see also the claim 1 rejection above).
Regarding claim 12, Lee discloses a device comprising: an apparatus according to claim 4; and at least one of (i) an antenna configured to receive a signal, the signal including the video block, (ii) a band limiter configured to limit the received signal to a band of frequencies that includes the video block, and (iii) a display configured to display an output representative of the video block (see paragraphs 0002, 0127, 0139 and 0151).
Regarding claim 13, Lee discloses a non-transitory computer readable medium containing data content generated according to the method for playback using a processor (see paragraphs 0139 and 0674).
Regarding claim 15, Lee discloses a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 3 (see paragraph 0674).
Claims 16-21 are rejected for the same reasons as discussed in claims 5-10, respectively, above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HELEN SHIBRU whose telephone number is (571)272-7329. The examiner can normally be reached M-TR 8:00AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, THAI TRAN, can be reached at (571) 272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HELEN SHIBRU/Primary Examiner, Art Unit 2484 March 30, 2026