Prosecution Insights
Last updated: April 19, 2026
Application No. 19/039,633

SAMPLE-WISE EXTRAPOLATED INTRA PREDICTION

Non-Final OA: §102, §103

Filed: Jan 28, 2025
Examiner: ABOUZAHRA, MAHMOUD KAMAL
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: Tencent America LLC
OA Round: 1 (Non-Final)

Grant Probability: 57% (Moderate)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 62%

Examiner Intelligence

Career Allow Rate: 57% (16 granted / 28 resolved cases; -0.9% vs TC avg)
Interview Lift: +4.4% (minimal lift for resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline); 41 applications currently pending
Career History: 69 total applications across all art units
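
The headline figures above can be sanity-checked directly from the raw counts. A minimal sketch, assuming the tool rounds to whole percents and adds the interview lift straight onto the career allow rate (its exact formula is not published):

```python
# Minimal sketch: reproduce the headline examiner metrics from raw counts.
# Assumption: whole-percent rounding and a simple additive interview lift.

granted, resolved = 16, 28          # from "16 granted / 28 resolved"
interview_lift = 4.4                # percentage points, from "+4.4% Interview Lift"

allow_rate = 100 * granted / resolved          # 57.14...%
with_interview = allow_rate + interview_lift   # 61.5...%

print(f"Career allow rate: {allow_rate:.0f}%")      # -> 57%
print(f"With interview:    {with_interview:.0f}%")  # -> 62%
```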

Statute-Specific Performance

§101: 0.5% (-39.5% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§103: 74.2% (+34.2% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)
Tech Center average is an estimate. Based on career data from 28 resolved cases.
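
If each "vs TC avg" delta is read as the examiner's rate minus a Tech Center estimate, the four figures above all imply the same 40% baseline, which suggests a single shared estimate rather than per-statute averages. A minimal check, with that shared baseline treated as an inference rather than a published figure:

```python
# Sanity-check the "vs TC avg" deltas: delta = examiner rate - TC estimate
# implies the same 40.0% estimate for every statute shown (an inference).
examiner = {"101": 0.5, "102": 12.2, "103": 74.2, "112": 5.4}
delta    = {"101": -39.5, "102": -27.8, "103": 34.2, "112": -34.6}

for statute in examiner:
    implied_tc_avg = examiner[statute] - delta[statute]
    print(f"§{statute}: implied TC average = {implied_tc_avg:.1f}%")  # 40.0% each
```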

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/29/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Status of Claims

The following is a Non-Final Office Action in response to the correspondence filed on 01/28/2025. Claims 1-20 are considered in this Office Action. Claims 1-20 are currently pending.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 7-10, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jinhan Song (US 20130215960 A1) (hereinafter Song):

Regarding Claim 1, Song teaches a method of video decoding (apparatus and method for intra prediction encoding/decoding; [0001]) performed at a computing system having memory and one or more processors (every one of the components may be implemented by itself in hardware while the respective ones can be combined in part or as a whole selectively and implemented in a computer program having program modules for executing functions of the hardware equivalents; the computer program may be stored in computer readable media; [0070]), the method comprising: receiving a video bitstream comprising a plurality of blocks that includes a current block (at step S601, the entropy decoding unit 410 reads and reconstructs information about a target block to be decoded, with respect to an encoding signal bitstream input from the intra prediction encoding apparatus 200; Fig. 6; [0064]); selecting a selected extrapolation filter from a set of extrapolation filters (at step S603, the filter selecting unit 430 provides a plurality of extrapolation prediction filter candidates for filtering pixels adjacent to the target block with respect to each intra prediction direction or prediction mode of the target block, receives filter information used for encoding the target block with respect to the encoding signal input from the intra prediction encoding apparatus 200, and selects a filter corresponding to the received filter information among the extrapolation prediction filter candidates; Fig. 6; [0065]); deriving a prediction sample for the current block using an extrapolated intra prediction with the selected extrapolation filter (at step S607, the extrapolation prediction unit 440 predicts an extrapolated pixel value of the target block, based on the filter selected by the filter selecting unit 430; Fig.
6; [0067]); and reconstructing the current block using the derived prediction sample (at step S609, the current block decoding unit 450 reconstructs the target block by adding an output value of the inverse quantization and inverse transform unit 420 to the extrapolated pixel value predicted by the extrapolation prediction unit 440; Fig. 6; [0068]). Regarding Claim 7, Song teaches the method of claim 1. Song further teaches wherein a set of filter coefficients for the selected extrapolation filter are trained in an offline manner (the extrapolation prediction filters may be generated through a training process and may be differentially provided according to weight values of pixel values of adjacent blocks with respect to pixel values of the current block; Fig. 2; [0039]). Regarding Claim 8, Song teaches the method of claim 1. Song further teaches wherein a set of filter coefficients for the selected extrapolation filter are trained using a set of neighboring samples of the current block (the extrapolation prediction filters may be generated through a training process and may be differentially provided according to weight values of pixel values of adjacent blocks with respect to pixel values of the current block; Fig. 2; [0039]). Regarding Claim 9, Song teaches the method of claim 1. Song further teaches wherein the set of extrapolation filters includes at least one fixed filter (the filter updating unit 460 can design a plurality of extrapolation prediction filter candidates of the current target block to be decoded, based on the filter used in the previously decoded block; Fig. 4; [0054]) and at least one adaptive filter (the filter updating unit 460 can design a plurality of extrapolation prediction filter candidates of the target block, based on information about a mode of a target frame to be decoded, a pixel value of the target block, and pixel values of the pixels adjacent to the target block; Fig. 4; [0055]). Regarding Claim 10, Song teaches the method of claim 1. Song further teaches further comprising parsing an indicator from the video bitstream, wherein the indicator indicates which filter to use from the set of extrapolation filters and wherein the selected extrapolation filter is selected according to the indicator (the filter selecting unit 430 receives the filter information about the intra prediction direction of the target block to be decoded and the extrapolation prediction filter used therein, from the bitstream received from the intra prediction encoding apparatus 200, and selects a filter corresponding to the received filter information among the extrapolation prediction filter candidates; Fig. 4; [0051]). 
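
To make the claim 1 mapping concrete, the following is a minimal sketch of the decode flow the examiner reads onto Song: parse a filter choice, extrapolate a prediction from neighboring reference samples, add the residual. The function name, the 1-D filter model, and the toy numbers are illustrative assumptions, not code from the application or the reference.

```python
# Illustrative sketch of the claim 1 decode flow as the Office Action maps it
# onto Song. All names and the 1-D filter model are hypothetical.
from typing import Sequence

def decode_block(filter_index: int,
                 filter_set: Sequence[Sequence[float]],
                 reference_samples: Sequence[float],
                 residual: Sequence[float]) -> list[float]:
    # "Selecting a selected extrapolation filter from a set of extrapolation
    # filters" (cf. Song's filter selecting unit 430): an index parsed from the
    # bitstream picks one candidate.
    coeffs = filter_set[filter_index]

    # "Deriving a prediction sample ... using an extrapolated intra prediction
    # with the selected extrapolation filter" (cf. Song's extrapolation
    # prediction unit 440): here, a single weighted sum of the reference row.
    prediction = sum(c * r for c, r in zip(coeffs, reference_samples))

    # "Reconstructing the current block using the derived prediction sample"
    # (cf. Song's current block decoding unit 450): prediction plus residual.
    return [prediction + e for e in residual]

# Toy usage: two candidate 3-tap filters, the bitstream says "use filter 1".
block = decode_block(1, [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25]],
                     reference_samples=[100, 102, 101], residual=[1, -2, 0, 3])
```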
Regarding Claim 19, Song teaches a method of video encoding (apparatus and method for intra prediction encoding/decoding; [0001]) performed at a computing system having memory and one or more processors (every one of the components may be implemented by itself in hardware while the respective ones can be combined in part or as a whole selectively and implemented in a computer program having program modules for executing functions of the hardware equivalents; the computer program may be stored in computer readable media; [0070]), the method comprising: receiving video data comprising a current picture that includes plurality of blocks, the plurality of blocks including a current block (optimal filter selecting unit 210 may predict an intra prediction direction of the current block from the pixels adjacent to the current block, and select an optimal filter among the extrapolation prediction filter candidates provided in the corresponding intra prediction direction; Fig. 5; [0057]); selecting a selected extrapolation filter from a set of extrapolation filters (optimal filter selecting unit 210 may predict an intra prediction direction of the current block from the pixels adjacent to the current block, and select an optimal filter among the extrapolation prediction filter candidates provided in the corresponding intra prediction direction; Fig. 5; [0057]); encoding the current block by applying the selected extrapolation filter (the residual signal generating unit 220 generates an extrapolation prediction value of the current block through the filter selected by the optimal filter selecting unit 210, and generates a residual signal by calculating a difference between the generated extrapolation prediction value of the current block and the pixel value of the current block; Fig. 5; [0058]; the transform and quantization unit 230 performs a block-based transform or an image-based transform and quantization on the residual signal generated by the residual signal generating unit 220; Fig. 5; [0059]); and signaling the encoded current block in a video bitstream (the entropy encoding unit 240 generates a bitstream of 0 and 1 by performing entropy encoding on the residual signal transformed and quantized by the transform and quantization unit 230; Fig. 5; [0060]). Regarding Claim 20, Song teaches a non-transitory computer-readable storage medium storing a video bitstream that is generated by a video encoding method ((non-transitory computer readable storage medium storing the video data [0070]); video data is encoded using the encoder 200 and is a bitstream [0064]), the video encoding method comprising: receiving video data comprising a current picture that includes plurality of blocks, the plurality of blocks including a current block (at step S601the entropy decoding unit 410 reads and reconstructs information about a target block to be decoded, with respect to an encoding signal bitstream input from the intra prediction encoding apparatus 200; Fig. 
6; [0064]) selecting a selected extrapolation filter from a set of extrapolation filters (at step S603, the filter selecting unit 430 provides a plurality of extrapolation prediction filter candidates for filtering pixels adjacent to the target block with respect to each intra prediction direction or prediction mode of the target block, receives filter information used for encoding the target block with respect to the encoding signal input from the intra prediction encoding apparatus 200, and selects a filter corresponding to the received filter information among the extrapolation prediction filter candidates; Fig. 6; [0065]); encoding the current block by applying the selected extrapolation filter (encoding the current block based on the selected filter [0018]); and signaling the encoded current block in a video bitstream (the encoded video data is signaled in a bitstream[0064]). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 2, 4, 6, and 11-18 are rejected under 35 U.S.C. 103 as being unpatentable over Jinhan Song (US 20130215960 A1) (hereinafter Song) in view of Nan Hu (US 20230010869 A1) (hereinafter Hu): Regarding claim 2, Song teaches the method of claim 1; however, does not explicitly teach wherein the selected extrapolation filter is selected from the set of extrapolation filters according to an output of a classifier classifying one or more reference samples. However, in an analogous art, Hu teaches wherein the selected extrapolation filter is selected from the set of extrapolation filters according to an output of a classifier classifying one or more reference samples (video encoder 200 and video decoder 300 applies a classifier to determine a first-class index for the reconstructed sample and selects a filter from a first set of filters based on the first class index; Fig. 9; [0105]). 
It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song to add the classifier features as disclosed by Hu to allow a video decoder to determine for a current block of video data, a classifier from a plurality of different classifiers, thereby producing better filter selection and improved decoded video quality and compression (Hu [0005]). Regarding claim 4, Song in view of Hu teach the method of claim 2. Hu further teaches wherein the classifier is a gradients-based classifier configured to derive at least one of directionality of the one or more reference samples and activity of the one or more reference samples (to determine the class index of a 4x4 block, a surrounding window with 8x8 luma samples is employed to derive direction and activity information. In this 8x8 luma samples window, four gradient values of every second sample are first calculated; [0082]). It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song to add the gradient features as disclosed by Hu to allow gradient values to be used to determine the class index of a sample (Hu [0082]), thereby producing better filter selection and improved decoded video quality and compression (Hu [0005]). Regarding claim 6, Song in view of Hu teach the method of claim 2. Song further teaches wherein the one or more reference samples comprise reconstructed samples (at step S601 the entropy decoding unit 410 reads and reconstructs information about a target block to be decoded, with respect to an encoding signal bitstream input from the intra prediction encoding apparatus 200; Fig. 6; [0064]; at step S611, the filter updating unit 460 can design a plurality of extrapolation prediction filter candidates of the current target block to be decoded, based on the filter used in the previously decoded block; Fig. 6; [0069]). Regarding claim 11, Song teaches the method of claim 1; however, does not explicitly teach selecting the set of extrapolation filters from a group of extrapolation filter sets. However, in an analogous art, Hu teaches selecting the set of extrapolation filters from a group of extrapolation filter sets (In VVC version 1, ALF coefficients are signaled in ALF adaptation parameter sets; one APS may contain one set of luma filters with up to 25 filters, up to 8 chroma filters and up to 8 cross-component ALF filters; [0102]). It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song to add the filter set features as disclosed by Hu to allow different luma filter sets to be used in a slice (Hu [0096]), thereby producing better filter selection and improved decoded video quality and compression (Hu [0005]). Regarding claim 12, Song in view of Hu teach the method of claim 11. Hu further teaches parsing an indicator from the video bitstream, wherein the indicator indicates which filter set to use from the group of extrapolation filter sets and wherein the set of extrapolation filters is selected from the group of extrapolation filter sets according to the indicator (when ALF is enabled, the filter set index of either a fixed filter set or a signaled luma filter set is signaled for a luma CTB; [0097]). 
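
The gradient-based classification cited from Hu for claims 2 and 4 (directionality and activity derived from local gradients, in the style of ALF block classification) can be illustrated with a simplified sketch. The window handling, thresholds, and class mapping below are assumptions for illustration, not Hu's exact procedure.

```python
import numpy as np

def classify_block(samples: np.ndarray) -> int:
    """Toy gradient-based classifier: derive directionality and activity for a
    block of reconstructed samples, then fold them into a class index that can
    select a filter. Simplified relative to the ALF scheme Hu describes."""
    gy, gx = np.gradient(samples.astype(float))     # vertical / horizontal gradients
    g_h, g_v = np.abs(gx).sum(), np.abs(gy).sum()   # horizontal / vertical energy
    activity = g_h + g_v

    direction = 0 if g_h > 2 * g_v else (1 if g_v > 2 * g_h else 2)
    activity_bin = int(min(activity // 64, 4))      # coarse activity quantization
    return direction * 5 + activity_bin             # class index in [0, 14]

# e.g. filter_set[classify_block(np.random.randint(0, 256, (8, 8)))]
```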
It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song to add the filter set features as disclosed by Hu to allow different luma filter sets to be used in a slice (Hu [0096]), thereby producing better filter selection and improved decoded video quality and compression (Hu [0005]). Regarding claim 13, Song in view of Hu teach the method of claim 11. Song further teaches comprising parsing an indicator from the video bitstream, wherein the indicator indicates which filter to use from the set of extrapolation filters (the filter selecting unit 430 receives the filter information about the intra prediction direction of the target block to be decoded and the extrapolation prediction filter used therein, from the bitstream received from the intra prediction encoding apparatus 200, and selects a filter corresponding to the received filter information among the extrapolation prediction filter candidates; Fig. 4; [0051]). Song does not explicitly teach the following limitations; however, in an analogous art, Hu teaches wherein the set of extrapolation filters are selected from the group of extrapolation filter sets according to an output of a classifier classifying one or more reference samples (for a signaled filter set, video encoder 200 may signal to video decoder 300, an index to indicate which classifier is used when this filter set is applied; when an APS has multiple filter sets, video encoder 200 may be configured to signal an index to indicate which classifier is used when the filter sets in the APS are applied; when a CTU or block is referencing a filter set, video decoder 300 may apply the classifier corresponding to the signaled classifier index of the filter set; Fig. 10; [0050]). It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song to add the filter features as disclosed by Hu to allow different luma filter sets to be used in a slice (Hu [0096]), thereby producing better filter selection and improved decoded video quality and compression (Hu [0005]). Regarding claim 14, Song teaches the method of claim 1; however, does not explicitly teach wherein the prediction sample is derived using one or more different extrapolation filter shapes. However, in an analogous art, Hu teaches wherein the prediction sample is derived using one or more different extrapolation filter shapes (the filter shapes of the ALF adopted in the joint exploration model software were 5x5, 7x7 and 9x9 diamond shapes; video encoder 200 may select and signal the filter shape at the picture level; [0075]). Regarding claim 15, Song teaches the method of claim 1; however, does not explicitly teach wherein the set of extrapolation filters comprises filters having different shapes. However, in an analogous art, Hu teaches wherein the set of extrapolation filters comprises filters having different shapes (the filter shapes of the ALF adopted in the joint exploration model software were 5x5, 7x7 and 9x9 diamond shapes; video encoder 200 may select and signal the filter shape at the picture level; [0075]). 
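
The diamond filter shapes cited from Hu for claims 14-17 (5x5, 7x7, and 9x9 diamonds, with the 7x7 luma and 5x5 chroma shapes retained in VVC) are easy to visualize. A small sketch that builds the support mask for a given odd diamond size; the helper is generic geometry, not code from either reference.

```python
import numpy as np

def diamond_mask(size: int) -> np.ndarray:
    """Support mask for a size x size diamond-shaped filter (size must be odd),
    e.g. the 5x5 and 7x7 diamonds used for ALF chroma/luma filtering in VVC."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return (np.abs(x) + np.abs(y) <= r).astype(int)

print(diamond_mask(5))
# [[0 0 1 0 0]
#  [0 1 1 1 0]
#  [1 1 1 1 1]
#  [0 1 1 1 0]
#  [0 0 1 0 0]]
```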
It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song to add the shape features as disclosed by Hu to allow different filter shapes to be applied (Hu [0075]), thereby producing better filter selection and improved decoded video quality and compression (Hu [0005]). Regarding claim 16, Song in view of Hu teach the method of claim 15. Hu further teaches wherein the set of extrapolation filters comprises a filter having a fixed filter shape (to obtain a better trade-off between coding efficiency and filter complexity, in VVC, only a 7x7 diamond shape 400 and a 5x5 diamond shape 402 are supported for luma and chroma components, respectively; [0075]). It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song to add the shape features as disclosed by Hu to allow different filter shapes to be applied (Hu [0075]), thereby producing better filter selection and improved decoded video quality and compression (Hu [0005]). Regarding claim 17, Song in view of Hu teach the method of claim 15. Hu further teaches wherein selecting the selected extrapolation filter comprises a selecting a filter shape and identifying the selected extrapolation filter as having the filter shape (the output samples of an ALF are stored in a decoded picture buffer or sent out as output pictures; the filter shapes of the ALF adopted in the joint exploration model software were 5x5, 7x7 and 9x9 diamond shapes; video encoder 200 may select and signal the filter shape at the picture level; [0075]). It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song to add the shape features as disclosed by Hu to allow different filter shapes to be applied (Hu [0075]), thereby producing better filter selection and improved decoded video quality and compression (Hu [0005]). Regarding claim 18, Song in view of Hu teach the method of claim 17. Hu further teaches wherein the filter shape is selected according to an output of a classifier classifying one or more reference samples (instead of being signaled in a bit stream, video encoder 200 and video decoder 300 may be configured to derive the classifier index implicitly based on some coding information, such as filter shape and whether fixed filters are applied in the first stage as the pre-filtering of the filtering; when fixed filters are applied as the first stage pre-filtering, video decoder 300 may be configured to apply a first classifier, for example, band-based classifier to the signaled filters in the second stage; hen fixed filters are not applied as the first stage pre-filtering, video decoder 300 may be configured to apply a second classifier to the signaled filters; [0153]). It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song to add the shape features as disclosed by Hu to allow different filter shapes to be applied (Hu [0075]), thereby producing better filter selection and improved decoded video quality and compression (Hu [0005]). Claim 3 is rejected under 35 U.S.C. 
103 as being unpatentable over Jinhan Song (US 20130215960 A1) (hereinafter Song) in view of Nan Hu (US 20230010869 A1) (hereinafter Hu) further in view of Chia-Ming Tsai (US 20250260828 A1) (hereinafter Tsai): Regarding claim 3, Song in view of Hu teach the method of claim 2; however, do not explicitly teach wherein the classifier uses a template-based intra prediction mode derivation to classify the one or more reference samples. However, in an analogous art, Tsai teaches wherein the classifier uses a template-based intra prediction mode derivation to classify the one or more reference samples (the intra-prediction mode of the current block 300 is implicitly derived using template-based intra-mode derivation; a neighborhood of pixels of the current block 300 is used as a template 310; for each candidate pattern, the reference samples in the reference region 320 above and to the left of the template 310 are used to generate the prediction samples of the template 310; the cost is calculated based on the difference between the predicted samples and the reconstructed samples of the template; the least costly intra prediction mode is selected for intra prediction of the CU; Fig. 3; [0029]). It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song in view of Hu to add the template features as disclosed by Tsai to allow candidate patterns to be evaluated based on a template, thereby allowing the least costly intra prediction mode to be selected (Tsai Fig. 3 and [0029]). Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Jinhan Song (US 20130215960 A1) (hereinafter Song) in view of Nan Hu (US 20230010869 A1) (hereinafter Hu) further in view of Franck Galpin (US 20220264085 A1) (hereinafter Galpin): Regarding claim 5, Song in view of Hu teach the method of claim 2; however, do not explicitly teach wherein the classifier is a matrix-based classifier. However, in an analogous art, Galpin teaches wherein the classifier is a matrix-based classifier (encoding or decoding method 10 comprises obtaining, for the block being encoded/decoded in intra prediction mode, intra predicted samples from a selected weight matrix and associated bias among a set of weight matrices and associated bias vectors and from a set of neighboring reference samples; Fig. 6; [0077]). It would have been obvious to the person having ordinary skill in the art before the effective filling date of the claimed invention to modify the video decoding method and system as disclosed by Song in view of Hu to add the matrix features as disclosed by Galpin to allow encoding or decoding a block using matrix based intra prediction (Galpin [0012]), thereby reducing the amount of data storage needed for the process and enabling resource limited devices to more effectively used for decoding (Galpin [0006]). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHMOUD KAMAL ABOUZAHRA whose telephone number is (703)756-1694. The examiner can normally be reached M-F 7:00 AM to 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
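
The template-based intra mode derivation cited from Tsai for claim 3 reduces to a small cost search: predict the template from its reference samples under each candidate mode and keep the cheapest one. A minimal sketch with a placeholder predictor and SAD cost, both assumptions rather than Tsai's implementation.

```python
from typing import Callable, Sequence

def derive_intra_mode(template: Sequence[float],
                      reference: Sequence[float],
                      candidate_modes: Sequence[int],
                      predict: Callable[[Sequence[float], int], Sequence[float]]) -> int:
    """Template-based intra mode derivation, roughly as the Office Action cites
    Tsai: predict the template from its reference samples under each candidate
    mode, score against the reconstructed template, keep the cheapest mode."""
    def cost(mode: int) -> float:
        predicted = predict(reference, mode)
        return sum(abs(p - t) for p, t in zip(predicted, template))  # SAD cost
    return min(candidate_modes, key=cost)
```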
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MAHMOUD KAMAL ABOUZAHRA/Examiner, Art Unit 2486 /JAMIE J ATALA/Supervisory Patent Examiner, Art Unit 2486

Prosecution Timeline

Jan 28, 2025
Application Filed
Feb 11, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12558845: System and Method for a Three-Dimensional Optical Switch Display Device (granted Feb 24, 2026; 2y 5m to grant)
Patent 12464148: COMPUTER-IMPLEMENTED MULTI-SCALE MACHINE LEARNING MODEL FOR THE ENHANCEMENT OF COMPRESSED VIDEO (granted Nov 04, 2025; 2y 5m to grant)
Patent 12422691: VEHICULAR CAMERA ASSEMBLY WITH LENS BARREL WELDED AT IMAGER HOUSING (granted Sep 23, 2025; 2y 5m to grant)
Patent 12387309: INSPECTION APPARATUS AND INSPECTION METHOD (granted Aug 12, 2025; 2y 5m to grant)
Patent 12389089: THERMAL SENSOR, THERMAL SENSOR ARRAY, ELECTRONIC APPARATUS INCLUDING THE THERMAL SENSOR, AND OPERATING METHOD OF THE THERMAL SENSOR (granted Aug 12, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 57%
With Interview: 62% (+4.4%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
