Prosecution Insights
Last updated: April 19, 2026
Application No. 19/039,636

MPM CANDIDATE DERIVATION IMPROVEMENT BY USING INTRA TEMPLATE-MATCHING

Non-Final Office Action — §102 / §103

Filed: Jan 28, 2025
Examiner: PICON-FELICIANO, ANA J
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Tencent America LLC
OA Round: 1 (Non-Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 11m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 69% (294 granted / 428 resolved), +10.7% vs. Tech Center average (above average)
Interview Lift: strong, +21.8% on resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 31 applications currently pending
Career History: 459 total applications across all art units
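The headline figures above follow directly from the raw counts. As a quick consistency check (a sketch; variable names are ours, not from the dashboard):

```python
# Reproduce the examiner's headline statistics from the raw career
# counts reported above: 294 granted out of 428 resolved cases.
granted = 294
resolved = 428

allow_rate = granted / resolved                  # career allowance rate
print(f"Career allow rate: {allow_rate:.1%}")    # ~68.7%, displayed as 69%

# The dashboard reports +10.7% vs. the Tech Center average, which
# implies a TC-average allowance rate of roughly:
tc_avg = allow_rate - 0.107
print(f"Implied TC average: {tc_avg:.1%}")       # ~58.0%
```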

Statute-Specific Performance

§101: 4.3% (-35.7% vs. TC avg)
§103: 60.1% (+20.1% vs. TC avg)
§102: 12.7% (-27.3% vs. TC avg)
§112: 11.2% (-28.8% vs. TC avg)

Tech Center averages are estimates. Based on career data from 428 resolved cases.
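Each statute-specific delta above implies an estimated Tech Center baseline (the examiner's rate minus the reported delta). A sketch verifying the arithmetic, with figures copied from the table:

```python
# Recover the implied Tech Center baseline for each rejection statute:
# baseline = examiner's rate - reported delta. Figures from the table above.
rates  = {"101": 4.3, "102": 12.7, "103": 60.1, "112": 11.2}
deltas = {"101": -35.7, "102": -27.3, "103": 20.1, "112": -28.8}

baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baselines)  # every statute implies the same ~40.0% baseline estimate
```

Notably, all four rows resolve to the same 40.0% baseline, suggesting a single flat Tech Center estimate underlies the per-statute deltas.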

Office Action

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This Office Action is sent in response to Applicant's Communication received on January 28, 2025 and February 27, 2025 for application number 19/039,636. This Office hereby acknowledges receipt of the following, which has been placed of record in the file: Specification, Drawings, Abstract and Claims.

3. Claims 1-20 are presented for examination.

Information Disclosure Statement

4. The information disclosure statement (IDS) submitted on April 18, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation
Nonfunctional Descriptive Material

6. Claim 20 recites "a non-transitory computer-readable storage medium storing a video bitstream that is generated by a video encoding method". There are no recitations of a processor or other element; merely bitstream content (a bitstream comprising an encoded signal). Under MPEP 2111.05(III), this claim is directed merely to a machine-readable medium. The Examiner finds that there is no disclosed or claimed functional relationship between the stored bitstream and the medium. Instead, the medium is merely a support or carrier for the bitstream being stored. Therefore, the stored bitstream and the medium should not be given patentable weight. See MPEP 2111.05, applying In re Lowry, 32 F.3d 1579, 1583-84, 32 USPQ2d 1031, 1035 (Fed. Cir. 1994); and In re Ngai, 367 F.3d 1336, 70 USPQ2d 1862 (Fed. Cir. 2004). As such, claim 20 is subject to a prior art rejection based on any non-transitory computer-readable medium known before the earliest effective filing date of the present application.

Claim Rejections - 35 USC § 102

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

7. Claims 1-7 and 11-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chen et al. (US 2022/0070486 A1) (hereinafter Chen).

Regarding claim 1, Chen discloses a method of video decoding [See Chen: at least Figs. 1, 4, 6, 25, 36, par. 8 regarding method for video decoder 30] performed at a computing system having memory and one or more processors [See Chen: at least Fig. 1 and par. 8, 57-68, 308-309 regarding Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure.], the method comprising: receiving a video bitstream comprising a plurality of blocks that includes a current block [See Chen: at least Fig. 25, par.
55, 246-247 regarding Video data memory 151 may store encoded video data, such as an encoded video bitstream, to be decoded by the components of video decoder 30…In this example, a video coder (e.g., a video encoder or a video decoder) may determine a motion vector of a non-adjacent block of a current picture of the video data. The non-adjacent block is non-adjacent to a current block of the current picture…]; identifying a reference block using a template-matching process[See Chen: at least Fig. 4, and par. 96, 124 regarding a video coder may determine a reference block based on samples of a reference picture…As shown in FIG. 4, template matching is used to derive motion information of the current CU by finding the best match between a template (top and/or left neighboring blocks of the current CU) in the current picture and a block (same size to the template) in a reference picture…]; identifying intra prediction information for the reference block[See Chen: at least Fig. 6, par. 91 and 146 regarding For instance, as part of decoding a picture of the video data, video decoder 30 may use inter prediction or intra prediction to generate predictive blocks… in the example of FIG. 6, a current block 600 in a current picture 602 has a first motion vector 604 (MV0) and a second motion vector 606 (MV1). Motion vector 604 points to a reference block 608 in a list0 reference picture 610. Motion vector 606 points to a reference block 612 in a list1 reference picture 614. Reference block 608 and reference block 612 may also be referred to herein as prediction blocks…]; including the intra prediction information in a most probable mode (MPM) list[See Chen: at least Fig. 36, par. 301-302 regarding In the example of FIG. 36, video decoder 30 may determine a plurality of MPMs (3600). 
Each respective MPM of the plurality of MPMs specifies a respective intra prediction mode of a respective block… In some examples, as part of determining the plurality of MPMs, video decoder 30 may determine an ordered list of the MPMs…]; and reconstructing the current block using information from the MPM list [ See Chen: at least Fig. 36, par. 177, 301-304 regarding Video decoder 30 may also determine a predictive block based on the motion vector of the current block. Video decoder 30 may then reconstruct, based on the predictive block, sample values of the current picture… In some examples, as part of determining the plurality of MPMs, video decoder 30 may determine an ordered list of the MPMs… Furthermore, in the example of FIG. 36, video decoder 30 may generate a predictive block based on an intra prediction mode specified by an MPM of the plurality of MPMs (3602). Additionally, video decoder 30 may reconstruct, based on the predictive block, sample values of the current picture (3604)…]. Regarding claim 2, Chen discloses all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses wherein the template-matching process includes searching a set of blocks within a predefined area to identify the reference block [See Chen: at least Fig. 6, par. 121, 146 regarding After identifying reference block 608 and reference block 612, a video coder may generate a predictive block 616 as a weighted average of reference block 608 and reference block 612. Predictive block 616 may also be referred to herein as a bilateral template… the video coder has identified block 618 of reference picture 610 as the best match for predictive block 616. The video coder may also search in reference picture 614 for a block that best matches predictive block 616. In the example of FIG. 
6, the video coder has identified block 620 as the best match for predictive block 616… a local search (predefined area) based on bilateral matching or template matching around the starting point is performed and the MV that results in the minimum matching cost is taken as the MV for the whole CU…]. Regarding claim 3, Chen discloses all of the limitations of claim 2, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses wherein the template-matching process includes identifying more than one reference block within the predefined area[See Chen: at least Fig. 6, par. 121, 146 regarding After identifying reference block 608 and reference block 612, a video coder may generate a predictive block 616 as a weighted average of reference block 608 and reference block 612. Predictive block 616 may also be referred to herein as a bilateral template… the video coder has identified block 618 of reference picture 610 as the best match for predictive block 616. The video coder may also search in reference picture 614 for a block that best matches predictive block 616. In the example of FIG. 6, the video coder has identified block 620 as the best match for predictive block 616… a local search (predefined area) based on bilateral matching or template matching around the starting point is performed and the MV that results in the minimum matching cost is taken as the MV for the whole CU…]. Regarding claim 4, Chen discloses all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses wherein the intra prediction information is identified by checking at least one position of the reference block [See Chen: at least par. 96, 154 regarding The first neighbor is an N×N block above sub-CU A (block c). If block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c). 
The second neighbor is a block to the left of the sub-CU A (block b). If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, staring at block b). The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame for a given list… In some examples, the video coder may determine the reference block such that each sample of the reference block is equal to a sample of the reference picture...]. Regarding claim 5, Chen discloses all of the limitations of claim 4, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses wherein the at least one position comprises a center position of the reference block [See Chen: at least Fig. 7, par. 151 regarding the motion information of the first merge candidate in a merge candidate list of current CU 700 is used to determine reference picture 704 and corresponding block 706…This way, in ATMVP, corresponding block 706 may be more accurately identified, compared with TMVP, wherein the corresponding block (sometimes called collocated block) is always in a bottom-right or center position relative to current CU 700]. Regarding claim 6, Chen discloses all of the limitations of claim 4, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses wherein the at least one position of the reference block is checked according to a predefined scanning order [See Chen: at least Fig. 8, par. 96, 153-154 regarding in spatial-temporal motion vector prediction, the motion vectors of the sub-CUs are derived recursively, following raster scan order... The first neighbor is an N×N block above sub-CU A (block c). If block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbor is a block to the left of the sub-CU A (block b). 
If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, staring at block b). The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame for a given list… the video coder may determine the reference block such that each sample of the reference block is equal to a sample of the reference picture…]. Regarding claim 7, Chen discloses all of the limitations of claim 4, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses wherein the at least one position of the reference block comprises an intra mode information field for the reference block[See Chen: at least par. 220, 222 regarding for each selected reference picture, the video coder may check the H and C blocks within the selected reference picture in the same order as used in HEVC (i.e., bottom-right, then center)…for each selected picture, the video coder checks more blocks (e.g., the co-located blocks of spatially adjacent and/or NA-blocks of the current block).. The MPM is derived from the intra modes of spatially adjacent blocks. In the case that the current luma prediction mode is one of three MPMs, only the MPM index is transmitted to the decoder. Otherwise, the index of the current luma prediction mode excluding the three MPMs is transmitted to the decoder by using a 5-bit fixed length code…]. Regarding claim 11, Chen discloses all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses further comprising parsing an indicator from the video bitstream, wherein the indicator indicates an index to the MPM list, and wherein the current block is reconstructed using information from the MPM list indicated by the index [See Chen: at least Fig. 36, par. 
67, 117, 177, 222, 301-304 regarding Storage media 28 may be configured to store encoded video data, such as encoded video data (e.g., a bitstream) received by input interface 26. in HEVC and potentially other codecs, a fixed candidate list size is used to decouple the candidate list construction and the parsing of the index… due to the increased number of intra prediction directions as compared to H.264/MPEG-4 AVC, HEVC considers three most probable modes (MPMs) when coding the luma intra prediction mode predictively, rather than the one most probable mode considered in H.264/MPEG-4 AVC. The MPM is derived from the intra modes of spatially adjacent blocks. In the case that the current luma prediction mode is one of three MPMs, only the MPM index is transmitted to the decoder. Otherwise, the index of the current luma prediction mode excluding the three MPMs is transmitted to the decoder by using a 5-bit fixed length code...Video decoder 30 may also determine a predictive block based on the motion vector of the current block. Video decoder 30 may then reconstruct, based on the predictive block, sample values of the current picture… In some examples, as part of determining the plurality of MPMs, video decoder 30 may determine an ordered list of the MPMs… in the example of FIG. 36, video decoder 30 may generate a predictive block based on an intra prediction mode specified by an MPM of the plurality of MPMs (3602). Additionally, video decoder 30 may reconstruct, based on the predictive block, sample values of the current picture (3604)…]. Regarding claim 12, Chen discloses all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses wherein the intra prediction mode obtained via the template- matching process is added to the MPM list before intra mode information from non-adjacent neighboring blocks of the current block [See Chen: at least Fig. 36, par. 
154, 206-207, 301-304 regarding a CU-level motion search is first performed, followed by sub-CU level motion refinement. At the CU level, a video coder derives an initial motion vector for the whole CU based on bilateral matching or template matching. To derive the initial motion vector for the whole CU, the video coder may first generate a list of MV candidates (FRUC CU level MV candidates set) and the video coder selects the candidate which leads to the minimum matching cost as the starting point for further CU level refinement. Then, the video coder performs a local search based on bilateral matching or template matching around the starting point. The video coder then takes the MV that results in the minimum matching cost as the MV for the whole CU. Subsequently, the video coder may further refine the motion information at the sub-CU level with a FRUC sub-CU level MV candidates set which contains the derived CU motion vectors… The video coder may also refine the CU-level motion vector at a sub-CU level with a set of FRUC sub-CU level motion vector candidates. In this example, at least one of the set of CU-level FRUC motion vector candidates and the set of FRUC sub-CU level motion vector candidates may include a NA-SMVP that specifies a motion vector of a non-adjacent block… as part of determining the plurality of MPMs, video decoder 30 may determine an ordered list of the MPMs…In some examples, the plurality of MPMs is a global MPM list that comprises MPMs specifying motion information for each block that is in the current picture and that is encoded prior to the current block…video decoder 30 may determine an MPM from the MPMs in the global motion vector candidate list. In some examples, video decoder 30 may store a plurality of non-adjacent MPMs in a first-in, first-out (FIFO) buffer…Furthermore, in the example of FIG. 
36, video decoder 30 may generate a predictive block based on an intra prediction mode specified by an MPM of the plurality of MPMs (3602)… The motion derivation for sub-CU A starts by identifying its two spatial neighbors. The first neighbor is an N×N block above sub-CU A (block c). If block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbor is a block to the left of the sub-CU A (block b). If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, staring at block b). The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame for a given list]. Regarding claim 13, Chen discloses all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses further comprising, after including the intra prediction information in the MPM list, sorting the MPM list, wherein the information from the MPM list used to reconstruct the current block corresponds to a top entry in the MPM list after the sorting is performed [See Chen: at least Fig. 36, par. 301-304 regarding as part of determining the plurality of MPMs, video decoder 30 may determine an ordered list [sorting] of the MPMs. In such examples, the MPMs based on non-adjacent blocks are ordered in the list according to frequency with which intra prediction modes are specified by non-adjacent blocks in a plurality of nonadjacent blocks…the plurality of MPMs is a global MPM list that comprises MPMs specifying motion information for each block that is in the current picture and that is encoded prior to the current block…video decoder 30 may determine an MPM from the MPMs in the global motion vector candidate list. In some examples, video decoder 30 may store a plurality of non-adjacent MPMs in a first-in, first-out (FIFO) [top entry] buffer. 
In such examples, the plurality of non-adjacent MPMs includes a non-adjacent MPM specifying the intra prediction mode of the nonadjacent block…Furthermore, in the example of FIG. 36, video decoder 30 may generate a predictive block based on an intra prediction mode specified by an MPM of the plurality of MPMs (3602). Additionally, video decoder 30 may reconstruct, based on the predictive block, sample values of the current picture (3604)..]. Regarding claim 14, Chen discloses all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses wherein applying the template-matching process comprises deriving a template-matching cost for the reference block [See Chen: at least Fig. 6, par. 145 -146 regarding a bilateral template is generated as a weighted combination (i.e. average) of the two prediction blocks, from the initial MV0 of list0 and MVI of list 1, respectively, as shown in FIG. 6. The template matching operation may include or consist of calculating cost measures between the generated bilateral template and the sample region (around the initial prediction block) in the reference picture. For each of the two reference pictures, the MV that yields the minimum template cost may be considered as the updated MV of that list to replace the original one. The template cost may be calculated as the sum of absolute differences (SAD) or sum of squared differences (SSD) between the current template and the reference samples…Thus, in the example of FIG. 6, a current block 600 in a current picture 602 has a first motion vector 604 (MV0) and a second motion vector 606 (MVI). Motion vector 604 points to a reference block 608 in a list0 reference picture 610.]. Regarding claim 15, Chen discloses all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. 
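The template-matching cost the Office Action quotes for claim 14 (a sum of absolute differences, SAD, or sum of squared differences, SSD, between the current template and candidate reference samples, with the minimum-cost candidate taken as the match) can be sketched as follows. The block contents and candidate search are illustrative only, not taken from Chen:

```python
# Template-matching cost per the quoted passage (Chen par. 145-146):
# cost = SAD or SSD between the current template and reference samples;
# the candidate with minimum cost is selected as the best match.
def sad(template, candidate):
    # Sum of absolute differences over two equally sized 2-D sample blocks.
    return sum(abs(t - c) for row_t, row_c in zip(template, candidate)
                          for t, c in zip(row_t, row_c))

def ssd(template, candidate):
    # Sum of squared differences over the same blocks.
    return sum((t - c) ** 2 for row_t, row_c in zip(template, candidate)
                            for t, c in zip(row_t, row_c))

def best_match(template, candidates, cost=sad):
    # Return the candidate block with the minimum matching cost.
    return min(candidates, key=lambda c: cost(template, c))

# Hypothetical 2x2 luma samples (not from Chen or the application):
template = [[10, 12], [11, 13]]
candidates = [[[40, 40], [40, 40]],   # poor match (SAD = 114)
              [[10, 13], [11, 12]]]   # near-identical match (SAD = 2)
print(best_match(template, candidates) == candidates[1])  # True
```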
Further on, Chen discloses wherein the template-matching process is applied to a search area corresponding to a reconstructed portion of the current picture [See Chen: at least Figs. 4, 26-28, par. 124, 179, 206, 277 regarding As shown in FIG. 4, template matching is used to derive motion information of the current CU by finding the best match between a template (top and/or left neighboring blocks of the current CU) in the current picture and a block (same size to the template) in a reference picture…In particular, a CU-level motion search is first performed, followed by sub-CU level motion refinement. At the CU level, a video coder derives an initial motion vector for the whole CU based on bilateral matching or template matching... Then, the video coder performs a local search based on bilateral matching or template matching around the starting point…FIG. 28 is a flowchart illustrating an example operation for determining a NA-SMVP using FRUC motion vector candidates…The video coder may determine a CU-level motion vector at least in part by performing a local search starting from a selected CU-level FRUC motion vector candidate (2804)… at least one of the set of CU-level FRUC motion vector candidates and the set of FRUC sub-CU level motion vector candidates includes a NA-SMVP that specifies the motion vector of the nonadjacent block of FIG. 26 and FIG. 27…As shown in FIG. 15, the non-adjacent blocks 1500 are reconstructed blocks that are not immediately adjacent to a current block 1502.]. Regarding claim 16, Chen discloses all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses further comprising identifying a template-matching type from a set of template-matching types, wherein the template-matching process is applied using the template-matching type[See Chen: at least Fig. 4 and par. 124 regarding As shown in FIG. 
4, template matching is used to derive motion information of the current CU by finding the best match between a template (top and/or left neighboring blocks of the current CU) [template-matching type] in the current picture and a block (same size to the template) in a reference picture.]. Regarding claim 17, Chen discloses all of the limitations of claim 16, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses wherein the template-matching type is identified based on indicator signaled in the video bitstream[See Chen: at least Fig. 4 and par. 100, 124, 247-248 regarding As shown in FIG. 4, template matching is used to derive motion information of the current CU by finding the best match between a template (top and/or left neighboring blocks of the current CU) [template-matching type] in the current picture and a block (same size to the template) in a reference picture…Video encoder 20 may signal motion information of a video unit in various ways. Such motion information may include motion vectors, reference indexes, reference picture list indicators, and/or other data related to motion…Video data memory 151 may store encoded video data, such as an encoded video bitstream, to be decoded by the components of video decoder 30]. Regarding claim 18, Chen discloses all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Chen discloses wherein the template-matching process uses a subsampled template [See Chen: at least Fig. 4 and par.124, 206 regarding As shown in FIG. 4, template matching is used to derive motion information of the current CU by finding the best match between a template (top and/or left neighboring blocks of the current CU) in the current picture and a block (same size to the template) in a reference picture… In particular, a CU-level motion search is first performed, followed by sub-CU level motion refinement. 
At the CU level, a video coder derives an initial motion vector for the whole CU based on bilateral matching or template matching ... Then, the video coder performs a local search based on bilateral matching or template matching around the starting point. The video coder then takes the MV that results in the minimum matching cost as the MV for the whole CU. Subsequently, the video coder may further refine the motion information at the sub-CU level with a FRUC sub-CU level MV candidates set which contains the derived CU motion vectors.]. Regarding claim 19, Chen discloses a method of video encoding performed at a computing system having memory and one or more processors[See Chen: at least Figs. 1, 24-25 and par. 8, 57-68, 308-309 regarding Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure.], the method comprising: receiving video data comprising a current picture that includes plurality of blocks, the plurality of blocks including a current block[See Chen: at least Figs. 1, 24-25, par. 55, 228-230, 247 regarding Video data memory 101 may be configured to store video data to be encoded by the components of video encoder 20. The video data stored in video data memory 101 may be obtained, for example, from video source 18… Video encoder 20 receives video data…a video coder (e.g., a video encoder or a video decoder) may determine a motion vector of a non-adjacent block of a current picture of the video data. 
The non-adjacent block is non-adjacent to a current block of the current picture.]; identifying a reference block using a template-matching process[See Chen: at least Fig. 4, and par. 96, 124 regarding a video coder may determine a reference block based on samples of a reference picture…As shown in FIG. 4, template matching is used to derive motion information of the current CU by finding the best match between a template (top and/or left neighboring blocks of the current CU) in the current picture and a block (same size to the template) in a reference picture…]; identifying intra prediction information for the reference block [See Chen: at least Fig. 6, par. 73 and 146 regarding in the example of FIG. 6, a current block 600 in a current picture 602 has a first motion vector 604 (MV0) and a second motion vector 606 (MV1). Motion vector 604 points to a reference block 608 in a list0 reference picture 610. Motion vector 606 points to a reference block 612 in a list1 reference picture 614. Reference block 608 and reference block 612 may also be referred to herein as prediction blocks…to encode a block of the picture, video encoder 20 performs intra prediction or inter prediction to generate one or more predictive blocks…]; including the intra prediction information in a most probable mode (MPM) list[See Chen: at least Fig. 35, par. 297-300 regarding video encoder 20 may determine a plurality of Most Probable Modes (MPMs) (3500). Each respective MPM of the plurality of MPMs specifies a respective intra prediction mode of a respective block… video encoder 20 may determine a global MPM list that comprises MPMs specifying motion information for each block that is in the current picture and that is encoded prior to the current block… video encoder 20 may generate a predictive block based on an intra prediction mode specified by an MPM of the plurality of MPMs (3502)]; and encoding the current block using information from the MPM list [See Chen: at least Fig. 35, par. 
297-300 regarding video encoder 20 may determine a global MPM list that comprises MPMs specifying motion information for each block that is in the current picture and that is encoded prior to the current block.. video encoder 20 may generate a predictive block based on an intra prediction mode specified by an MPM of the plurality of MPMs (3502).]. Regarding claim 20, Chen discloses a non-transitory computer-readable storage medium storing a video bitstream that is generated by a video encoding method[See Chen: at least Fig. 1 and par. 8, 57-68, 308-309 regarding Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure.], the video encoding method comprising: receiving video data comprising a current picture that includes plurality of blocks, the plurality of blocks including a current block[See Chen: at least Figs. 1, 24-25, par. 55, 228-230, 247 regarding Video data memory 101 may be configured to store video data to be encoded by the components of video encoder 20. The video data stored in video data memory 101 may be obtained, for example, from video source 18… Video encoder 20 receives video data…a video coder (e.g., a video encoder or a video decoder) may determine a motion vector of a non-adjacent block of a current picture of the video data. 
The non-adjacent block is non-adjacent to a current block of the current picture.]; identifying a reference block using a template-matching process[See Chen: at least Fig. 4, and par. 96, 124 regarding a video coder may determine a reference block based on samples of a reference picture…As shown in FIG. 4, template matching is used to derive motion information of the current CU by finding the best match between a template (top and/or left neighboring blocks of the current CU) in the current picture and a block (same size to the template) in a reference picture…]; identifying intra prediction information for the reference block [See Chen: at least Fig. 6, par. 73 and 146 regarding in the example of FIG. 6, a current block 600 in a current picture 602 has a first motion vector 604 (MV0) and a second motion vector 606 (MV1). Motion vector 604 points to a reference block 608 in a list0 reference picture 610. Motion vector 606 points to a reference block 612 in a list1 reference picture 614. Reference block 608 and reference block 612 may also be referred to herein as prediction blocks…to encode a block of the picture, video encoder 20 performs intra prediction or inter prediction to generate one or more predictive blocks…]; including the intra prediction information in a most probable mode (MPM) list[See Chen: at least Fig. 35, par. 297-300 regarding video encoder 20 may determine a plurality of Most Probable Modes (MPMs) (3500). Each respective MPM of the plurality of MPMs specifies a respective intra prediction mode of a respective block… video encoder 20 may determine a global MPM list that comprises MPMs specifying motion information for each block that is in the current picture and that is encoded prior to the current block… video encoder 20 may generate a predictive block based on an intra prediction mode specified by an MPM of the plurality of MPMs (3502)]; and encoding the current block using information from the MPM list [See Chen: at least Fig. 35, par. 
297-300 regarding video encoder 20 may determine a global MPM list that comprises MPMs specifying motion information for each block that is in the current picture and that is encoded prior to the current block… video encoder 20 may generate a predictive block based on an intra prediction mode specified by an MPM of the plurality of MPMs (3502)]. 8. Claim 20 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chen et al. (US 2018/0359483 A1) (hereinafter Chen2). Regarding claim 20, “A non-transitory computer-readable storage medium storing a video bitstream that is generated by a video encoding method” has been interpreted above as nonfunctional descriptive material under MPEP 2111.05(III) and the case law cited therein. As such, claim 20 is subject to a prior art rejection based on any non-transitory computer readable medium known before the earliest effective filing date of the present application. In other words, the proper interpretation of claim 20 is merely a machine-readable medium in which the medium is merely a support or carrier for the bitstream being stored, wherein the stored bitstream should not be given patentable weight. Chen2, which is analogous art, discloses a non-transitory computer-readable storage medium storing a video bitstream that is generated by a video encoding method [See Chen2: at least Fig. 1 and par. 57-68, 321]. As such, Chen2 anticipates the non-transitory computer-readable storage medium storing a video bitstream that is generated by a video encoding method. Claim Rejections - 35 USC § 103 9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. 10.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. 11. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2022/0070486 A1) (hereinafter Chen) in view of HEO et al. (US 2022/0217333 A1) (hereinafter Heo). Regarding claim 8, Chen discloses all of the limitations of claim 4, which are analyzed as previously discussed with respect to that claim. Chen does not explicitly disclose further comprising: when the at least one position of the reference block does not have available intra prediction information, forgoing populating the MPM list with intra prediction information corresponding to the reference block. However, Heo, from the same field of endeavor, teaches further comprising: when the at least one position of the reference block does not have available intra prediction information, forgoing populating the MPM list with intra prediction information corresponding to the reference block [See Heo: at least par.
94, 108, 110 regarding as an example, if the corresponding sample is located outside the picture, the corresponding sample may be a non-available sample. For example, if the current block 300 is located on the edge of the picture, some of the neighboring samples may not be available. As another example, if another CU including the corresponding sample is not coded yet, the corresponding sample may be a non-available sample… if the intra prediction is applied to the current block, an intra prediction mode applied to the current block may be derived based on the intra prediction mode of the neighboring block of the current block. For example, the decoding apparatus may derive a most probable mode (MPM) list based on the intra prediction mode of the neighboring block (for example, left neighboring block and/or top neighboring block) of the current block and additional candidate modes… The encoding apparatus/decoding apparatus may search for the neighboring blocks of the current block according to a specific order ... Meanwhile, after searching, if six MPM candidates are not derived (“forgoing populating”), the MPM candidate may be derived based on the intra prediction mode derived as the MPM candidate…]. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Chen with Heo teachings by including “further comprising: when the at least one position of the reference block does not have available intra prediction information, forgoing populating the MPM list with intra prediction information corresponding to the reference block” because this combination has the benefit of providing the step of deriving the intra prediction modes based on the MPM flag [See Heo: at least par. 111]. 12. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2022/0070486 A1) (hereinafter Chen) in view of XU et al. (US 2020/0404287 A1) (hereinafter Xu).
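The claim-8 availability condition discussed above (Heo ¶¶94, 108, 110: a neighboring position outside the picture, or one whose covering CU is not yet coded, yields no intra mode, so the MPM list is simply not populated from it) can be sketched as follows. This is an illustrative sketch only; the function names, the dict-based picture model, and the six-candidate cap are assumptions, not code from any cited reference.

```python
# Hypothetical sketch of "forgoing populating" the MPM list when a position
# has no available intra prediction information. Names are illustrative.

def neighbor_intra_mode(coded_modes, pos, pic_w, pic_h):
    """Return the intra mode recorded at pos, or None if unavailable."""
    x, y = pos
    if x < 0 or y < 0 or x >= pic_w or y >= pic_h:
        return None                 # position lies outside the picture
    return coded_modes.get(pos)     # None if the covering CU is not coded yet

def collect_mpm_candidates(coded_modes, positions, pic_w, pic_h, max_mpms=6):
    """Scan candidate positions in order, skipping unavailable ones."""
    mpms = []
    for pos in positions:
        mode = neighbor_intra_mode(coded_modes, pos, pic_w, pic_h)
        if mode is None:
            continue                # forgo populating from this position
        if mode not in mpms:
            mpms.append(mode)
        if len(mpms) == max_mpms:
            break
    return mpms
```

In this toy model, `coded_modes` maps already-coded positions to their intra modes; an out-of-picture position and a not-yet-coded position both contribute nothing, mirroring the Heo passage.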
Regarding claim 9, Chen discloses all of the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Further, Chen discloses further comprising: identifying a second reference block indicated by the block vector [See Chen: at least par. 96, 146, 154 regarding Reference block 608 and reference block 612 [second reference block] may also be referred to herein as prediction blocks. After identifying reference block 608 and reference block 612, a video coder may generate a predictive block 616 as a weighted average of reference block 608 and reference block 612… The first neighbor is an NxN block above sub-CU A (block c). If block c is not available or is intra coded, the other NxN blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbor is a block to the left of the sub-CU A (block b). If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b). The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame for a given list… the video coder may determine the reference block such that each sample of the reference block is equal to a sample of the reference picture]; and including the second intra prediction information in the MPM list [See Chen: at least par. 96, 146, 154, 297-300 regarding Reference block 608 and reference block 612 may also be referred to herein as prediction blocks. After identifying reference block 608 and reference block 612, a video coder may generate a predictive block 616 as a weighted average of reference block 608 and reference block 612… The first neighbor is an NxN block above sub-CU A (block c). If block c is not available or is intra coded, the other NxN blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbor is a block to the left of the sub-CU A (block b).
If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b). The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame for a given list… the video coder may determine the reference block such that each sample of the reference block is equal to a sample of the reference picture… video decoder 30 may determine an MPM from the MPMs in the global motion vector candidate list. In some examples, video decoder 30 may store a plurality of non-adjacent MPMs in a first-in, first-out (FIFO) buffer. In such examples, the plurality of non-adjacent MPMs includes a non-adjacent MPM specifying the intra prediction mode of the non-adjacent block. Furthermore, in such examples, video decoder 30 may update the FIFO buffer to remove an earliest-added non-adjacent MPM from the FIFO buffer and add an MPM to the FIFO buffer. The plurality of MPMs may include the MPMs in the FIFO buffer.]. Chen does not explicitly disclose further comprising: when the reference block has a corresponding block vector, identifying a second reference block indicated by the block vector. However, Xu teaches further comprising: when the reference block has a corresponding block vector, identifying a second reference block indicated by the block vector [See Xu: at least par. 168-169 regarding At (S1130), in response to determining that the current block is coded in IBC mode, a block vector that points to a first reference block of the current block is determined… At (S1140), an operation is performed on the block vector so that when the first reference block is not fully reconstructed or not within a valid search range of the current block, the block vector is modified to point to a second reference block that is in a fully reconstructed region and within the valid search range of the current block.].
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Chen with Xu teachings by including “further comprising: when the reference block has a corresponding block vector, identifying a second reference block indicated by the block vector” because this combination has the benefit of providing a step of determining the block vector based on an IBC AMVP mode or merge mode [See Xu: at least par. 168-169]. 13. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2022/0070486 A1) (hereinafter Chen) in view of LEE et al. (US 2024/0048693 A1) (hereinafter Lee). Regarding claim 10, Chen discloses all of the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Chen does not explicitly disclose wherein a most frequent intra mode of the reference block is used as the intra prediction information for the reference block. However, Lee teaches wherein a most frequent intra mode of the reference block is used as the intra prediction information for the reference block [See Lee: at least par. 184, 230 regarding For example, the most frequent one of the intra prediction modes of the neighbor blocks adjacent to the current block may be derived as the intra prediction mode of the current block… A prediction block of the current block may be generated by using the selected reference block.]. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Chen with Lee teachings by including “wherein a most frequent intra mode of the reference block is used as the intra prediction information for the reference block” because this combination has the benefit of providing a step of deriving the intra prediction mode of the current block using a statistic value of the intra prediction modes of the selected neighbor blocks [See Lee: at least par.
229]. Conclusion 14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANA J PICON-FELICIANO whose telephone number is (571)272-5252. The examiner can normally be reached Monday-Friday 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Kelley can be reached at 571 272 7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Ana Picon-Feliciano/Examiner, Art Unit 2482 /CHRISTOPHER S KELLEY/Supervisory Patent Examiner, Art Unit 2482
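Read together, the limitations mapped above describe a flow in which a template-matching search selects a reference block whose intra prediction mode, if available, seeds the MPM list used to encode the current block. The following is a minimal sketch of that flow under stated assumptions: a SAD template cost, flat lists of samples, and dict-based candidate records. None of these names or structures come from the cited references or from any codec implementation.

```python
# Hypothetical sketch of template-matching-based MPM candidate derivation.
# All identifiers are illustrative assumptions, not the claimed method itself.

def sad(a, b):
    """Sum of absolute differences between two equal-size sample lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def template_match(current_template, candidate_blocks):
    """Pick the candidate whose template best matches the current block's."""
    return min(candidate_blocks,
               key=lambda blk: sad(current_template, blk["template"]))

def derive_mpm_list(current_template, candidate_blocks, default_modes):
    """Build an MPM list seeded by the template-matched reference block."""
    mpm_list = []
    ref = template_match(current_template, candidate_blocks)
    # Include the reference block's intra mode only if it is available
    # (cf. the claim-8 limitation: otherwise forgo populating from it).
    if ref.get("intra_mode") is not None:
        mpm_list.append(ref["intra_mode"])
    # Fill the remaining entries from default candidate modes, no duplicates.
    for mode in default_modes:
        if mode not in mpm_list:
            mpm_list.append(mode)
    return mpm_list
```

When the best-matching block carries no intra prediction information (for example, it was inter coded), the sketch falls back to the default modes alone, mirroring the "forgoing populating" limitation discussed for claim 8.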

Prosecution Timeline

Jan 28, 2025
Application Filed
Jan 08, 2026
Non-Final Rejection — §102, §103
Apr 06, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598287
DISPLAY DEVICE, METHOD, COMPUTER PROGRAM CODE, AND APPARATUS FOR PROVIDING A CORRECTION MAP FOR A DISPLAY DEVICE, METHOD AND COMPUTER PROGRAM CODE FOR OPERATING A DISPLAY DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12593021
ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING THEREOF
2y 5m to grant Granted Mar 31, 2026
Patent 12567163
IMAGING SYSTEM AND OBJECT DEPTH ESTIMATION METHOD
2y 5m to grant Granted Mar 03, 2026
Patent 12561788
FLUORESCENCE MICROSCOPY METROLOGY SYSTEM AND METHOD OF OPERATING FLUORESCENCE MICROSCOPY METROLOGY SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12554122
TECHNIQUES FOR PRODUCING IMAGERY IN A VISUAL EFFECTS SYSTEM
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
69%
Grant Probability
90%
With Interview (+21.8%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 428 resolved cases by this examiner. Grant probability derived from career allow rate.
