Prosecution Insights
Last updated: April 19, 2026
Application No. 18/825,577

METHOD AND APPARATUS FOR VIDEO CODING USING PALETTE MODE BASED ON PROXIMITY INFORMATION

Final Rejection (§101, §103, §112)
Filed: Sep 05, 2024
Examiner: RETALLICK, KAITLIN A
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Kwangwoon University Industry-Academic Collaboration Foundation
OA Round: 2 (Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 7m
Grant Probability With Interview: 86%
Examiner Intelligence

Career Allow Rate: 75%, above average (388 granted / 515 resolved; +17.3% vs Tech Center average)
Interview Lift: +10.7% (moderate) for resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 27 applications currently pending
Career History: 542 total applications across all art units

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 58.4% (+18.4% vs TC avg)
§102: 7.0% (-33.0% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)

Comparisons are against the Tech Center average estimate; based on career data from 515 resolved cases.

Office Action (§101, §103, §112)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Application

Claim 6 has been cancelled. Claims 1-5 and 7-18 are currently pending in this application.

Claim Rejections - 35 USC § 101

Claim 18 has been amended in order to overcome the 35 U.S.C. 101 rejection. Thus, the 35 U.S.C. 101 rejection of claim 18 has been withdrawn.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 2 recites the limitation "the index block" in line 7. There is insufficient antecedent basis for this limitation in the claim.

Claim 17 recites the limitation "the bitstream" in line 3. There is insufficient antecedent basis for this limitation in the claim.
In regards to claim 17, the claim limitation states, "The method of claim 11, wherein determining the palette table of the current block comprises: obtaining the palette table of the current block from the bitstream." The method of claim 11 is an encoding method and would not be obtaining information from a bitstream but generating a bitstream (See Applicant's specification, at least [0068]). According to the Applicant's specification, the video encoding device may determine the palette table for the current block, for example, by applying clustering to the samples in the current block [0107]. Further, the Applicant's specification states, "The video encoding device determines a palette table and derives an index map (S1500) according to the first method. Here, the first method uses adjacent information of the current block, and the adjacent information includes a block vector of neighbors of the current block or a template within a previously reconstructed neighboring region of the current block." [0149]. Further, the video encoding device may apply a palette table derivation method to the candidate block to determine a palette table. Here, the palette table derivation method may utilize quantization steps, clustering, or segmentation. [See Applicant's specification, 0154]. Thus, one of ordinary skill in the art would not understand what is being claimed in regards to obtaining the palette table of the current block from the bitstream when the encoding method is occurring.

Response to Arguments

Presented arguments have been fully considered, but are rendered moot in view of new ground(s) of rejection necessitated by amendment(s) initiated by the applicant(s).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-5 and 10-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over JANG et al. (Hereafter, "Jang") [US 2025/0142095 A1] in view of LIM (Hereafter, "Lim") [US 2022/0295046 A1] in further view of ZHU et al. (Hereafter, "Zhu") [US 2018/0041774 A1].

In regards to claim 1, Jang discloses a method performed by a video decoding device for reconstructing a current block ([Abstract] an image decoding method performed by an image decoding apparatus for decoding a current block [0357] Referring to FIG.
26, when the palette mode applies to the current block, the image decoding apparatus may obtain palette information and palette index prediction information of the current block from a bitstream (S2610).), the method comprising: generating a palette table of the current block ([0362] The image decoding apparatus may construct a palette table for the current block based on the palette predictor.); determining a reference block in a reconstructed region in a current frame using at least one of a block vector or template matching; deriving an index map of the current block ([0365] The image decoding apparatus may generate a palette index map for the current block based on the palette index prediction information obtained from the bitstream (S2630). Specifically, the image decoding apparatus may generate the palette index map, by mapping the palette index to each sample in the current block according to a predetermined scan method, using the palette index obtained from the bitstream, the palette sample mode and the run-value of the palette sample mode.) based on sample values of the reference block, wherein the index map comprises an index for each sample of the current block ([0359] The palette index prediction information may include information on a palette index map for the current block. In an example, the image decoding apparatus may obtain at least one palette index mapped to the current block, by decoding PaletteIndexMap[xC][yC] included in the bitstream. 
Here, xC and yC may be coordinate indicators specifying relative positions of the current sample from the top-left sample of a CTU (or slice) to which the current block belongs.), and wherein each index indicates an entry in the palette table having a color value corresponding to each sample of the current block ([0369] mapping the value of each palette index in the palette index map to a representative color value by referring to the palette table); and reconstructing samples of the current block based on the index map of the current block and the palette table of the current block ([0369] The image decoding apparatus may decode the current block based on the palette table and the palette index map for the current block (S2640). Specifically, the image decoding apparatus may generate a prediction block for the current block, by inversely mapping the value of each palette index in the palette index map to a representative color value by referring to the palette table.).

Jang discloses performing intra block copy (IBC) prediction, wherein IBC is a method of predicting a current picture using a previously reconstructed reference block in the current picture at a location apart from the current block by a predetermined distance. When IBC is applied, the location of the reference block in the current picture may be encoded as a vector (block vector) corresponding to the predetermined distance. [See Jang, 0084]. However, Jang fails to explicitly disclose determining a reference block in a reconstructed region in a current frame using at least one of a block vector or template matching; deriving an index map of the current block based on sample values of the reference block.
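For orientation, the reconstruction step mapped to Jang in the claim 1 analysis above — each index in the index map selects an entry in the palette table, and the selected entries rebuild the samples of the block — can be sketched in a few lines. This is an illustrative sketch only; the function name and sample values below are hypothetical and do not come from Jang or the claims.

```python
def reconstruct_palette_block(index_map, palette_table):
    """Map every index in the index map to its palette-table color.

    index_map     : 2-D list of ints, one palette index per sample
    palette_table : list of color values, one entry per index
    """
    return [[palette_table[idx] for idx in row] for row in index_map]

# Hypothetical 2x2 block with a two-entry palette (dark and bright colors).
palette = [(0, 0, 0), (255, 255, 255)]
index_map = [[0, 1],
             [1, 0]]
block = reconstruct_palette_block(index_map, palette)
# block[0][0] == (0, 0, 0); block[0][1] == (255, 255, 255)
```

The dispute in the rejection is not over this mapping step, which all references share, but over where the index map comes from (the bitstream versus sample values of a reference block).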
Lim discloses a method performed by a video decoding device for reconstructing a current block ([Abstract] a method for decoding a video), the method comprising: generating a palette table of the current block ([Abstract] a step for configuring a current palette table on the basis of a previous palette table); determining a reference block in a reconstructed region in a current frame using at least one of a block vector or template matching ([0146-0148 and Fig. 11] A method of referring to a palette table pre-used in a block specified by a BV by using a BV (block vector). The block vector (BV) is received by the decoding device, wherein the block vector determines a region which is most similar to the current block in the current frame); deriving an index map of the current block based on sample values of the reference block ([Fig. 22] current encoding block [0209] For example, information representing that it is a pixel using a BV is assigned to index 0 of a palette table. Subsequently, for a pixel position indicated as index 0, pixels at the same position in a block searched by using a BV are assigned to a position of index 0.), wherein the index map comprises an index for each sample of the current block ([0149] Alternatively, a BV may be encoded based on a BV of a neighboring block. For example, if an encoding method using a BV was used around a current block, a corresponding BV may be used by merging with a current block. In this case, a position referring to a BV may include at least one of blocks shown in FIG. 10 or a collocated block included in a collocated picture. A position to refer to a BV is set in a manner similar to that in FIG. 
10, which position was referenced is indicated as an index and it is encoded and transmitted to a decoding device.), and wherein each index indicates an entry in the palette table having a color value corresponding to each sample of the current block ([Abstract] a step for determining a palette index in units of pixels in the current block); and reconstructing samples of the current block based on the index map of the current block and the palette table of the current block ([Abstract] a step for restoring the pixels in the current block on the basis of the palette table and the palette index).

It would have been obvious to modify the teachings of Jang with the teachings of Lim in order to improve the encoding/decoding efficiency of the palette mode.

Zhu discloses a method performed by a video decoding device for reconstructing a current block ([Abstract] Method and apparatus for video coding using palette coding modes.), the method comprising: generating a palette table of the current block ([0004] The basic idea behind the palette mode is that the samples in the CU can be represented by a small set of representative colour values. This set of representative colour values is referred to as the palette for the block. [0042] In still another embodiment, the palette of the current block can be predicted by the palette of the reference block. In still another embodiment, the palette of the current block can be copied as the palette of the reference block.); determining a reference block in a reconstructed region in a current frame using at least one of a block vector or template matching ([0010] The reference block can be in the current picture or the current depth image. The reference block can be from a reconstructed picture/a reconstructed depth image or a prediction picture/a prediction depth image of the current block.
[0041] reference block 610 is obtained [0044] The reference block can be from the reconstructed image or a prediction image of the current picture. [0045] The position of the reference block can be signalled from the encoder to the decoder explicitly. Therefore, a decoder can locate the reference block according to the signalled position information. Furthermore, the position of the reference block can be signalled from the encoder to the decoder explicitly in the same way as the signalling method of block vector (BV) in the Intra block copy (IBC) mode.); deriving an index map of the current block based on sample values of the reference block ([0041] The non-local index map is derived from a reference block comprising reference samples. The reference block 610 is first obtained, and then a colour quantization algorithm is used to derive the indices of the non-local index map 620 from the reference samples of the reference block as shown in FIG. 6. The derived non-local index map is then used to predict the current index map 630.), wherein the index map comprises an index for each sample of the current block, and wherein each index indicates an entry in the palette table having a color value corresponding to each sample of the current block ([0006] An index map 230 is generated based on the palette and palette coding is applied to the index map. As mentioned above, the term ‘index map’ refers to the indices of pixels in a block. [0004] The basic idea behind the palette mode is that the samples in the CU can be represented by a small set of representative colour values. This set of representative colour values is referred to as the palette for the block. Each sample in the block can be assigned to a nearest index in the palette. FIG. 1 illustrates an example of palette coding, where pixels in a current block 110 are represented by palette indices from a palette 120. 
Since each pixel can be represented by one palette index from a small-size palette, therefore, the colour index coding becomes very effective for screen content materials.); and reconstructing samples of the current block based on the index map of the current block and the palette table of the current block ([0039] a sample in the current block can be reconstructed by using the colour indicated by the corresponding index in the palette).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jang and Lim with the determination of a reconstructed reference block using a block vector method and using the non-local index map derived from the reference samples of the reference block to derive the current index map for the current block as taught by Zhu in order to improve coding efficiency over the conventional palette coding [See Zhu].

In regards to claim 2, the limitations of claim 1 have been addressed. Jang discloses further comprising: decoding, from a bitstream, an index map derivation flag that indicates whether to use the sample values of the reference block to derive the index map ([0253] Except for the topmost row of the current block in the horizontal traverse scan, the leftmost column of the current block in the vertical traverse scan and the case where an immediately previous palette sample mode is 'COPY_ABOVE', information on the palette sample mode being used may be signaled using a predetermined flag (e.g., copy_above_palette_indices_flag). For example, copy_above_palette_indices_flag having a first value (e.g., 0) may specify that the predetermined palette index mapped to the current block is encoded using the 'INDEX' mode.
In contrast, copy_above_palette_indices_flag having a second value (e.g., 1) may specify that the predetermined palette index mapped to the current block is encoded using the 'COPY_ABOVE' mode.); and checking the index map derivation flag; wherein, when the index map derivation flag is true, the index map is derived based on the sample values of the index block ([0367] The palette sample mode may include an 'INDEX' mode and a 'COPY ABOVE' mode, as described above. In contrast, when the 'COPY_ABOVE' mode applies to the current sample, the value of the palette index mapped to the current sample may be determined to be the value of the palette index mapped to a neighboring sample present above (in case of horizontal traverse scan) or to the left (in case of vertical traverse scan) of the current sample.).

Zhu discloses further comprising: decoding, from a bitstream, an index map derivation flag that indicates whether to use the sample values of the reference block to derive the index map ([0070] a flag to indicate whether non-local index prediction is used for the current index map); and checking the index map derivation flag; wherein, when the index map derivation flag is true, the index map is derived based on the sample values of the index block ([0035] non-local index map can be used to predict one or more indices of the index map of the current block).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jang and Lim with the teachings of Zhu in order to improve coding efficiency over the conventional palette coding [See Zhu].

In regards to claim 3, the limitations of claim 2 have been addressed.
Jang discloses further comprising: when the index map derivation flag is false, decoding the index map from the bitstream ([0253] Except for the topmost row of the current block in the horizontal traverse scan, the leftmost column of the current block in the vertical traverse scan and the case where an immediately previous palette sample mode is 'COPY_ABOVE', information on the palette sample mode being used may be signaled using a predetermined flag (e.g., copy_above_palette_indices_flag). For example, copy_above_palette_indices_flag having a first value (e.g., 0) may specify that the predetermined palette index mapped to the current block is encoded using the 'INDEX' mode. In contrast, copy_above_palette_indices_flag having a second value (e.g., 1) may specify that the predetermined palette index mapped to the current block is encoded using the 'COPY_ABOVE' mode. [0367] When the 'INDEX' mode applies to the current sample, the value of the palette index mapped to the current sample may be directly obtained from the bitstream.).

In regards to claim 4, the limitations of claim 1 have been addressed. Jang discloses wherein generating the palette table comprises: decoding from a bitstream a series of reuse flags that indicate whether entries included in a palette prediction list are to be reused; and including reused entries from the palette prediction list in the palette table based on values of the series of reuse flags ([0242] The palette table may include at least one palette entry included in the palette predictor and at least one palette index for identifying the palette entry. For each palette entry included in the palette predictor, a reuse flag specifying whether the palette entry is included in the palette table may be signaled through a bitstream. In this case, the reuse flag having a first value (e.g., 0) may specify that the palette entry is not included in the palette table.
In contrast, the reuse flag having a second value (e.g., 1) may specify that the corresponding palette entry is included in the palette table. The reuse flag may be encoded, for example, using run-length coding for a value of 0.).

In regards to claim 5, the limitations of claim 4 have been addressed. Jang discloses wherein generating the palette table ([0362] The image decoding apparatus may construct a palette table for the current block based on the palette predictor.) comprises: decoding new entries from the bitstream or implicitly deriving the new entries ([0243] In addition, the palette table may include at least one new palette entry which is not included in the palette predictor and at least one palette index for identifying the new palette entry. Information (e.g., a total number, a component value, etc.) on the new palette entry may be encoded, for example, using a 0-th exponential Golomb code and signaled through a bitstream. [0358] In addition, in an example, the image decoding apparatus may obtain the information on the new palette entry, by decoding new_palette_entries[cIdx][i] included in the bitstream. In PredictorPaletteEntries[cIdx][i] and new_palette_entries[cIdx][i], cIdx may mean a color component.); adding the new entries to the palette table ([0362] The palette table may include at least one of the palette entry included in the palette predictor or the new palette entry obtained from the bitstream and a palette index for identifying each palette entry.); and updating the palette prediction list ([0374] In an example, the image decoding apparatus may update the palette predictor by adding at least one palette entry included in the palette table to the palette predictor.).

In regards to claim 10, the limitations of claim 1 have been addressed.
Jang fails to explicitly disclose wherein deriving the index map of the current block comprises: deriving an index map of the reference block based on the sample values of the reference block and sample values using the palette table of the current block; and setting the index map of the current block equal to the index map of the reference block.

Zhu discloses wherein deriving the index map of the current block ([0041] The derived non-local index map is then used to predict the current index map 630.) comprises: deriving an index map of the reference block based on the sample values of the reference block and sample values using the palette table of the current block ([0041] In yet another embodiment, the non-local index map is derived from a reference block comprising reference samples. The reference block 610 is first obtained, and then a colour quantization algorithm is used to derive the indices of the non-local index map 620 from the reference samples of the reference block as shown in FIG. 6. The derived non-local index map is then used to predict the current index map 630. Colour quantization is the process for converting samples to indices, where each sample is assigned a corresponding index according to some criteria. For example, the corresponding index is the index of the nearest, based on any distance measurement, palette entry of the current block.); and setting the index map of the current block equal to the index map of the reference block ([0039] In yet another embodiment, the non-local index map for a current block is determined first, and then the indices in the current index map can be derived by directly copying the corresponding indices in the non-local index map.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jang and Lim with the teachings of Zhu in order to improve coding efficiency over the conventional palette coding [See Zhu].
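The colour-quantization step Zhu describes for claim 10 — assigning each reference sample the index of its nearest palette entry, so that the resulting index map can be copied to the current block — can be sketched as follows. This is a minimal sketch assuming scalar (luma-only) samples and absolute-difference distance; Zhu permits any distance measurement, and the names and values here are illustrative only.

```python
def quantize_to_indices(reference_samples, palette_table):
    """Assign each reference sample the index of its nearest palette entry."""
    def nearest(sample):
        # Index of the palette entry minimizing absolute difference.
        return min(range(len(palette_table)),
                   key=lambda i: abs(palette_table[i] - sample))
    return [[nearest(s) for s in row] for row in reference_samples]

palette = [16, 128, 235]          # hypothetical luma palette entries
reference = [[20, 120],           # hypothetical 2x2 reference block samples
             [200, 15]]
index_map = quantize_to_indices(reference, palette)
# index_map == [[0, 1], [2, 0]]
```

Under Zhu's [0039], this index map derived from the reference block would then simply be copied as the index map of the current block.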
Claim 11 lists all the same elements of claim 1, but in encoding form rather than decoding form. Therefore, the supporting rationale of the rejection to claim 1 applies equally as well to claim 11. Furthermore, regarding claim 11, Jang discloses encoding information on whether a palette mode is applied to the current block ([0265] First, referring to FIG. 17, a palette mode flag pred_mode_plt_flag may specify whether the palette mode applies to the current block (or the current CU). For example, a first value (e.g., 0) of pred_mode_plt_flag may specify that the palette mode may not apply for the current block. In contrast, a second value (e.g., 1) of pred_mode_plt_flag may specify that the palette mode applies to the current block. [0400] Meanwhile, information on whether the palette mode applies to the current block may be signaled using a palette mode flag (e.g., pred_mode_plt_flag).).

Claims 12-14 list all the same elements of claims 2 and 3, but in encoding form rather than decoding form. Therefore, the supporting rationales of the rejections to claims 2 and 3 apply equally as well to claims 12-14.

Claims 15 and 16 list all the same elements of claims 4 and 5, but in encoding form rather than decoding form. Therefore, the supporting rationales of the rejections to claims 4 and 5 apply equally as well to claims 15 and 16.

In regards to claim 17, the limitations of claim 11 have been addressed. Jang discloses wherein determining the palette table of the current block comprises: obtaining the palette table of the current block from the bitstream ([0411] The image decoding apparatus may construct a palette predictor and a palette table for the current block based on the palette information obtained from the bitstream (S2940).).

Claim 18 is the same as claim 11 but in non-transitory computer-readable recording medium form rather than method form. Thus, the supporting rationale for the rejection of claim 11 applies as well to claim 18.
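The reuse-flag mechanism cited from Jang [0242]-[0243] for claims 4-5 (and, in encoding form, claims 15-16) can be sketched as follows: per-entry reuse flags select entries from the palette predictor, new entries from the bitstream are appended, and the predictor is then updated from the resulting table. The function and variable names are illustrative, not Jang's, and the entries are simplified to scalars.

```python
def build_palette_table(predictor, reuse_flags, new_entries):
    """Keep predictor entries whose reuse flag is 1, then append new entries."""
    table = [entry for entry, flag in zip(predictor, reuse_flags) if flag == 1]
    table.extend(new_entries)
    return table

predictor = [10, 20, 30, 40]   # hypothetical palette prediction list
reuse_flags = [1, 0, 1, 0]     # one flag signaled per predictor entry
new_entries = [55]             # new entries decoded (or implicitly derived)
table = build_palette_table(predictor, reuse_flags, new_entries)
# table == [10, 30, 55]
```

Per Jang [0374], the predictor would then be updated by adding entries of this table back into the prediction list for the next block.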
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jang in view of Lim in further view of Zhu in even further view of NAM et al. (Hereafter, "Nam") [US 2022/0141485 A1].

In regards to claim 7, the limitations of claim 1 have been addressed. Jang fails to explicitly disclose wherein determining the reference block comprises: composing a block vector candidate list by using block vectors present at left, top, top-left, top-right, and bottom-left positions of the current block; decoding a candidate index; and determining the reference block based on a block vector derived from the block vector candidate list by using the candidate index.

Lim discloses wherein determining the reference block comprises: composing a block vector candidate list by using block vectors present at left, top, top-left, top-right, and bottom-left positions of the current block ([0149] Alternatively, a BV may be encoded based on a BV of a neighboring block. For example, if an encoding method using a BV was used around a current block, a corresponding BV may be used by merging with a current block. In this case, a position referring to a BV may include at least one of blocks shown in FIG. 10 or a collocated block included in a collocated picture.); decoding a candidate index ([0149] A position to refer to a BV is set in a manner similar to that in FIG. 10, which position was referenced is indicated as an index and it is encoded and transmitted to a decoding device.); and determining the reference block based on a block vector derived from the block vector candidate list by using the candidate index ([0147-0149] the position of the BV is referenced as an index to determine the neighboring block).
Zhu discloses wherein determining the reference block comprises: determining the reference block based on a block vector derived from the block vector candidate list by using the candidate index ([0013] The reference block can be selected from multiple reference block candidates and selection of the reference block can be signalled explicitly by an encoder or implicitly derived by a decoder.).

Nam discloses wherein determining the reference block comprises: composing a block vector candidate list by using block vectors present at left, top, top-left, top-right, and bottom-left positions of the current block ([0134] the block vectors in the list from neighboring candidate IBC coded blocks, wherein the merge list consists of spatial, HMVP, and pairwise candidates [0086] The neighboring reference samples of the current block may include samples adjacent to the left boundary of the current block having a size of nW×nH and a total of 2×nH samples neighboring the bottom-left, samples adjacent to the top boundary of the current block and a total of 2×nW samples neighboring the top-right, and one sample neighboring the top-left of the current block [0112] the spatial neighboring blocks may include a bottom left corner neighboring block, a left neighboring block, a top right corner neighboring block, a top neighboring block, and a top left corner neighboring block of the current block); decoding a candidate index ([0134] a merge candidate index); and determining the reference block based on a block vector derived from the block vector candidate list by using the candidate index ([0129] The decoding apparatus may derive a reference block for the current block in the current picture through the signaled block vector (motion vector), thereby driving a prediction signal (predicted block or predicted samples) for the current block.
[0134] The decoding apparatus may derive a reference block for the current block in the current picture through the signaled block vector (motion vector), thereby driving a prediction signal (predicted block or predicted samples) for the current block.).

It would have been obvious to modify the teachings of Jang with the teachings of Lim in order to improve the encoding/decoding efficiency of the palette mode. It would have been obvious to modify the teachings of Jang and Lim with the selection of the reference block from multiple reference block candidates being signalled to the decoder as taught by Zhu in order to improve coding efficiency over the conventional palette coding [See Zhu]. It would have been obvious to modify the teachings of Jang, Lim, and Zhu with the deriving of a reference block for the current block using the merge candidate index to select the block vector in a list as taught by Nam in order to improve compression efficiency and increase coding efficiency for screen contents [See Nam].

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jang in view of Lim in further view of Zhu in even further view of Nam in even further view of NIEN et al. (Hereafter, "Nien") [US 2023/0217013 A1].

In regards to claim 8, the limitations of claim 1 have been addressed. Jang fails to explicitly disclose wherein determining the reference block comprises: composing a block vector candidate list by using block vectors present at left, top, top-left, top-right, and bottom-left positions of the current block; and determining the reference block based on costs obtained by performing template matching on the current block and a block indicated by each candidate block vector of the block vector candidate list.
Lim discloses wherein determining the reference block comprises: composing a block vector candidate list by using block vectors present at left, top, top-left, top-right, and bottom-left positions of the current block ([0149] Alternatively, a BV may be encoded based on a BV of a neighboring block. For example, if an encoding method using a BV was used around a current block, a corresponding BV may be used by merging with a current block. In this case, a position referring to a BV may include at least one of blocks shown in FIG. 10 or a collocated block included in a collocated picture.).

Nam discloses wherein determining the reference block comprises: composing a block vector candidate list by using block vectors present at left, top, top-left, top-right, and bottom-left positions of the current block ([0134] the block vectors in the list from neighboring candidate IBC coded blocks, wherein the merge list consists of spatial, HMVP, and pairwise candidates [0086] The neighboring reference samples of the current block may include samples adjacent to the left boundary of the current block having a size of nW×nH and a total of 2×nH samples neighboring the bottom-left, samples adjacent to the top boundary of the current block and a total of 2×nW samples neighboring the top-right, and one sample neighboring the top-left of the current block [0112] the spatial neighboring blocks may include a bottom left corner neighboring block, a left neighboring block, a top right corner neighboring block, a top neighboring block, and a top left corner neighboring block of the current block); and determining the reference block based on costs obtained ([0116] select an optimal merge candidate among merge candidates configuring the merge candidate list based on a rate-distortion (RD) cost) by performing template matching on the current block and a block indicated by each candidate block vector of the block vector candidate list.
Nien discloses wherein determining the reference block comprises: composing a block vector candidate list by using block vectors present at left, top, top-left, top-right, and bottom-left positions of the current block ([0068] With reference to FIG. 1 and FIG. 2, the decoder module 124 determines a plurality of neighboring positions neighboring the block unit. The neighboring positions may be selected from at least one of a plurality of adjacent positions adjacent to the block unit or a plurality of non-adjacent positions non-adjacent to the block unit.); and determining the reference block based on costs obtained by performing template matching on the current block and a block indicated by each candidate block vector of the block vector candidate list ([0072] In some implementations, the at least one block vector predictor may be determined from the predictor candidate list based on at least one predictor index. In some implementations, N block vector predictors may be determined based on first N of the vector predictor candidates. In some implementations, the vector predictor candidates in the predictor candidate list may be sorted based on an initial cost function. [0094] Returning to FIG. 3, at block 350, the decoder module 124 determines a first cost value between the block template region and each of the plurality of candidate template regions. [0108] Returning to FIG. 3, at block 370, the decoder module 124 selects, based on the adjusted difference list, a reference block from the current frame for reconstructing the block unit.).

It would have been obvious to modify the teachings of Jang with the teachings of Lim in order to improve the encoding/decoding efficiency of the palette mode. It would have been obvious to modify the teachings of Jang with the determination of the candidate template regions based on the cost value and template matching as taught by Nien in order to improve coding efficiency.

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jang in view of Lim in further view of Zhu in even further view of Nien.

In regards to claim 9, the limitations of claim 6 have been addressed. Jang fails to explicitly disclose wherein determining the reference block comprises: determining the reference block by applying template matching to the reconstructed region in the current frame.

Nien discloses wherein determining the reference block comprises: determining the reference block by applying template matching to the reconstructed region in the current frame ([0095] With reference to FIG. 1 and FIG. 2, the decoder module 124 may determine a candidate cost value based on the block template region and each of the candidate template regions by using a cost function. With further reference to FIG. 5, since the candidate template regions 5210, 5220, 5230, and 5240 are reconstructed prior to reconstructing the block unit 550, the decoder module 124 may directly receive a plurality of reconstructed samples of the candidate template regions 5210, 5220, 5230, and 5240. The decoder module 124 may derive the candidate cost values between the block template region 5000 and each of the candidate template regions 5210, 5220, 5230, and 5240 by calculating a difference between the reconstructed samples in the block template region 5500 and the reconstructed samples in each of the candidate template regions 5210, 5220, 5230, and 5240. [0108] Returning to FIG. 3, at block 370, the decoder module 124 selects, based on the adjusted difference list, a reference block from the current frame for reconstructing the block unit.). It would have been obvious to modify the teachings of Jang with the determination of the candidate template regions based on the cost value and template matching as taught by Nien in order to improve coding efficiency.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kaitlin A Retallick, whose telephone number is (571) 270-3841. The examiner can normally be reached Monday-Friday, 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KAITLIN A RETALLICK/
Primary Examiner, Art Unit 2482
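The template-matching selection that the Nien passages quoted in the claim 8 and claim 9 rejections describe can be sketched as follows. This is an illustrative sketch under assumed conventions only (an L-shaped template of already-reconstructed samples and a sum-of-absolute-differences cost); the helper names `template` and `select_reference_bv` are hypothetical, and nothing here is taken from the application or the cited references.

```python
def template(recon, x, y, w, h, t=2):
    """Hypothetical helper: the L-shaped template of a w x h block at (x, y)
    is the t reconstructed rows above it plus the t columns to its left.
    `recon` is indexed as recon[row][col]."""
    top = [recon[r][c] for r in range(y - t, y) for c in range(x, x + w)]
    left = [recon[r][c] for r in range(y, y + h) for c in range(x - t, x)]
    return top + left

def select_reference_bv(recon, x, y, w, h, candidates, t=2):
    """For each candidate block vector, compute the sum of absolute
    differences (SAD) between the current block's template and the template
    at the displaced position, then keep the lowest-cost candidate."""
    cur = template(recon, x, y, w, h, t)
    best_bv, best_cost = None, float("inf")
    for bvx, bvy in candidates:
        cand = template(recon, x + bvx, y + bvy, w, h, t)
        cost = sum(abs(a - b) for a, b in zip(cur, cand))  # SAD cost
        if cost < best_cost:
            best_bv, best_cost = (bvx, bvy), cost
    return best_bv
```

Because both templates lie in the already-reconstructed region of the current frame, the decoder can evaluate these costs without any additional signaling, which is the efficiency argument the rejection attributes to Nien.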

Prosecution Timeline

Sep 05, 2024
Application Filed
Sep 26, 2025
Non-Final Rejection — §101, §103, §112
Dec 29, 2025
Response Filed
Jan 22, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602757
SYSTEM AND COMPUTER-IMPLEMENTED METHOD FOR IMAGE DATA QUALITY ASSURANCE IN AN INSTALLATION ARRANGED TO PERFORM ANIMAL-RELATED ACTIONS, COMPUTER PROGRAM AND NON-VOLATILE DATA CARRIER
2y 5m to grant Granted Apr 14, 2026
Patent 12604045
Encoding Control Method and Apparatus, and Decoding Control Method and Apparatus
2y 5m to grant Granted Apr 14, 2026
Patent 12593058
BITSTREAM MERGING
2y 5m to grant Granted Mar 31, 2026
Patent 12587669
MOTION FLOW CODING FOR DEEP LEARNING BASED YUV VIDEO COMPRESSION
2y 5m to grant Granted Mar 24, 2026
Patent 12587678
INFORMATION PROCESSING APPARATUS AND METHOD THEREOF
2y 5m to grant Granted Mar 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
86%
With Interview (+10.7%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 515 resolved cases by this examiner. Grant probability derived from career allow rate.
