Prosecution Insights
Last updated: April 19, 2026
Application No. 18/395,526

ADAPTIVE BILATERAL FILTER IN VIDEO CODING

Non-Final OA: §103, §112, Double Patenting
Filed: Dec 23, 2023
Examiner: RETALLICK, KAITLIN A
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Bytedance Inc.
OA Round: 2 (Non-Final)

Grant Probability: 75% (Favorable)
Estimated OA Rounds: 2-3
Estimated Time to Grant: 2y 7m
Grant Probability With Interview: 86%

Examiner Intelligence

Career Allow Rate: 75% (above average; 388 granted / 515 resolved; +17.3% vs TC avg)
Interview Lift: +10.7% (moderate) among resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 27 applications currently pending
Career History: 542 total applications across all art units

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 58.4% (+18.4% vs TC avg)
§102: 7.0% (-33.0% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 515 resolved cases

Office Action

Rejections: §103, §112, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Application
Claim 4 has been cancelled. Claim 21 has been added. Claims 1-3 and 5-21 are currently pending in this application.

Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/17/2025 was filed after the mailing date of the non-final rejection on 08/05/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Allowable Subject Matter
Claims 5, 19, and 21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Objections
Claim 21 is objected to because of the following informalities: claim 21 refers to claim 19 as a "non-transitory computer-readable recording medium" claim. However, claim 19 is an apparatus claim. Appropriate correction is required.

Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed.
A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 21 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 21 recites all the same limitations as claim 19, upon which it depends, without any additional claim limitation. Thus, claim 21 does not further limit the claims. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 2, 6, 8, 12, 13, 16-18, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5, 11, and 17-20 of U.S. Patent No. 12,542,899 in view of Rusanovskyy et al. (hereafter "Rusanovskyy") [US 2020/0296364 A1]. Although the claims at issue are not identical, they are not patentably distinct from each other because they cover mutually associated subject matter. Thus, a terminal disclaimer is required. An analysis of the claims can be seen in Table 1 below.

Table 1: Instant Application 18/395,526 vs. U.S. Patent No. 12,542,899 (difference emphasis added)

Instant application claims:
- Claim 1: A method for processing video data, comprising: determining, during a conversion between picture data of the video data and a bitstream, to classify the picture data into groups based on statistical information related to the picture data and apply a bilateral filter to filter samples in each of the groups; and performing the conversion based on the bilateral filter, wherein the statistical information includes at least one of: a mean value, a variance, or a size of a video unit.
- Claim 2: The method of claim 1, wherein the picture data includes video units, wherein the groups include a number of video unit groups (Nunit), wherein the statistical information is related to the video units, and wherein the video units are classified into the Nunit video unit groups based on the statistical information.
- Claim 6: The method of claim 1, wherein the picture data includes samples, wherein the groups include a number of sample groups (Nsample), wherein the statistical information is related to the samples, and wherein the samples are classified into the Nsample sample groups based on the statistical information.
- Claim 8: The method of claim 6, wherein the statistical information within a window is used in the classification of the samples.

U.S. Patent No. 12,542,899 claims:
- Claim 1: A method for processing video data comprising: determining, during a conversion between a video and a bitstream, to apply a bilateral filter and a cross component sample adaptive offset (CCSAO) filter to samples in a current block of a current picture of the video, wherein the bilateral filter includes filter weights that vary based on a distance between surrounding samples and a central sample and differences in intensities of the surrounding samples and the central sample, and wherein the CCSAO filter utilizes a component which a current sample belongs to and at least one of two other components of the video, to classify the current sample into different categories; and performing the conversion based on the bilateral filter and the CCSAO filter.
- Claim 2: The method of claim 1, wherein the CCSAO filter is applied in parallel with the bilateral filter.
- Claim 11: The method of claim 1, wherein video units within a higher-level video region are classified into the Nunit video unit groups based on statistical information related to the video units and the bilateral filter is applied to filter samples in each of the groups, and wherein the statistical information includes at least one of: a mean value of the video unit, a variance of the video unit, or a size of a video unit; or alternatively, wherein samples inside a video unit are classified into the Nsample groups based on coded information or statistical information within a window and the bilateral filter is applied to filter samples in each of the groups, and wherein the coded information or statistical information within the window includes a mean value within the window or a variance within the window.

Instant claim 12:
The method of claim 10, wherein the bilateral filter and a second filter are applied independently to the same samples, wherein the bilateral filter produces a first offset, wherein the second filter produces a second offset, and wherein an output sample is produced based on the first offset and the second offset.
- Patent claim 3: The method of claim 2, wherein the bilateral filter and the CCSAO filter receive a same input sample, wherein the bilateral filter generates a bilateral offset based on the input sample, wherein the CCSAO filter generates a CCSAO offset based on the input sample, and wherein an output sample is derived based on the bilateral offset and the CCSAO offset.
- Instant claim 13: The method of claim 12, wherein the output sample is further processed by a next stage.
- Patent claim 5: The method of claim 3, wherein the output sample is further processed by a next stage.
- Instant claim 16: The method of claim 1, wherein the conversion includes encoding the video data into the bitstream.
- Patent claim 17: The method of claim 1, wherein the conversion includes encoding the video into the bitstream.
- Instant claim 17: The method of claim 1, wherein the conversion includes decoding the video data from the bitstream.
- Patent claim 18: The method of claim 1, wherein the conversion includes decoding the video from the bitstream.
- Instant claim 18 is the same as claim 1 but in apparatus form; patent claim 19 is the same as claim 1 but in apparatus form.
- Instant claim 20 is the same as claim 1 but in non-transitory computer-readable recording medium form; patent claim 20 is the same as claim 1 but in non-transitory computer-readable recording medium form.

Some of the differences in the claim limitations in the U.S. Patent are narrower than the instant application, and thus it would have been obvious to make the claim limitations in the instant application broader by removing the specific language found in the U.S. Patent.

The U.S. Patent fails to explicitly disclose wherein the bilateral filter and a second filter are applied independently to the same samples, wherein the bilateral filter produces a first offset, wherein the second filter produces a second offset, and wherein an output sample is produced based on the first offset and the second offset.

Rusanovskyy discloses wherein the bilateral filter and a second filter are applied independently to the same samples ([0062] Video encoder 20 and/or video decoder 30 may apply multiple different in-loop filters independent from one another. In other words, each of the multiple different in-loop filters may include separate parameters that control application of the different in-loop filters, some of which may be redundant in view of parameters provided for other in-loop filters.), wherein the bilateral filter produces a first offset, wherein the second filter produces a second offset, and wherein an output sample is produced based on the first offset and the second offset ([0067] In some instances, the combined SAO and bilateral filtering engine of video coder 20/30 may perform both SAO filtering to obtain first filtered reconstructed samples of the current block and bilateral filtering with respect to the reconstructed samples of the current block of video data to obtain second filtered reconstructed samples of the current block. The combined SAO and bilateral filtering engine may next aggregate or otherwise combine (possibly including weighted multiplication where weights may be derived and signaled or just derived) the first filtered reconstructed samples of the current block and the second reconstructed samples of the current block to obtain the combined filtered reconstructed samples of the current block of video data.).
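The two-offset scheme Rusanovskyy describes in [0062] and [0067], where two filters run independently on the same input samples and the output is derived from both offsets, can be sketched as follows. This is a minimal illustration, not code from either reference; the function and parameter names, and the simple additive combination, are assumptions for demonstration.

```python
def apply_independent_filters(sample, first_offset_fn, second_offset_fn):
    """Run two filters independently on the same input sample.

    Each filter produces its own offset from the unfiltered input; the
    output sample is derived from both offsets (here by simple addition;
    Rusanovskyy also mentions a possible weighted combination).
    """
    first_offset = first_offset_fn(sample)    # e.g., a bilateral offset
    second_offset = second_offset_fn(sample)  # e.g., an SAO offset
    return sample + first_offset + second_offset

# Illustrative offset functions (hypothetical, for demonstration only).
output = apply_independent_filters(
    100,
    first_offset_fn=lambda s: 2,
    second_offset_fn=lambda s: -1,
)
```

Because both filters see the same unfiltered input, neither depends on the other's result, which is the sense in which the filters are "applied independently to the same samples" in instant claim 12.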
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the invention with the teachings of Rusanovskyy in order to improve the video quality without degradation and to improve the coding efficiency [See Rusanovskyy].

Claims 1, 7, 10, 15-18, and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 8, and 17-20 of copending Application No. 18/395,524 in view of BORDES et al. (hereafter "Bordes") [US 2020/0236356 A1]. Although the claims at issue are not identical, they are not patentably distinct from each other because they cover mutually associated subject matter. Thus, a terminal disclaimer is required. An analysis of the claims can be seen in Table 2 below. This is a provisional nonstatutory double patenting rejection.

Table 2: Instant Application 18/395,526 vs. Co-Pending Application No. 18/395,524 (difference emphasis added)

- Instant claim 1: A method for processing video data, comprising: determining, during a conversion between picture data of the video data and a bitstream, to classify the picture data into groups based on statistical information related to the picture data and apply a bilateral filter to filter samples in each of the groups; and performing the conversion based on the bilateral filter, wherein the statistical information includes at least one of: a mean value, a variance, or a size of a video unit.
- Co-pending claim 1:
A method for processing video data, comprising: determining, during a conversion between a current block in a current picture of a video and a bitstream, to apply a bilateral filter to samples in the current block, wherein the bilateral filter includes filter weights that vary based on a distance between surrounding samples and a central sample and differences in intensities of the surrounding samples and the central sample; and performing the conversion based on the bilateral filter, wherein the bilateral filter includes a first operation for luma components and a second operation for two chroma components, wherein the first operation and the second operation are different, and wherein the two chroma components share the same second operation.
- Instant claim 7: The method of claim 6, wherein the bilateral filter employs different parameters when filtering samples assigned to different classes in a video unit.
- Co-pending claim 4: The method of claim 1, wherein different parameters or different look-up tables are used for different samples or different positions in the current block.
- Instant claim 10: The method of claim 1, wherein the bilateral filter is applied at a loop-filtering stage.
- Instant claim 15: The method of claim 10, wherein the bilateral filter is applied to reconstructed samples of a coding block.
- Co-pending claim 8: The method of claim 1, wherein the bilateral filter is applied to reconstructed samples.
- Instant claim 16: The method of claim 1, wherein the conversion includes encoding the video data into the bitstream.
- Co-pending claim 17: The method of claim 1, wherein the conversion includes encoding the video data into the bitstream.
- Instant claim 17: The method of claim 1, wherein the conversion includes decoding the video data from the bitstream.
- Co-pending claim 18: The method of claim 1, wherein the conversion includes decoding the video data from the bitstream.
- Instant claim 18 is the same as claim 1 but in apparatus form; co-pending claim 19 is the same as claim 1 but in apparatus form.
- Instant claim 20 is the same as claim 1 but in non-transitory computer-readable recording medium form.
- Co-pending claim 20 is the same as claim 1 but in non-transitory computer-readable recording medium form.

Some of the differences in the claim limitations in the co-pending application are narrower than the instant application, and thus it would have been obvious to make the claim limitations in the instant application broader by removing the specific language found in the co-pending application.

The co-pending application fails to explicitly disclose determining, during a conversion between picture data of the video data and a bitstream, to classify the picture data into groups based on statistical information related to the picture data and apply a bilateral filter to filter samples in each of the groups, wherein the statistical information includes at least one of: a mean value, a variance, or a size of a video unit; wherein the bilateral filter employs different parameters when filtering samples assigned to different classes in a video unit; wherein the bilateral filter is applied at a loop-filtering stage.

Bordes discloses determining, during a conversion between picture data of the video data and a bitstream ([Abstract] method of encoding a bitstream formatted to include encoded data), to classify the picture data into groups based on statistical information related to the picture data ([0091] Blocks may be classified according to their shape and/or dimensions. [0048] the term "block" is more generally used herein to refer to a block (e.g. a coding block (CB), transform block (TB), coding group (CG), etc.) or a unit (e.g. a CU)) and apply a bilateral filter to filter samples in each of the groups ([0051] The present disclosure is directed to filtering patterns of neighboring samples used in block filtering (e.g., bilateral filtering) that take into consideration the different shapes and properties of the coding blocks. [0060] A block filter 685 may filter the reconstructed block (e.g., using a bilateral filter) after combiner (also called reconstruction module) 665.), wherein the statistical information includes at least one of: a mean value, a variance, or a size of a video unit ([0091] In one embodiment, the selection of the filter pattern may further be a function of the block shape and/or dimensions. For example, some patterns may be enabled or forbidden depending on the block shape. Blocks may be classified according to their shape and/or dimensions, and for each shape/dimension, a predefined set of patterns is possible.); wherein the bilateral filter employs different parameters when filtering samples assigned to different classes in a video unit ([0091] In one embodiment, the selection of the filter pattern may further be a function of the block shape and/or dimensions. For example, some patterns may be enabled or forbidden depending on the block shape. Blocks may be classified according to their shape and/or dimensions, and for each shape/dimension, a predefined set of patterns is possible.); wherein the bilateral filter is applied at a loop-filtering stage ([0004] In block filtering 100, each block is filtered 130 after being accessed 110, decoded and reconstructed 120. [0006] One example of block filtering is a bilateral filter (BLF), which is a non-linear, edge-preserving, and noise-reducing smoothing filter for images. [0060] block filtering 100 may also be utilized in in-loop filter(s) 665 [0063] block filter 685 may be placed in one of: inside the intra prediction module 660, inside in-loop filter(s) module 665, inside both the intra prediction module 660 and the in-loop filter(s) module 665, or inside the combiner module 655).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the invention with the teachings of Bordes in order to improve the performance of the video encoders [See Bordes, 0064].
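Both sets of claims recite a bilateral filter whose weights vary with the spatial distance between surrounding samples and the central sample and with their intensity differences. A minimal sketch of that weighting follows; the Gaussian form and the sigma_d/sigma_r parameters are illustrative assumptions, not taken from the claims or the cited references.

```python
import math

def bilateral_weight(dx, dy, intensity_diff, sigma_d=1.0, sigma_r=10.0):
    """Weight for one surrounding sample: decays with spatial distance
    (dx, dy) from the central sample and with the intensity difference.
    sigma_d and sigma_r are illustrative smoothing parameters."""
    spatial = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma_d * sigma_d))
    rng = math.exp(-(intensity_diff * intensity_diff) / (2.0 * sigma_r * sigma_r))
    return spatial * rng

def filter_sample(block, x, y, radius=1):
    """Bilaterally filter the sample at (x, y) over a square window,
    normalizing by the sum of the weights."""
    center = block[y][x]
    total, weight_sum = 0.0, 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(block) and 0 <= nx < len(block[0]):
                w = bilateral_weight(dx, dy, block[ny][nx] - center)
                total += w * block[ny][nx]
                weight_sum += w
    return total / weight_sum
```

On a flat region every neighbor receives full range weight and the sample is unchanged; across an edge the range term suppresses dissimilar neighbors, which is the edge-preserving, noise-reducing behavior attributed to the bilateral filter (BLF) in the record.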
Response to Arguments
Applicant's arguments, see pages 11-14, filed 11/05/2025, with respect to the rejection(s) of claim(s) 1-3, 6-18, and 20 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of BORDES et al. (hereafter "Bordes") [US 2020/0236356 A1]. On page 14 of the Applicant's Remarks, the Applicant states that claim 21 depends from claim 20. However, claim 21 depends from claim 19 in the claims filed on 11/05/2025.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3, 10, 11, 15-18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over BORDES et al. (hereafter "Bordes") [US 2020/0236356 A1].

In regards to claim 1, Bordes discloses a method for processing video data ([Abstract] Methods (1100, 1300) and apparatuses (600, 1200) for video coding and decoding are provided.), comprising: determining, during a conversion between picture data of the video data and a bitstream ([Abstract] method of encoding a bitstream formatted to include encoded data), to classify the picture data into groups based on statistical information related to the picture data ([0091] Blocks may be classified according to their shape and/or dimensions. [0048] the term "block" is more generally used herein to refer to a block (e.g. a coding block (CB), transform block (TB), coding group (CG), etc.) or a unit (e.g. a CU)) and apply a bilateral filter to filter samples in each of the groups ([0051] The present disclosure is directed to filtering patterns of neighboring samples used in block filtering (e.g., bilateral filtering) that take into consideration the different shapes and properties of the coding blocks. [0060] A block filter 685 may filter the reconstructed block (e.g., using a bilateral filter) after combiner (also called reconstruction module) 665.); and performing the conversion based on the bilateral filter ([0060] A block filter 685 may filter the reconstructed block (e.g., using a bilateral filter) after combiner (also called reconstruction module) 665.), wherein the statistical information includes at least one of a mean value, a variance, or a size of a video unit ([0091] In one embodiment, the selection of the filter pattern may further be a function of the block shape and/or dimensions. For example, some patterns may be enabled or forbidden depending on the block shape. Blocks may be classified according to their shape and/or dimensions, and for each shape/dimension, a predefined set of patterns is possible.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the different embodiments, aspects, and/or examples of Bordes to include the classification of blocks according to their shape and/or dimensions for the selection of the filter pattern as taught by Bordes in order to improve the performance of the video encoders [See Bordes, 0064].

In regards to claim 2, the limitations of claim 1 have been addressed. Bordes discloses wherein the picture data includes video units ([0048] the term "block" is more generally used herein to refer to a block (e.g. a coding block (CB), transform block (TB), coding group (CG), etc.) or a unit (e.g. a CU)), wherein the groups include a number of video unit groups (Nunit) ([0092-0094] Block classes 1-3), wherein the statistical information is related to the video units ([0091] blocks have shape and/or dimensions), and wherein the video units are classified into the Nunit video unit groups based on the statistical information ([0091-0094] Blocks may be classified according to their shape and/or dimensions into block classes 1-3.).

In regards to claim 3, the limitations of claim 2 have been addressed. Bordes discloses wherein the bilateral filter employs different parameters when filtering video units assigned to different classes ([0091] In one embodiment, the selection of the filter pattern may further be a function of the block shape and/or dimensions. For example, some patterns may be enabled or forbidden depending on the block shape. Blocks may be classified according to their shape and/or dimensions, and for each shape/dimension, a predefined set of patterns is possible.).

In regards to claim 10, the limitations of claim 1 have been addressed.
Bordes discloses wherein the bilateral filter is applied at a loop-filtering stage ([0004] In block filtering 100, each block is filtered 130 after being accessed 110, decoded and reconstructed 120. [0006] One example of block filtering is a bilateral filter (BLF), which is a non-linear, edge-preserving, and noise-reducing smoothing filter for images. [0060] block filtering 100 may also be utilized in in-loop filter(s) 665 [0063] block filter 685 may be placed in one of: inside the intra prediction module 660, inside in-loop filter(s) module 665, inside both the intra prediction module 660 and the in-loop filter(s) module 665, or inside the combiner module 655).

In regards to claim 11, the limitations of claim 10 have been addressed. Bordes discloses wherein the bilateral filter is applied before a deblocking filter is applied ([0060 and Fig. 6] A block filter 685 may filter the reconstructed block (e.g., using a bilateral filter) after combiner (also called reconstruction module) 665. An in-loop filter(s) (i.e., a filter within the prediction loop, module 665) may be applied to the block filtered reconstructed picture, including, for example, to perform deblocking/Sample Adaptive Offset (SAO) filtering to reduce coding artifacts. In in-loop filtering, the filtering process may be performed, e.g., after an entire slice or image/picture/frame has been reconstructed, all-in-one, so that the filtered samples can be used for Inter-prediction. Hence, post-filtering 150 may be applied to in-filter(s) 665.).

In regards to claim 15, the limitations of claim 10 have been addressed. Bordes discloses wherein the bilateral filter is applied to reconstructed samples of a coding block ([0060] A block filter 685 may filter the reconstructed block (e.g., using a bilateral filter) after combiner (also called reconstruction module) 665.).

In regards to claim 16, the limitations of claim 11 have been addressed.
Bordes discloses wherein the conversion includes encoding the video data into the bitstream ([Abstract] The method of video encoding wherein a bitstream is formatted to include encoded data. [0057-0059] The encoder encodes a video sequence and outputs a bitstream.).

In regards to claim 17, the limitations of claim 1 have been addressed. Bordes discloses wherein the conversion includes decoding the video data from the bitstream ([0134] The decoder receives a video bitstream as input and obtains the decoded video data.).

Claim 18 lists all the same elements of claim 1, but in apparatus form rather than method form. Therefore, the supporting rationale of the rejection to claim 1 applies equally as well to claim 18. Regarding claim 18, Bordes discloses an apparatus for processing video data, comprising: a processor ([0061] The modules of video encoder 600 may be implemented in software and executed by a processor, or may be implemented by well-known circuits by one skilled in the art of compression.); and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor ([0165] The system 1400 may include at least one processor 1410 configured to execute instructions loaded therein for implementing the various processes as discussed above. Processor 1410 may include embedded memory, input output interface and various other circuitries as known in the art. [0178] Moreover, any of the methods 1100 and/or 1300 may be implemented as a computer program product (independently or jointly) comprising computer executable instructions which may be executed by a processor. The computer program product having the computer-executable instructions may be stored in the respective transitory or non-transitory computer-readable storage media of the system 1400, encoder 600 and/or decoder 1200.).

Claim 20 lists all the same elements of claim 1, but in non-transitory computer-readable recording medium form rather than method form.
Therefore, the supporting rationale of the rejection to claim 1 applies equally as well to claim 20. Claim(s) 6-9 and 12-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bordes in view of Rusanovskyy et al. (Hereafter, “Rusanovskyy”) [US 2020/0296364 A1]. In regards to claim 6, the limitations of claim 1 have been addressed. Bordes discloses wherein the picture data includes samples ([0076] sample in block), wherein the groups include a number of sample groups (Nsample) ([0084] group of samples (e.g., 4x4 sub-block), wherein the statistical information is related to the samples, and wherein the samples are classified into the Nsample sample groups based on the statistical information ([0076] According to the present disclosure, pattern diversity is introduced for performing the block filtering 685 and for computing the filter weights. The pattern may be explicitly signaled in the stream, or it may be inferred or determined at the decoder depending on the characteristics of reconstructed samples, using pixel classification for instance. The pattern may be fixed for a current picture, for a current block, for all samples in a block, or may change for each picture, each block, each sub-block (e.g., 4×4 sub-block) and/or each sample in a block. [0084] The filter pattern shape may vary per sequence, per picture, per slice or per block. Advantageously, the filter pattern shape may vary inside the block, per sample or per group of samples (e.g., 4×4 sub-block). The selection may be made at the encoder, for instance, based on the computation of the local reconstructed signal properties (such as the local gradients). [0091] In one embodiment, the selection of the filter pattern may further be a function of the block shape and/or dimensions. For example, some patterns may be enabled or forbidden depending on the block shape. 
Blocks may be classified according to their shape and/or dimensions, and for each shape/dimension, a predefined set of patterns is possible.). Rusanovskyy discloses wherein the picture data includes samples ([0019] multiple blocks of video data [0036] In HEVC and other video coding specifications, video data includes a series of pictures. [0038] video encoder 20 may encode blocks of the picture [0057] In HEVC, the region (the unit for SAO parameters signaling) is defined, as one example, to be a coding tree unit (CTU).), wherein the groups include a number of sample groups (Nsample) ([0057] classifying the region samples into multiple categories with a selected classifier), wherein the statistical information is related to the samples ([0057] mean sample distortion of a region, selected classifier), and wherein the samples are classified into the Nsample sample groups based on the statistical information ([0057] classifying the region samples into multiple categories with a selected classifier). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bordes with the teachings of Rusanovskyy in order to improve the video quality without degradation and to improve the coding efficiency [See Rusanovskyy].

In regards to claim 7, the limitations of claim 6 have been addressed. Bordes discloses wherein the bilateral filter employs different parameters when filtering samples assigned to different classes in a video unit ([0076] According to the present disclosure, pattern diversity is introduced for performing the block filtering 685 and for computing the filter weights. The pattern may be explicitly signaled in the stream, or it may be inferred or determined at the decoder depending on the characteristics of reconstructed samples, using pixel classification for instance.
The pattern may be fixed for a current picture, for a current block, for all samples in a block, or may change for each picture, each block, each sub-block (e.g., 4×4 sub-block) and/or each sample in a block.). Rusanovskyy discloses wherein the bilateral filter employs different parameters when filtering samples assigned to different classes in a video unit ([0167] Filter unit 216 may implement the BIF according to JVET-J0021 in which case the BIF is applied in the reconstruction samples domain as an additional stage preceding in-loop filters. In some examples, filter unit 216 may explicitly derive the filter parameters of BIF, e.g., weights from the coded information. [0170] To better capture statistical properties of the video signal, and potentially improve performance of the BIF, filter unit 216 may adjust weight functions set forth in Equation (2) by the α_d parameter, tabulated in a Table that filter unit 216 may provide to video decoder 30 as side information and being dependent on coding mode and parameters of block partitioning (minimal size). [0185] Moreover, filter unit 216 may filter samples I_a, I_c as outlined below: [0186] Depending on the SAO classification index (edgeIdx), a 1D or 2D bilateral filter can be applied: [0187] In some instances, filtering can be implemented as a bilateral filter process: I′_c = I_c + ((w_a·ΔI_a + w_b·ΔI_b + c_1) >> c_2), [0188] where c_1 and c_2 are system-defined integer values.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bordes with the teachings of Rusanovskyy in order to improve the video quality without degradation and to improve the coding efficiency [See Rusanovskyy].

In regards to claim 8, the limitations of claim 6 have been addressed.
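For orientation, the filtering relation Rusanovskyy quotes at [0187], I′_c = I_c + ((w_a·ΔI_a + w_b·ΔI_b + c_1) >> c_2), can be sketched in integer arithmetic as follows. This is a minimal illustration only: the variable names, the reading of ΔI_a/ΔI_b as neighbour-minus-centre differences, and the values of c_1 and c_2 are assumptions (the reference says only that c_1 and c_2 are system-defined integers).

```python
def bilateral_offset(i_c, i_a, i_b, w_a, w_b, c1=4, c2=3):
    """Per-sample update following I'_c = I_c + ((w_a*dI_a + w_b*dI_b + c1) >> c2).

    dI_a/dI_b are taken here as neighbour-minus-centre differences;
    c1 (rounding) and c2 (shift) are illustrative placeholder values.
    """
    d_a = i_a - i_c  # difference to neighbour sample a
    d_b = i_b - i_c  # difference to neighbour sample b
    return i_c + ((w_a * d_a + w_b * d_b + c1) >> c2)

# With equal neighbours the update leaves the centre sample unchanged:
# bilateral_offset(100, 100, 100, 3, 3) -> 100
```

Note that Python's `>>` on a negative sum floors toward negative infinity, matching the arithmetic right shift typically intended in such fixed-point filter formulas.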
Bordes discloses wherein the statistical information within a window is used in the classification of the samples ([0076] According to the present disclosure, pattern diversity is introduced for performing the block filtering 685 and for computing the filter weights. The pattern may be explicitly signaled in the stream, or it may be inferred or determined at the decoder depending on the characteristics of reconstructed samples, using pixel classification for instance. The pattern may be fixed for a current picture, for a current block, for all samples in a block, or may change for each picture, each block, each sub-block (e.g., 4×4 sub-block) and/or each sample in a block. [0084] The filter pattern shape may vary per sequence, per picture, per slice or per block. Advantageously, the filter pattern shape may vary inside the block, per sample or per group of samples (e.g., 4×4 sub-block). The selection may be made at the encoder, for instance, based on the computation of the local reconstructed signal properties (such as the local gradients). [0091] In one embodiment, the selection of the filter pattern may further be a function of the block shape and/or dimensions. For example, some patterns may be enabled or forbidden depending on the block shape. Blocks may be classified according to their shape and/or dimensions, and for each shape/dimension, a predefined set of patterns is possible.).

In regards to claim 9, the limitations of claim 8 have been addressed. Bordes discloses wherein a shape of the window is a square shape ([0076] According to the present disclosure, pattern diversity is introduced for performing the block filtering 685 and for computing the filter weights. The pattern may be explicitly signaled in the stream, or it may be inferred or determined at the decoder depending on the characteristics of reconstructed samples, using pixel classification for instance.
The pattern may be fixed for a current picture, for a current block, for all samples in a block, or may change for each picture, each block, each sub-block (e.g., 4×4 sub-block) and/or each sample in a block. [0084] The filter pattern shape may vary per sequence, per picture, per slice or per block. Advantageously, the filter pattern shape may vary inside the block, per sample or per group of samples (e.g., 4×4 sub-block). The selection may be made at the encoder, for instance, based on the computation of the local reconstructed signal properties (such as the local gradients). [0091] In one embodiment, the selection of the filter pattern may further be a function of the block shape and/or dimensions. For example, some patterns may be enabled or forbidden depending on the block shape. Blocks may be classified according to their shape and/or dimensions, and for each shape/dimension, a predefined set of patterns is possible.).

In regards to claim 12, the limitations of claim 10 have been addressed. Bordes fails to explicitly disclose wherein the bilateral filter and a second filter are applied independently to the same samples, wherein the bilateral filter produces a first offset, wherein the second filter produces a second offset, and wherein an output sample is produced based on the first offset and the second offset. Rusanovskyy discloses wherein the bilateral filter and a second filter are applied independently to the same samples ([0062] Video encoder 20 and/or video decoder 30 may apply multiple different in-loop filters independent from one another.
In other words, each of the multiple different in-loop filters may include separate parameters that control application of the different in-loop filters, some of which may be redundant in view of parameters provided for other in-loop filters.), wherein the bilateral filter produces a first offset, wherein the second filter produces a second offset, and wherein an output sample is produced based on the first offset and the second offset ([0067] In some instances, the combined SAO and bilateral filtering engine of video coder 20/30 may perform both SAO filtering to obtain first filtered reconstructed samples of the current block and bilateral filtering with respect to the reconstructed samples of the current block of video data to obtain second filtered reconstructed samples of the current block. The combined SAO and bilateral filtering engine may next aggregate or otherwise combine (possibly including weighted multiplication where weights may be derived and signaled or just derived) the first filtered reconstructed samples of the current block and the second reconstructed samples of the current block to obtain the combined filtered reconstructed samples of the current block of video data.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bordes with the teachings of Rusanovskyy in order to improve the video quality without degradation and to improve the coding efficiency [See Rusanovskyy].

In regards to claim 13, the limitations of claim 12 have been addressed. Bordes fails to explicitly disclose wherein the output sample is further processed by a next stage. Rusanovskyy discloses wherein the output sample is further processed by a next stage ([0099] Video encoder 200 stores reconstructed blocks in DPB 218. In examples where operations of filter unit 216 are needed, filter unit 216 may store the filtered reconstructed blocks to DPB 218.
Motion estimation unit 222 and motion compensation unit 224 may retrieve a reference picture from DPB 218, formed from the reconstructed (and potentially filtered) blocks, to inter-predict blocks of subsequently encoded pictures. In addition, intra-prediction unit 226 may use reconstructed blocks in DPB 218 of a current picture to intra-predict other blocks in the current picture.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bordes with the teachings of Rusanovskyy in order to improve the video quality without degradation and to improve the coding efficiency [See Rusanovskyy].

In regards to claim 14, the limitations of claim 10 have been addressed. Bordes fails to explicitly disclose wherein the bilateral filter is applied to prediction samples before generating reconstructed samples. Rusanovskyy discloses wherein the bilateral filter is applied to prediction samples before generating reconstructed samples ([0005] In general, this disclosure describes filtering techniques that may be used in a post-processing stage, as part of in-loop coding, or in a prediction stage of video coding.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Bordes with the teachings of Rusanovskyy in order to improve the video quality without degradation and to improve the coding efficiency [See Rusanovskyy].

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kaitlin A Retallick whose telephone number is (571)270-3841. The examiner can normally be reached Monday-Friday 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KAITLIN A RETALLICK/
Primary Examiner, Art Unit 2482
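As an illustration of the independent-filtering scheme cited against claims 12-13 above (Rusanovskyy [0067]: SAO and bilateral filtering each applied to the same reconstructed samples, with the two results combined), the following sketch recovers each filter's offset against the shared input and sums them. The function name, the equal-weight sum, and the final clip are assumptions for illustration; the reference also contemplates derived or signalled combination weights.

```python
def combine_independent_filters(recon, sao_out, bif_out, bit_depth=8):
    """Combine two independently applied filters by summing their offsets.

    recon:   reconstructed samples both filters received as input
    sao_out: samples after SAO filtering alone (first offset source)
    bif_out: samples after bilateral filtering alone (second offset source)
    The equal-weight sum and clip to the sample range are assumptions.
    """
    max_val = (1 << bit_depth) - 1
    combined = []
    for r, s, b in zip(recon, sao_out, bif_out):
        off_sao = s - r  # first offset (SAO stage)
        off_bif = b - r  # second offset (bilateral stage)
        combined.append(min(max(r + off_sao + off_bif, 0), max_val))
    return combined
```

Because each offset is measured against the same unfiltered input, neither filter sees the other's output, which matches the "independent from one another" characterization in the cited paragraph [0062].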

Prosecution Timeline

Dec 23, 2023
Application Filed
Jul 31, 2025
Non-Final Rejection — §103, §112, §DP
Nov 05, 2025
Response Filed
Jan 28, 2026
Non-Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602757
SYSTEM AND COMPUTER-IMPLEMENTED METHOD FOR IMAGE DATA QUALITY ASSURANCE IN AN INSTALLATION ARRANGED TO PERFORM ANIMAL-RELATED ACTIONS, COMPUTER PROGRAM AND NON-VOLATILE DATA CARRIER
2y 5m to grant · Granted Apr 14, 2026
Patent 12604045
Encoding Control Method and Apparatus, and Decoding Control Method and Apparatus
2y 5m to grant · Granted Apr 14, 2026
Patent 12593058
BITSTREAM MERGING
2y 5m to grant · Granted Mar 31, 2026
Patent 12587669
MOTION FLOW CODING FOR DEEP LEARNING BASED YUV VIDEO COMPRESSION
2y 5m to grant · Granted Mar 24, 2026
Patent 12587678
INFORMATION PROCESSING APPARATUS AND METHOD THEREOF
2y 5m to grant · Granted Mar 24, 2026
Based on the 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
75%
Grant Probability
86%
With Interview (+10.7%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 515 resolved cases by this examiner. Grant probability derived from career allow rate.