Prosecution Insights
Last updated: April 19, 2026
Application No. 19/025,888

CONTENT ADAPTIVE DEBLOCKING DURING VIDEO ENCODING AND DECODING

Non-Final OA: §102, §112, §DP
Filed
Jan 16, 2025
Examiner
LE, PETER D
Art Unit
2488
Tech Center
2400 — Computer Networks
Assignee
Microsoft Technology Licensing, LLC
OA Round
1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 80% (above average; +22.1% vs TC avg; 491 granted / 613 resolved)
Interview Lift: +16.9% (strong), measured on resolved cases with interview
Typical Timeline: 2y 8m avg prosecution; 35 currently pending
Career History: 648 total applications across all art units
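The headline figures in this panel can be reproduced from the raw counts. A quick sanity check in Python, under the stated assumption that the 97% with-interview figure is simply the career allow rate plus the +16.9-point interview lift:

```python
# Reproduce the examiner statistics shown above from the raw counts.
granted, resolved = 491, 613

career_allow_rate = 100 * granted / resolved   # as a percentage
print(f"Career allow rate: {career_allow_rate:.1f}%")   # 80.1%

# Assumed relationship: "with interview" = allow rate + interview lift.
interview_lift = 16.9                          # percentage points
with_interview = career_allow_rate + interview_lift
print(f"With interview: {with_interview:.0f}%")         # 97%
```

Both printed values round to the figures the dashboard shows, which suggests the projections are straight arithmetic on the career data rather than a separate model.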

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 49.5% (+9.5% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 613 resolved cases.
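Taken at face value, the four "vs TC avg" deltas above all point back to the same Tech Center baseline. A small consistency check (the rates and deltas are copied from the table; the implied 40% baseline is an inference, not a figure the page states):

```python
# Examiner overcome-rate per statute (%) and reported delta vs the
# Tech Center average (percentage points), copied from the table above.
stats = {
    "§101": (4.7, -35.3),
    "§103": (49.5, +9.5),
    "§102": (17.7, -22.3),
    "§112": (11.6, -28.4),
}

# Implied Tech Center average = examiner rate - delta.
for statute, (rate, delta) in stats.items():
    print(statute, round(rate - delta, 1))   # 40.0 in every case
```

Every statute recovers the same 40.0% baseline, consistent with a single Tech Center average estimate being used for all four comparisons.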

Office Action

§102 §112 §DP
Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

The Preliminary Amendment, filed 01/31/2025, has been entered. Claim 1 is cancelled. Claims 2-21 are pending.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 2, 10 and 15 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of U.S. Patent Nos. 12,267,519; 11,843,797; 11,528,499; 10,965,950; and 8,787,443. Although the conflicting claims are not identical, they are not patentably distinct from each other because the instant claims are similar to the claims of the U.S. patents and meet the limitations claimed in those patents. Table 1 shows a comparison between the instant claims and the U.S. patent claims. This is a non-provisional obviousness-type double patenting rejection because the conflicting claims have in fact been patented.

Table 1: Comparison of claims in instant Application No. 19/025888 vs. U.S. Patent Nos. 12,267,519; 11,843,797; 11,528,499; 10,965,950; and 8,787,443

Appl. 19/025888, claim 2:
A computer system comprising one or more processing units and memory, wherein the computer system implements a video decoder configured to perform operations comprising: receiving encoded data for a video frame; reconstructing, using the encoded data, the video frame; applying a deblocking filter to at least one component of the reconstructed video frame, including applying the deblocking filter to luminance values of the reconstructed video frame, thereby producing a deblocked, reconstructed video frame; and based at least in part on a flag, in the encoded data, for part of the video frame: for a block in the deblocked, reconstructed video frame, determining edge locations throughout the block based at least in part on analysis of pixel values of the block in the deblocked, reconstructed video frame; selecting a filter from two or more candidate filters associated with different edge orientations, the two or more candidate filters including a candidate filter associated with a horizontal edge orientation, a candidate filter associated with a vertical edge orientation, and multiple candidate filters associated with different diagonal edge orientations; and selectively applying the selected filter to the block.

Appl. 18/386960 (US Pat. 12,267,519), claim 1:
In a computer system that implements a video decoder, a method comprising: receiving encoded data for a video frame; reconstructing, using the encoded data, the video frame; applying a deblocking filter to at least one component of the reconstructed video frame, including applying the deblocking filter to luminance values of the reconstructed video frame, thereby producing a deblocked, reconstructed video frame; for a block in the deblocked, reconstructed video frame, determining edge locations throughout the block based at least in part on analysis of pixel values of the block in the deblocked, reconstructed video frame; selecting a filter from two or more candidate filters associated with different edge orientations, the two or more candidate filters including a candidate filter associated with a horizontal edge orientation, a candidate filter associated with a vertical edge orientation, and multiple candidate filters associated with different diagonal edge orientations; and selectively applying the selected filter to the block.

Appl. 17/983263 (US Pat. 11,843,797), claim 1:
One or more non-transitory computer-readable media having stored thereon computer-executable instructions for causing one or more processing units, when programmed thereby, to perform operations comprising: receiving, in a bitstream for at least part of a video sequence, encoded data for a video frame of the video sequence; reconstructing, using the encoded data, the video frame; buffering the reconstructed video frame; applying a deblocking filter to at least one component of the reconstructed video frame, including applying the deblocking filter to luminance values of the reconstructed video frame, thereby producing a deblocked, reconstructed video frame; for a block in the deblocked, reconstructed video frame, determining edge locations throughout the block based at least in part on analysis of pixel values of the block in the deblocked, reconstructed video frame; selecting a filter from two or more candidate filters associated with different edge orientations, the two or more candidate filters including a candidate filter associated with a horizontal edge orientation, a candidate filter associated with a vertical edge orientation, and multiple candidate filters associated with different diagonal edge orientations; and selectively applying the selected filter to the block.

Appl. 17/188784 (US Pat. 11,528,449), claim 1:
In a computer system comprising one or more processing units and memory, the computer system implementing a video decoder, a method comprising: receiving, in a bitstream for at least part of a video sequence, encoded data for a video frame of the video sequence; reconstructing, using the encoded data, the video frame; buffering the reconstructed video frame; applying a deblocking filter to at least one component of the reconstructed video frame, including applying the deblocking filter to luminance values of the reconstructed video frame, producing a deblocked, reconstructed video frame; for a block in the deblocked, reconstructed video frame, determining edge locations throughout the block based at least in part on analysis of pixel values of the block in the deblocked, reconstructed video frame; selecting a filter from two or more candidate filters associated with different edge orientations, the two or more candidate filters including a candidate filter associated with a horizontal edge orientation, a candidate filter associated with a vertical edge orientation, and multiple candidate filters associated with different diagonal edge orientations; and selectively applying the selected filter to the block.

Appl. 16/404534 (US Pat. 10,965,950), claim 1:
A method for reducing block artifacts during video compression, comprising: buffering a video frame reconstructed during block-based motion-predictive encoding; applying a deblocking filter to at least one component of the reconstructed video frame, including applying the deblocking filter to luminance values of the reconstructed video frame, producing a deblocked, reconstructed video frame; determining edge locations and edge orientations throughout one or more blocks in the deblocked, reconstructed video frame based at least in part on analysis of pixel values of the one or more blocks in the deblocked, reconstructed video frame; for a selected block of the one or more blocks, selecting a filter from two or more candidate filters based at least in part on the edge orientations in the selected block; and applying the selected filter to the selected block.

Appl. 12/924836 (US Pat. 8,787,443), claim 1:
A method for reducing block artifacts during video compression or decompression, comprising: buffering a video frame reconstructed during block-based motion-predictive encoding or decoding; determining edge locations and edge orientations throughout one or more blocks in the video frame; for a selected block of the one or more blocks in the video frame, selecting a deblocking filter from two or more candidate deblocking filters based at least in part on the edge orientations in the selected block, the candidate deblocking filters including two or more directional deblocking filters adapted to reduce blockiness at a block boundary while also maintaining a real edge of a directional feature of the video frame at the block boundary, each of the directional deblocking filters being adapted for a different non-horizontal and non-vertical orientation of the directional feature of the video frame, each of the directional deblocking filters further comprising a filter bank of multiple filters in which the filters in the filter bank have the same non-horizontal and non-vertical orientation as one another; and applying the selected deblocking filter to the selected block and one or more blocks neighboring the selected block.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The limitation “to facilitate decoding” does not explicitly describe the functional relationship between the stored encoded data and the “decoding or decoder” operations. Therefore, the scope of the “decoding or decoder” operations cannot be determined.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of application for patent in the United States.

Claims 10-14 are rejected under 35 U.S.C. 102(b) as being anticipated by Yamaguchi et al. (“Yamaguchi”) [U.S. Patent Application Pub.
2007/00140574 A1].

Regarding claim 10, Yamaguchi meets the claim limitations as follows: One or more non-transitory computer-readable media having stored thereon encoded data for a video frame, the encoded data being organized to facilitate decoding, using a computer-implemented video decoder, with operations comprising [para. 0012, claim 19: ‘a computer readable medium that stores a computer program for causing a computer to decode image data’. Note: To be given patentable weight, the recording medium and the encoded data or bitstream (i.e., descriptive material) must be in a functional relationship. A functional relationship can be found where the descriptive material performs some function with respect to the recording medium with which it is associated. See MPEP §2111.05(I)(A). When a claimed “computer-readable medium merely serves as a support for information or data, no functional relationship exists”. MPEP §2111.05(III). The storage medium storing the claimed encoded data merely serves as a support for the storage of the bitstream data and provides no functional relationship between the stored bitstream and the storage medium. Therefore the structure of the encoded data, whose scope is implied by the method steps, is non-functional descriptive material and is given no patentable weight. MPEP §2111.05(III).]: reconstructing, using the encoded data, the video frame; applying a deblocking filter to at least one component of the reconstructed video frame, including applying the deblocking filter to luminance values of the reconstructed video frame, thereby producing a deblocked, reconstructed video frame; and based at least in part on a flag, in the encoded data, for part of the video frame: for a block in the deblocked, reconstructed video frame, determining edge locations throughout the block based at least in part on analysis of pixel values of the block in the deblocked, reconstructed video frame; selecting a filter from two or more candidate filters associated with different edge orientations, the two or more candidate filters including a candidate filter associated with a horizontal edge orientation, a candidate filter associated with a vertical edge orientation, and multiple candidate filters associated with different diagonal edge orientations; and selectively applying the selected filter to the block.

Regarding claims 11-14, dependent on claim 10, all claim limitations are rejected as per the discussion for claim 10.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form 892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER D LE whose telephone number is (571)270-5382. The examiner can normally be reached on Monday - Alternate Friday: 10AM-6:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH PERUNGAVOOR, can be reached on 571-272-7455.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PETER D LE/ Primary Examiner, Art Unit 2488
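Every claim in the Office Action's Table 1 recites the same core loop: estimate the edge orientation inside a deblocked block from its pixel values, choose among candidate filters for horizontal, vertical, and diagonal edge orientations, and apply the chosen filter to the block. A minimal sketch of that selection step follows; the variation measure and the four candidate orientations are illustrative assumptions, not taken from the application or from the Yamaguchi reference:

```python
# Candidate filter orientations, keyed to a (dy, dx) step direction.
DIRECTIONS = {
    "horizontal": (0, 1),
    "vertical": (1, 0),
    "diagonal_down": (1, 1),    # top-left to bottom-right
    "diagonal_up": (-1, 1),     # bottom-left to top-right
}

def variation_along(block, dy, dx):
    """Total absolute pixel change when stepping along (dy, dx).
    Pixel values are nearly constant along a real edge's orientation."""
    h, w = len(block), len(block[0])
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                total += abs(block[y2][x2] - block[y][x])
    return total

def select_filter(block):
    """Pick the candidate orientation with the least variation, i.e.
    the direction that best matches the block's dominant edge."""
    return min(DIRECTIONS,
               key=lambda name: variation_along(block, *DIRECTIONS[name]))

# A block whose left half is dark and right half bright has a vertical edge.
block = [[10, 10, 200, 200] for _ in range(4)]
print(select_filter(block))   # prints "vertical"
```

Selecting by minimum variation along a direction mirrors the rationale the claims express: pixel values change least along a real edge, so the matching directional filter can smooth block-boundary artifacts without blurring that edge.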

Prosecution Timeline

Jan 16, 2025
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582306
SCANNER FOR DENTAL TREATMENT, AND DATA TRANSMISSION METHOD OF SAME
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585104
IMAGE PICKUP MODULE, ENDOSCOPE, AND METHOD FOR MANUFACTURING IMAGE PICKUP MODULE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12574478
SECURITY OPERATIONS OF PARKED VEHICLES
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12568184
TECHNIQUES TO GENERATE INTERPOLATED VIDEO FRAMES
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12568210
METHOD AND DEVICE FOR ENCODING/DECODING IMAGE, AND RECORDING MEDIUM IN WHICH BITSTREAM IS STORED
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 97% (+16.9%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 613 resolved cases by this examiner. Grant probability derived from career allow rate.
