Prosecution Insights
Last updated: April 19, 2026
Application No. 17/698,705

Filter Flags for Subpicture Deblocking

Status: Non-Final OA (§103)
Filed: Mar 18, 2022
Examiner: CHANG, DANIEL
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Huawei Technologies Co., Ltd.
OA Round: 6 (Non-Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 6-7
Time to Grant: 2y 10m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 64% (233 granted / 367 resolved; +5.5% vs TC avg)
Interview Lift: +13.0% (moderate; resolved cases with vs. without interview)
Typical Timeline: 2y 10m average prosecution; 45 currently pending
Career History: 412 total applications across all art units

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 51.4% (+11.4% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)
Based on career data from 367 resolved cases; TC averages are estimates.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to the remarks entered on 12/04/2025. Claims 1-4 and 21-36 are pending in the instant application. Claims 5-20 are cancelled.

Response to Arguments

Applicant's remarks filed 12/04/2025, page 11, regarding the rejection of claim 1, and similar claims 4, 23, 26, 29, and 32, under 35 U.S.C. § 103 have been fully considered, and are moot upon further consideration and a new ground of rejection made under 35 U.S.C. § 103 as being unpatentable over Chen et al. (US 2015/0085929 A1) (hereinafter Chen) in view of Coban et al., "Support of Independent Sub-Pictures," 9th JCT-VC Meeting, 27 April - 7 May 2012, Geneva (Joint Collaborative Team on Video Coding of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11), no. JCTVC-I0356, 17 April 2012 (hereinafter Coban), and further in view of Lai et al. (US 2022/0303587 A1, with priority to 62/896,032) (hereinafter Lai), as outlined below.

In response to Applicant's remark that the Examiner's previously cited references, specifically Choi, do not show Applicant's newly recited claim limitations, the Examiner directs Applicant's attention to the rejection of claim 1 below, wherein Applicant's newly recited limitations are addressed by Coban for the reasons outlined below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 21-34, and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Coban, and further in view of Lai.

Regarding claim 1, Chen discloses a method implemented by a video decoder [Paragraphs [0325]-[0328], video decoder 30 performing the decoding process] and comprising: receiving, by the video decoder, a video bitstream comprising a picture, a first flag, and a second flag, wherein the picture comprises a first subpicture and a second subpicture, wherein a first boundary of the first subpicture is shared with a second boundary of the second subpicture, and wherein the first boundary is a right boundary and the second boundary is a left boundary, or the first boundary is a bottom boundary and the second boundary is a top boundary [Paragraphs [0083], [0117], and [0249]-[0250], Fig. 8, decoder receiving a bitstream, decoding syntax elements as flags that include first and second flags, and decoding video data containing slices, or CTUs, as subpictures, with boundaries between P0/P1 and P0/Q0]; and applying the deblocking filter process to second subblock edges and second transform block edges of the second subpicture at the second boundary [Paragraphs [0044]-[0045], [0113], and [0129], deblocking filter processes applied to sub-PU boundaries, as subblock edges, and transform unit (TU) boundaries, as transform block edges].

However, Chen does not explicitly disclose wherein the first subpicture and the second subpicture are rectangular regions of one or more slices within the picture. Coban teaches wherein the first subpicture and the second subpicture are rectangular regions of one or more slices within the picture [I. Introduction, Fig. 1, partitioning a picture into rectangular sub-pictures (sub-pic 0, sub-pic 1, sub-pic 2), with each sub-picture starting a new slice]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Chen to integrate the rectangular subpictures of Coban as above, in order to support more flexible parallelism for multiple independent decoders that do not share reference picture regions or use neighboring regions' (e.g., neighboring tiles') decoded pixel samples (Coban, Abstract).
However, Chen and Coban do not explicitly disclose receiving, by the video decoder, a video bitstream comprising a picture, a first loop_filter_across_subpic_enabled_flag, and a second loop_filter_across_subpic_enabled_flag, wherein the picture comprises a first subpicture corresponding to the first loop_filter_across_subpic_enabled_flag and a second subpicture corresponding to the second loop_filter_across_subpic_enabled_flag; not applying a deblocking filter process to first subblock edges and first transform block edges of the first subpicture at the first boundary when the first loop_filter_across_subpic_enabled_flag is equal to 0; and applying the deblocking filter process to second subblock edges and second transform block edges of the second subpicture at the second boundary when the second loop_filter_across_subpic_enabled_flag is equal to 1.

Lai teaches receiving, by the video decoder, a video bitstream comprising a picture, a first loop_filter_across_subpic_enabled_flag, and a second loop_filter_across_subpic_enabled_flag, wherein the picture comprises a first subpicture corresponding to the first loop_filter_across_subpic_enabled_flag and a second subpicture corresponding to the second loop_filter_across_subpic_enabled_flag [Paragraphs [0088]-[0089], supported at pgs. 3-4 of the provisional: loop_filter_across_subpic_enabled_flag[i] equal to 1 specifies that in-loop filtering operations may be performed across the boundaries of the i-th sub-picture in each coded picture in the CVS, wherein i is an integer from 0 to NumSubPics within a for loop, with the corresponding i-th sub-picture having its respective loop_filter_across_subpic_enabled_flag[i]]; not applying a deblocking filter process to first subblock edges and first transform block edges of the first subpicture at the first boundary when the first loop_filter_across_subpic_enabled_flag is equal to 0 [Paragraphs [0088]-[0089], supported at pgs. 3-4 of the provisional: loop_filter_across_subpic_enabled_flag[1] equal to 0 specifies that in-loop filtering operations are not performed across the boundaries of the 1-th, or first, sub-picture in each coded picture in the CVS]; and applying the deblocking filter process to second subblock edges and second transform block edges of the second subpicture at the second boundary when the second loop_filter_across_subpic_enabled_flag is equal to 1 [Paragraphs [0088]-[0089], supported at pgs. 3-4 of the provisional: loop_filter_across_subpic_enabled_flag[2] equal to 1 specifies that in-loop filtering operations may be performed across the boundaries of the 2-th, or second, sub-picture in each coded picture in the CVS]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Chen to integrate and implement the loop_filter_across_subpic_enabled_flag syntax of Lai as above, in order to improve and enhance video picture quality (Lai, Paragraph [0005]).

Regarding claim 2, Chen, Coban, and Lai disclose the method of claim 1, analyzed as previously discussed with respect to that claim. Furthermore, Lai teaches wherein the first loop_filter_across_subpic_enabled_flag equal to 1 or the second loop_filter_across_subpic_enabled_flag equal to 1 specifies that in-loop filtering operations may be performed across boundaries of a subpicture in each coded picture in a coded video sequence (CVS) [Paragraphs [0088]-[0089], supported at pgs. 3-4 of the provisional: loop_filter_across_subpic_enabled_flag[i] equal to 1 specifies that in-loop filtering operations may be performed across the boundaries of the i-th sub-picture in each coded picture in the CVS].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Chen to integrate and implement the loop_filter_across_subpic_enabled_flag syntax of Lai as above, in order to improve and enhance video picture quality (Lai, Paragraph [0005]).

Regarding claim 3, Chen, Coban, and Lai disclose the method of claim 1, analyzed as previously discussed with respect to that claim. Furthermore, Lai teaches wherein the first loop_filter_across_subpic_enabled_flag equal to 0 or the second loop_filter_across_subpic_enabled_flag equal to 0 specifies that in-loop filtering operations are not performed across boundaries of a subpicture in each coded picture in a coded video sequence (CVS) [Paragraphs [0088]-[0089], supported at pgs. 3-4 of the provisional: loop_filter_across_subpic_enabled_flag[i] equal to 0 specifies that in-loop filtering operations are not performed across the boundaries of the i-th sub-picture in each coded picture in the CVS]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Chen to integrate and implement the loop_filter_across_subpic_enabled_flag syntax of Lai as above, in order to improve and enhance video picture quality (Lai, Paragraph [0005]).

Regarding claims 4 and 21-22, these claims are drawn to a video decoder using/performing the same method as claimed in claims 1-3. Therefore, claims 4 and 21-22 correspond to method claims 1-3, respectively, and are rejected for the same reasons of obviousness as used above.
Furthermore, Chen discloses the video decoder comprising: a memory configured to store instructions; and a processor coupled to the memory and configured to execute the instructions [Paragraphs [0066] and [0357]-[0358], video decoder containing one or more microprocessors, wherein the techniques are implemented partially in software, and a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors].

Regarding claims 23-25, these computer program product claims correspond to the same method as claimed in claims 1-3, and are therefore also rejected for the same reasons of obviousness as listed above. Furthermore, Chen discloses a computer program product comprising instructions that are stored on a non-transitory computer-readable medium [Paragraphs [0066] and [0357]-[0358], as discussed above].

Regarding claims 26-28, these non-transitory computer-readable medium claims correspond to the same method as claimed in claims 1-3, and are therefore also rejected for the same reasons of obviousness as listed above. Furthermore, Chen discloses a non-transitory computer-readable medium storing instructions [Paragraphs [0066] and [0357]-[0358], as discussed above].
Regarding claims 29-31, these claims are drawn to a method implemented by a video encoder having limitations similar to the decoding method claimed in claims 1-3, treated in the above rejection. Therefore, method claims 29-31 correspond to method claims 1-3 and are rejected for the same reasons of obviousness as used above. Furthermore, Chen discloses a method implemented by a video encoder [Paragraphs [0064]-[0066], video encoder 20 running the encoding process].

Regarding claims 32-34, these claims are drawn to the video encoder using/performing the same method as claimed in claims 29-31. Therefore, claims 32-34 correspond to method claims 29-31, respectively, and are rejected for the same reasons of obviousness as used above. Furthermore, Chen discloses the video encoder comprising: a memory configured to store instructions; and a processor coupled to the memory and configured to execute the instructions [Paragraphs [0064]-[0066] and [0357]-[0358], video encoder containing one or more microprocessors, wherein the techniques are implemented partially in software, and a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors].

Regarding claim 36, Chen, Coban, and Lai disclose the method of claim 1, analyzed as previously discussed with respect to that claim. Furthermore, Chen discloses wherein a transform block is a rectangular MxN block of samples resulting from a transform in a decoding process [Paragraphs [0064]-[0071] and [0075], CU sizes include 2N×N, as MxN, with TUs typically sized based on the size of the PUs within a given CU defined for a partitioned CTU or LCU, and thus also 2N×N, as MxN].

Claim 35 is rejected under 35 U.S.C. 103 as being unpatentable over Chen, Coban, and Lai, and further in view of Zhu et al. (WO 2019/188944 A1) (hereinafter Zhu).

Regarding claim 35, Chen, Coban, and Lai disclose the method of claim 1, analyzed as previously discussed with respect to that claim. Furthermore, Chen discloses applying the deblocking filter process to the first subblock edges and the first transform block edges [Paragraphs [0044]-[0045], [0113], and [0129], deblocking filter processes applied to sub-PU boundaries, as subblock edges, and transform unit (TU) boundaries, as transform block edges]. However, Chen, Coban, and Lai do not explicitly disclose applying the deblocking filter process to the first subblock edges and the first transform block edges not at the first boundary when the first loop_filter_across_subpic_enabled_flag is equal to 0.

Zhu teaches applying the deblocking filter process to the first subblock edges and the first transform block edges not at the first boundary when the first loop_filter_across_subpic_enabled_flag is equal to 0 [Paragraphs [0029]-[0030] and [0039]-[0041]: in ITU-T H.265, the deblocking filter may be applied differently to CTU boundaries that coincide with slice and tile boundaries compared with CTU boundaries that do not, wherein a flag, loop_filter_across_tiles_enabled_flag, reading as loop_filter_across_subpic_enabled_flag, present in a PPS enables/disables the deblocking filter across CTU boundaries that coincide with tile boundaries, while enabling the deblocking filter for CTU boundaries, reading as subblock edges and transform block edges, that do not coincide with slice and tile boundaries, as subpicture boundaries].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Chen to integrate and implement the loop-filtering techniques across CTU boundaries of Zhu as above, to prevent the use of support samples when a deblocking filter exceeds a boundary and instead use padding operations to create support samples, avoiding blurring or artifacts (Zhu, Paragraphs [0159] and [0164]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL CHANG, whose telephone number is (571) 272-5707. The examiner can normally be reached M-Sa, 12 PM - 10 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL CHANG/
Primary Examiner, Art Unit 2487
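The flag semantics the rejection turns on can be illustrated with a short sketch. This is an illustrative model only, not code from the application or any cited reference: the function names, the `flags` list standing in for the loop_filter_across_subpic_enabled_flag[i] syntax elements, and the bitstream representation are all hypothetical.

```python
def parse_subpic_loop_filter_flags(bitstream_bits, num_subpics):
    """Model of reading one loop_filter_across_subpic_enabled_flag[i]
    per subpicture (i = 0 .. num_subpics - 1), as in the for loop
    described in Lai's paragraphs [0088]-[0089]."""
    return [bitstream_bits[i] for i in range(num_subpics)]

def deblock_edges_at_shared_boundary(flags, first_idx, second_idx):
    """For two subpictures sharing a boundary, each subpicture's own
    flag controls whether the deblocking filter is applied to its
    subblock and transform block edges at that boundary:
    flag == 1 -> filtering may cross; flag == 0 -> it must not."""
    return {
        "filter_first_subpic_edges": flags[first_idx] == 1,
        "filter_second_subpic_edges": flags[second_idx] == 1,
    }

# The claim 1 scenario: the first subpicture's flag is 0, the second's is 1.
flags = parse_subpic_loop_filter_flags([0, 1], num_subpics=2)
decision = deblock_edges_at_shared_boundary(flags, first_idx=0, second_idx=1)
# decision -> {"filter_first_subpic_edges": False,
#              "filter_second_subpic_edges": True}
```

Under these assumed semantics, the same shared boundary is left unfiltered on one side and deblocked on the other, which is exactly the asymmetry recited in claim 1.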

Prosecution Timeline

Mar 18, 2022 — Application Filed
Feb 10, 2024 — Non-Final Rejection (§103)
May 13, 2024 — Response Filed
Aug 23, 2024 — Final Rejection (§103)
Oct 28, 2024 — Response after Non-Final Action
Jan 06, 2025 — Request for Continued Examination
Jan 22, 2025 — Response after Non-Final Action
Feb 21, 2025 — Non-Final Rejection (§103)
May 12, 2025 — Response Filed
May 30, 2025 — Final Rejection (§103)
Jul 30, 2025 — Applicant Interview (Telephonic)
Aug 01, 2025 — Response after Non-Final Action
Sep 03, 2025 — Request for Continued Examination
Sep 12, 2025 — Response after Non-Final Action
Sep 20, 2025 — Non-Final Rejection (§103)
Dec 04, 2025 — Response Filed
Mar 19, 2026 — Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593069 — LOW MEMORY DESIGN FOR MULTIPLE REFERENCE LINE SELECTION SCHEME — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587672 — DECOUPLED MODE INFERENCE AND PREDICTION — Granted Mar 24, 2026 (2y 5m to grant)
Patent 12574541 — IMAGE PROCESSING METHOD AND ASSOCIATED IMAGE PROCESSING CIRCUIT — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12570145 — AUTOSTEREOSCOPIC CAMPFIRE DISPLAY — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12574513 — METHOD AND DEVICE FOR ENCODING/DECODING VIDEO SIGNAL BY USING OPTIMIZED CONVERSION BASED ON MULTIPLE GRAPH-BASED MODEL — Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 6-7
Grant Probability: 64%
With Interview: 76% (+13.0%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 367 resolved cases by this examiner; grant probability derived from career allow rate.
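The tool's exact model isn't shown, but a back-of-envelope reproduction — assuming grant probability is simply the career allow rate from the Examiner Intelligence panel, plus the stated +13.0-point interview lift — lands close to the displayed figures (the raw ratio is ~63.5%, shown rounded as 64%):

```python
granted, resolved = 233, 367          # from the Examiner Intelligence panel
allow_rate = granted / resolved       # ~0.635, displayed as 64%
interview_lift = 0.130                # stated +13.0-point lift
with_interview = allow_rate + interview_lift  # ~0.765, displayed as 76%

print(f"allow rate: {allow_rate:.1%}, with interview: {with_interview:.1%}")
# prints: allow rate: 63.5%, with interview: 76.5%
```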
