Prosecution Insights
Last updated: April 18, 2026
Application No. 18/289,975

METHOD, DEVICE, AND MEDIUM FOR VIDEO PROCESSING

Non-Final OA (§103)
Filed: Nov 08, 2023
Examiner: BRUMFIELD, SHANIKA M
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Bytedance Inc.
OA Round: 3 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 68% (263 granted / 386 resolved), above average (+10.1% vs Tech Center average)
Interview Lift: +14.0% (moderate), measured over resolved cases with interview
Typical Timeline: 2y 9m average prosecution
Career History: 411 total applications across all art units; 25 currently pending
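The headline figures above are internally consistent, which is worth a quick check: 263 of 386 resolved cases gives the 68% career allow rate, and the 82% with-interview figure sits 14 percentage points above it. A minimal sketch, using the counts shown on this page and assuming the "lift" is defined as the percentage-point difference in allow rate:

```python
# Sketch: reproduce the headline examiner statistics from the raw counts
# shown above. The counts come from the page; defining the interview lift
# as a percentage-point difference is an assumption.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

career = allow_rate(263, 386)          # about 68.1%, displayed as 68%
with_interview = 82.0                  # displayed with-interview rate
lift = with_interview - round(career)  # percentage-point lift

print(f"career allow rate: {career:.1f}%")
print(f"interview lift: +{lift:.1f} pts")
```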

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
TC averages are estimates. Based on career data from 386 resolved cases.
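Each row's delta against the Tech Center average implies the same baseline, which is a useful sanity check on the table. A minimal sketch, treating the deltas as percentage points (an assumption):

```python
# Sketch: back out the implied Tech Center average from each statute's
# rate and its stated delta. Figures come from the table above; treating
# the deltas as percentage points is an assumption.

rates = {"101": (4.5, -35.5), "103": (54.2, 14.2),
         "102": (21.6, -18.4), "112": (10.1, -29.9)}

for statute, (rate, delta) in rates.items():
    implied_tc_avg = round(rate - delta, 1)
    print(f"\u00a7{statute}: implied TC average {implied_tc_avg}%")

# Every row backs out to the same ~40.0% baseline, so the four deltas are
# internally consistent with a single Tech Center average estimate.
```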

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 14 January 2026 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 47-67 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. On pages 8-9, applicant argues that Chuang does not teach "determining…whether a template-based process is applied to the current video block based on coding information of the current coding block" because the previously cited portions of Chuang (pars. 74-75) teach determining whether a template-based process is applied to a current block based on coding information of a neighboring block, not coding information of the current block as currently amended. While applicant's arguments are understood, the examiner respectfully disagrees. The examiner relies on Chuang in maintaining the rejection. Chuang teaches determining whether a template-based process is applied to the current video block based on coding information of the current video block, as claimed, at least at pars. 67-69.
There, Chuang teaches that the system determines whether to apply DIMD to a current block based on the block size, block width, and/or block height of the current block, wherein block size, block width, and block height are the equivalent of coding information, and wherein DIMD is the equivalent of the template-based process (see, e.g., par. 13: describing that DIMD is a template-based process). The rejection, therefore, is maintained.

Examiner Remarks

The claims are interpreted in the alternative only.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 47-66 are rejected under 35 U.S.C. 103 as being unpatentable over Esenlik et al. (US 2020/0137413) (hereinafter Esenlik), as cited by applicant, in view of Chuang et al. (US 2017/0374369) (hereinafter Chuang), as cited by applicant.

Regarding claims 47, 65, and 66, Esenlik teaches a method for video processing, an apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor cause the processor to perform the method, and a non-transitory computer-readable storage medium storing instructions that cause a processor to perform the method, the method comprising: determining, during a conversion between a current video block of a video and a bitstream of the video (e.g., Figs. 1 and 2, and pars. 66 and 75-76: depicting and describing that the system converts between a video block and a bitstream by encoding or decoding the video block), one of the following: whether coded information of a reference video block is used to code a first piece of motion information of the current video block based on a coding pattern applied to the reference video block (e.g., par. 97: describing that the system determines whether motion information of a reference block is used to derive motion information of the current block based on whether the reference block used template matching to derive motion information, wherein the motion information is the equivalent of the coded information, and the template matching is the equivalent of the coding pattern); and performing the conversion based on the determining (e.g., Figs. 1 and 2, and pars. 66 and 75-76: depicting and describing that the system converts between a video block and a bitstream by encoding or decoding the video block).
Esenlik does not explicitly teach: whether a template-based process is applied to the current video block based on coding information of the current video block.

Chuang, however, teaches a method, apparatus, and non-transitory computer-readable medium storing instructions for video processing, including: whether a template-based process is applied to the current video block based on coding information of the current video block (e.g., pars. 67-69: describing that the system determines whether decoder side intra mode derivation is applied to a current block based on the block size, block width, and/or block height of the current block, wherein the block size, block width, and block height of the current block are the equivalent of the coding information of the current video block, and wherein the decoder side intra mode derivation is the equivalent of the template-based process [see, e.g., par. 13: describing that decoder side intra mode derivation is a template-based process]).

It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Esenlik by adding the teachings of Chuang in order to determine whether a template-based process is applied to the current video block based on coding information of the current video block. One of ordinary skill in the art would have been motivated to make such a modification because the modification resolves parsing issues created when intra mode dependent tools require a coding mode and a block uses decoder side intra mode derivation (Chuang, e.g., par. 13: describing a desire to resolve parsing issues created by the intra mode dependent tools and the decoder side intra mode derivation).

Turning to claim 48, Esenlik and Chuang teach all of the limitations of claim 47, as discussed above.
Esenlik further teaches: wherein the coding information comprises one of: a second piece of motion information of the reference video block, or coding information derived or refined by a template-based mode, or wherein the current video block is in the same picture, slice, tile, or subpicture as the reference video block, or wherein the current video block is in a different picture, slice, tile, or subpicture from the reference video block (e.g., Fig. 7 and pars. 99-109: depicting and describing that the motion information of the adjacent/neighbor block is motion information after application of the template matching process, wherein motion information of the adjacent/neighbor block after application of the template matching process is the equivalent of the second motion information of the reference block and coding information derived by the template-based mode, the current block being in the same picture, slice, tile, or subpicture as the reference video block [see, e.g., par. 11: describing that the neighboring block may be a spatial neighbor or a temporal neighbor, wherein it is known to those of ordinary skill in the art that a spatial neighbor of a current block is necessarily a reference block in the same picture, slice, tile, or subpicture, and that a temporal neighbor of a current block is necessarily a reference block in a different picture, slice, tile, or subpicture]).

Regarding claim 49, Esenlik and Chuang teach all of the limitations of claims 47 and 48, as discussed above. Esenlik further teaches: wherein determining whether the coding information of the reference video block is used comprises: in response to the reference video block being template-based-coded, determining that the second piece of the motion information of the reference video block is not used for coding the first piece of the motion information of the current video block (e.g., Fig. 7 and pars. 97 and 99-109: depicting and describing that when it is determined that the adjacent/neighbor block uses template matching, the system does not use the derived motion information of the adjacent/neighbor block to determine the motion information of the current block, wherein the derived motion information of the adjacent block from the template matching process is the equivalent of the second piece of motion information of the reference video block).

Turning to claim 50, Esenlik and Chuang teach all of the limitations of claims 47-49, as discussed above. Esenlik further teaches: wherein the reference video block is marked as unavailable or intra-coded during a coding process of the current video block or a decoding process of the current video block, or wherein the motion information of the reference video block is set to one or more default values during a coding process of the current video block or a decoding process of the current video block, or wherein the method further comprises: coding the first piece of the motion information of the current video block by using a third piece of the motion information of the reference video block, wherein the third piece is obtained before the template-based coding process (e.g., Fig. 7 and pars. 99-109: depicting and describing that the system codes or decodes motion information of the current block by using unrefined motion information of the reference video block, wherein the unrefined motion information of the reference video block is the equivalent of the third piece of the motion information of the reference video block, the third piece being obtained before the template-based coding process; e.g., Fig. 12 and pars. 146-149: describing that when the neighbor block's motion information uses template matching, the system skips the use of motion information of that block in predicting the motion of the current block, wherein skipping the use of motion information of neighboring blocks using template matching is the equivalent of the reference video block being marked as unavailable or intra-coded during a coding process of the current video block or a decoding process of the current video block).

Regarding claim 51, Esenlik and Chuang teach all of the limitations of claims 47 and 48, as discussed above. Esenlik further teaches: wherein whether the second piece of the motion information of the reference video block is used to code the first piece of the motion information of the current video block in a deblocking filtering is determined based on the coding pattern applied to the reference video block (e.g., Fig. 7 and pars. 99-109: depicting and describing that the system determines whether motion information of an adjacent block is used to reconstruct the current block based on whether template matching was used on the reference block, the reconstruction of the current block including deblock filtering [see, e.g., Fig. 7 and par. 87: depicting and describing that reconstruction of the current block includes filtering, the filtering including deblock filtering], wherein the motion information of the adjacent block is the equivalent of the second piece of motion information of the reference video block).

Turning to claim 52, Esenlik and Chuang teach all of the limitations of claims 47, 48, and 51, as discussed above. Esenlik further teaches: wherein the second piece of the motion information of the reference video block is not used in the deblocking filtering if the reference video block is template-based-coded (e.g., Fig. 7 and pars. 99-109: depicting and describing that motion information of an adjacent block is used to reconstruct the current block when template matching was used on the reference block, the reconstruction of the current block including deblock filtering [see, e.g., Fig. 7 and par. 87: depicting and describing that reconstruction of the current block includes filtering, the filtering including deblock filtering], wherein the motion information of the adjacent block is the equivalent of the second piece of motion information of the reference video block).

Regarding claim 53, Esenlik and Chuang teach all of the limitations of claims 47, 48, 51, and 52, as discussed above. Esenlik further teaches: wherein the reference video block is marked as unavailable or intra-coded in the deblocking filtering, or wherein the motion information of the reference video block is set to one or more default values in the deblocking filtering, or wherein a third piece of the motion information of the reference video block is used in the deblocking filtering, and wherein the third piece is obtained before the template-based coding process (e.g., Fig. 7 and pars. 99-109: depicting and describing that unrefined motion information of an adjacent block is used to reconstruct the current block when template matching was used on the reference block, the reconstruction of the current block including deblock filtering [see, e.g., Fig. 7 and par. 87: depicting and describing that reconstruction of the current block includes filtering, the filtering including deblock filtering], wherein the unrefined motion information of the adjacent block is the equivalent of the third piece of motion information of the reference video block, the third piece being obtained before the template-based coding process).

Turning to claim 54, Esenlik and Chuang teach all of the limitations of claims 47, 48, and 51-53, as discussed above.
Esenlik further teaches: wherein the third piece of the motion information, instead of the second piece of the motion information, is stored for the reference video block (e.g., par. 107: describing that the non-refined motion vector of the adjacent block is stored for the adjacent block, wherein the non-refined motion vector is the equivalent of the third piece of motion information).

Regarding claim 55, Esenlik and Chuang teach all of the limitations of claims 47 and 48, as discussed above. Esenlik further teaches: wherein the second piece of motion information comprises at least one of: a motion vector, a reference index, a reference list, weighting values, or parameters of a and b in local illumination compensation (LIC), or wherein the coding information derived or refined by a template-based mode comprises an intra prediction mode (e.g., Fig. 7 and pars. 97 and 99-109: depicting and describing that the motion information is a motion vector).

Turning to claim 56, Esenlik and Chuang teach all of the limitations of claims 47, 48, and 55, as discussed above. Esenlik does not explicitly teach: wherein for the reference video block coded with the decoder-side intra mode derivation mode (DIMD), a derived intra prediction mode is disallowed to be used during at least one of the following: a coding process of the current video block in the current slice, tile, subpicture, or picture, a decoding process of the current video block in the current slice, tile, subpicture, or picture, or a deblocking filter process.

Chuang, however, teaches a method for video processing: wherein for the reference video block coded with the decoder-side intra mode derivation mode (DIMD), a derived intra prediction mode is disallowed to be used during at least one of the following: a coding process of the current video block in the current slice, tile, subpicture, or picture, a decoding process of the current video block in the current slice, tile, subpicture, or picture, or a deblocking filter process (e.g., pars. 73-75: describing that when a neighboring block is coded with DIMD, DIMD is disabled for coding or decoding the current intra mode block, wherein it is known to those of ordinary skill in the art that an intra coded block is necessarily in the current slice, tile, subpicture, or picture).

It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Esenlik by adding the teachings of Chuang in order that, for the reference video block coded with the decoder-side intra mode derivation mode (DIMD), a derived intra prediction mode is disallowed from being used during at least one of the following: a coding process of the current video block in the current slice, tile, subpicture, or picture, a decoding process of the current video block in the current slice, tile, subpicture, or picture, or a deblocking filter process. One of ordinary skill in the art would have been motivated to make such a modification because the modification resolves parsing issues created when intra mode dependent tools require a coding mode and a block uses decoder side intra mode derivation (Chuang, e.g., par. 13: describing a desire to resolve parsing issues created by the intra mode dependent tools and the decoder side intra mode derivation).

Regarding claim 57, Esenlik and Chuang teach all of the limitations of claims 47, 48, 55, and 56, as discussed above. Esenlik does not explicitly teach: wherein the intra prediction mode is derived according to a template process or a DIMD process, or wherein one or more derived intra prediction modes are not included in a most probable modes (MPM) list. Chuang, however, teaches a method for video processing: wherein the intra prediction mode is derived according to a template process or a DIMD process, or wherein one or more derived intra prediction modes are not included in a most probable modes (MPM) list (e.g., Fig. 7 and par. 83: depicting and describing that the intra prediction mode of the current block is derived according to a DIMD process, the DIMD process being a template-based process [e.g., par. 43: describing that the DIMD is a template based intra mode]; e.g., pars. 52-55: describing that the derived intra prediction mode is not included in the MPM list). It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Esenlik by adding the teachings of Chuang in order for the intra prediction mode to be derived according to a template process or a DIMD process, or for one or more derived intra prediction modes not to be included in a most probable modes (MPM) list. One of ordinary skill in the art would have been motivated to make such a modification because the modification resolves parsing issues created when intra mode dependent tools require a coding mode and a block uses decoder side intra mode derivation (Chuang, e.g., par. 13: describing a desire to resolve parsing issues created by the intra mode dependent tools and the decoder side intra mode derivation).

Turning to claim 58, Esenlik and Chuang teach all of the limitations of claims 47, 48, and 55-57, as discussed above. Esenlik does not explicitly teach: wherein the one or more derived intra prediction modes are derived by using the neighboring reconstructed samples of the current video block, or wherein the MPM list comprises at least one of a primary MPM list or a secondary MPM list. Chuang, however, teaches a method for video processing: wherein the one or more derived intra prediction modes are derived by using the neighboring reconstructed samples of the current video block, or wherein the MPM list comprises at least one of a primary MPM list or a secondary MPM list (e.g., pars. 47-51: describing that the system derives an intra prediction mode using an MPM list, wherein the MPM list is the primary MPM list, and wherein it is known to those of ordinary skill in the art that an MPM list derives an intra prediction mode using neighboring blocks of the current block). It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Esenlik by adding the teachings of Chuang in order for the intra prediction mode to be derived according to a template process or a DIMD process, or for one or more derived intra prediction modes not to be included in a most probable modes (MPM) list. One of ordinary skill in the art would have been motivated to make such a modification because the modification resolves parsing issues created when intra mode dependent tools require a coding mode and a block uses decoder side intra mode derivation (Chuang, e.g., par. 13: describing a desire to resolve parsing issues created by the intra mode dependent tools and the decoder side intra mode derivation).

Regarding claim 59, Esenlik and Chuang teach all of the limitations of claims 47, 48, and 55-56, as discussed above. Esenlik does not explicitly teach: wherein a partial of derived intra prediction modes are included in a primary MPM list or a secondary MPM list. Chuang, however, teaches a method for video processing: wherein a partial of derived intra prediction modes are included in a primary MPM list or a secondary MPM list (e.g., par. 48: describing that a derived intra prediction mode is included in the MPM list). It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Esenlik by adding the teachings of Chuang in order for a partial of derived intra prediction modes to be included in a primary MPM list or a secondary MPM list. One of ordinary skill in the art would have been motivated to make such a modification because the modification resolves parsing issues created when intra mode dependent tools require a coding mode and a block uses decoder side intra mode derivation (Chuang, e.g., par. 13: describing a desire to resolve parsing issues created by the intra mode dependent tools and the decoder side intra mode derivation).

Turning to claim 60, Esenlik and Chuang teach all of the limitations of claims 47, 48, and 55, as discussed above. Esenlik further teaches: wherein for the reference video block coded with intra block copy (IBC) with the template-based mode, a derived reference video block or a refined reference video block is disallowed to be used during at least one of the following: a coding process of the current video block in the current slice, tile, subpicture, or picture, a decoding process of the current video block in the current slice, tile, subpicture, or picture, or a deblocking filter process (e.g., Fig. 13 and pars. 150-156: depicting and describing that during the coding or decoding of a current intra mode block, reconstructed samples of a neighboring block that has a motion vector determined by template matching are disallowed in the prediction of the current block, wherein a neighboring block with a motion vector determined using template matching reasonably suggests an intra block copy coded block with a block vector determined by template matching).

Regarding claim 61, Esenlik and Chuang teach all of the limitations of claims 47, 48, 55, and 60, as discussed above. Esenlik further teaches: wherein the reference video block is derived or refined according to a template process (e.g., Fig. 13 and pars. 150-156: depicting and describing that the neighboring block may be derived using template matching).

Turning to claim 62, Esenlik and Chuang teach all of the limitations of claim 47, as discussed above.
Esenlik further teaches: wherein the coding information of the current video block comprises at least one of the following: a width of the current video block, a height of the current video block, a size of the current video block, a coding tree depth of the current video block, a coding mode of the current video block, a prediction direction of the current video block, reference information of the current video block, or a texture characteristic of the current video block, or wherein whether the template-based process is applied to the current video block is determined further based on coding information of at least one neighboring video block of the current block and the coding information of the at least one neighboring video block comprises at least one of the following: respective widths of the at least one neighboring video block, respective heights of the at least one neighboring video block, respective sizes of the at least one neighboring video block, respective coding tree depths of the at least one neighboring video block, or respective coding modes of the at least one neighboring video block, or wherein if a neighboring block above the current video block satisfies one or more predefined conditions, above neighboring samples covered by the neighboring block above the current video block are not included in a template of the template-based process, or wherein if a neighboring block located to the left of the current video block satisfies one or more predefined conditions, left neighboring samples covered by the neighboring block to the left of the current video block are not included in a template of the template-based process, or wherein prediction samples of a neighboring video block are used to obtain a template for a template-based-coded block (e.g., Fig. 14 and pars. 130-134: depicting and describing that when a left neighboring block or an above neighboring block is within a specified region of the current block and uses refined or derived motion vectors, the samples of the neighboring block are not used to derive motion vectors of the current block, wherein the derivation method is template matching [see, e.g., par. 134: describing that the derivation method used for the current block is template matching], and wherein a left or above neighboring block being within a specified region of the current block and using refined or derived motion vectors is the equivalent of the left neighboring block and the above neighboring block satisfying one or more predefined conditions).

Regarding claim 63, Esenlik and Chuang teach all of the limitations of claims 47 and 62, as discussed above. Esenlik further teaches: wherein the one or more predefined conditions comprise at least one of the following: the at least one neighboring video block is inter-coded; the at least one neighboring video block is intra-coded; the at least one neighboring video block is intra block copy (IBC)-coded; residues of the at least one neighboring video block are equal to zero; residues of the at least one neighboring video block are not equal to zero; the at least one neighboring video block is template-based-coded; or the at least one neighboring video block is not template-based-coded, or wherein prediction samples of the neighboring video block are used to obtain the template for at least one of the following: an inter coded block, an intra coded block, an intra block copy coded block, or a palette coded block (e.g., par. 97: describing that a condition for using samples of a neighboring block is whether the neighboring block was derived using template matching).

Turning to claim 64, Esenlik and Chuang teach all of the limitations of claim 47, as discussed above.
Esenlik further teaches: wherein the conversion comprises decoding the current video block from the bitstream of the video, or wherein the conversion comprises encoding the current video block into the bitstream of the video (e.g., par. 167: describing that the conversion between a current block and a bitstream includes encoding the current block or decoding the current block).

Claim 67 is rejected under 35 U.S.C. 103 as being unpatentable over Esenlik et al. (US 2020/0137413) (hereinafter Esenlik), as cited by applicant, in view of Chuang et al. (US 2017/0374369) (hereinafter Chuang), as cited by applicant, as applied to claim 47 above, and further in view of Chuang et al. (US 2018/0098070) (hereinafter Chuang 2).

Regarding claim 67, Esenlik and Chuang teach all of the limitations of claim 47, as discussed above. Esenlik does not explicitly teach: storing the bitstream in a non-transitory computer-readable recording medium. Chuang 2, however, teaches a method for video processing: storing the bitstream in a non-transitory computer-readable recording medium (e.g., Fig. 1 and par. 84: depicting and describing that the system stores encoded video bitstream data in a computer-readable storage medium). It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Esenlik by adding the teachings of Chuang 2 in order to store the bitstream in a non-transitory computer-readable recording medium. One of ordinary skill in the art would have been motivated to make such a combination because the combination improves coding efficiency.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 2017/0353730 – disclosing that the system determines whether template-based intra prediction is applied to a block based on the size of the block.
US 2022/0224922 – disclosing that whether to enable a template-matching process for a current block is based on a size, shape, width, or height of the current block.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANIKA M BRUMFIELD, whose telephone number is (571) 270-3700. The examiner can normally be reached M-F, 8:30 AM - 5 PM AWS.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHANIKA M BRUMFIELD/
Examiner, Art Unit 2487

/Dave Czekaj/
Supervisory Patent Examiner, Art Unit 2487

Prosecution Timeline

Nov 08, 2023
Application Filed
Nov 08, 2023
Response after Non-Final Action
Apr 16, 2025
Non-Final Rejection — §103
Jul 21, 2025
Response Filed
Oct 08, 2025
Final Rejection — §103
Dec 15, 2025
Response after Non-Final Action
Jan 14, 2026
Request for Continued Examination
Jan 25, 2026
Response after Non-Final Action
Mar 30, 2026
Non-Final Rejection — §103 (current)
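The timeline supports a quick pendency calculation against the examiner's typical 2y 9m prosecution. A minimal sketch, using the filing date and the current Office action date from the entries above:

```python
# Sketch: pendency to date, from the timeline entries above
# (filing on Nov 08, 2023 through the current Non-Final of Mar 30, 2026).
from datetime import date

filed = date(2023, 11, 8)
current_oa = date(2026, 3, 30)

elapsed = (current_oa - filed).days  # 873 days
print(f"{elapsed} days (~{elapsed / 365.25:.1f} years)")  # ~2.4 years
```

At roughly 2.4 years in, the application is approaching the examiner's 2y 9m median time to grant, which is consistent with the projection of one or two more rounds below.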

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598369
SURFACE TOPOGRAPHY MEASUREMENT SYSTEMS
2y 5m to grant Granted Apr 07, 2026
Patent 12591125
Microscopy System and Method for Checking a Rotational Position of a Microscope Camera
2y 5m to grant Granted Mar 31, 2026
Patent 12587642
ENCODING METHOD, DECODING METHOD, CODE STREAM, ENCODER, DECODER AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12581070
EDGE OFFSET FOR CROSS COMPONENT SAMPLE ADAPTIVE OFFSET (CCSAO) FILTER
2y 5m to grant Granted Mar 17, 2026
Patent 12581090
QUANTIZATION PARAMETER FOR CHROMA DEBLOCKING FILTERING
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 82% (+14.0%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 386 resolved cases by this examiner. Grant probability derived from career allow rate.
