Prosecution Insights
Last updated: April 19, 2026
Application No. 18/287,603

METHOD, DEVICE, AND MEDIUM FOR VIDEO PROCESSING

Non-Final OA §103
Filed: Oct 19, 2023
Examiner: BRUMFIELD, SHANIKA M
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Bytedance Inc.
OA Round: 3 (Non-Final)
Grant Probability: 68% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 68%, above average (263 granted / 386 resolved; +10.1% vs TC avg)
Interview Lift: +14.0%, a moderate lift, measured over resolved cases with interview
Avg Prosecution: 2y 9m typical timeline, with 25 applications currently pending
Total Applications: 411 across all art units (career history)
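
As a quick sanity check on these headline figures, the sketch below recomputes the allow rate from the granted/resolved counts and applies the reported interview lift as a simple additive adjustment in percentage points; the additive treatment is an assumption about how the dashboard combines the two numbers.

```python
# Recompute the headline Examiner Intelligence figures from the raw counts.
# Assumption: the allow rate is granted / resolved, and the interview-adjusted
# figure simply adds the +14.0 percentage-point lift to the base rate.
granted, resolved = 263, 386
allow_rate = granted / resolved              # ~0.681, reported as 68%

interview_lift = 0.14                        # +14.0 percentage points
with_interview = allow_rate + interview_lift # ~0.82, reported as 82%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Estimated grant probability with interview: {with_interview:.1%}")
```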

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 386 resolved cases.
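
Each delta can be read as the examiner's per-statute rate minus the Tech Center baseline, so the implied baseline is the rate minus the delta. The exact metric behind these percentages is not specified on this page, so the sketch below is only an illustrative reading of the numbers as shown.

```python
# Illustrative reading of the statute-specific figures: delta = examiner - TC avg,
# so the implied TC baseline is examiner_rate - delta. The underlying metric
# (e.g., how often each statute appears in this examiner's rejections) is assumed.
per_statute = {
    "§101": (4.5, -35.5),
    "§103": (54.2, +14.2),
    "§102": (21.6, -18.4),
    "§112": (10.1, -29.9),
}
for statute, (examiner_rate, delta) in per_statute.items():
    tc_avg = examiner_rate - delta
    print(f"{statute}: examiner {examiner_rate:.1f}%, implied TC average {tc_avg:.1f}%")
```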

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 07 January 2026 has been entered. Response to Arguments Applicant’s arguments with respect to claim(s) 40 - 50 and 52 – 59 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. On pages 8 – 10, applicant argues that Chen does not teach determining the use of second coding data as claimed because Chen only teaches not using MVs that are refined for the following coding blocks. While applicant’s arguments are understood, examiner respectfully disagrees. Examiner relies on Chen in maintaining the rejection. Under MPEP 2123, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. Merck & Co. v. Biocraft Labs., Inc. 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir. 1989), cert. denied, 493 U.S. 975 (1989). At present, the teachings of Chen reasonably suggest to a person of ordinary skill in the art that the system determines the use of refined motion data as claimed. Chen first teaches that the system obtains refined motion data for a first block, the refined motion data based on unrefined motion data of the first block and a refinement process. See, Chen, e.g. pars. 109 – 114: describing that the system obtains refined motion data of a current block based on unrefined motion data of the current block using a template based refinement mode, wherein the refined motion data is the equivalent of the second coding data, the unrefined motion data is the equivalent of the first coding data, and the current coding block is the equivalent of the first block. Chen next teaches that the system determines whether to use refined motion data for the coding of a block subsequent to the first block based on a target coding mode used to refine the unrefined motion data of the first block. See, Chen, e.g. pars. 109 – 114: describing that the system determines whether to use the refined motion data of the first block for blocks subsequent to the first block, wherein the refined motion data is the equivalent of the second data. Chen then teaches that the system does not use the refined coding data for subsequent blocks only when the coding mode used to refine the unrefined motion data is a template based refinement mode. See, Chen, e.g. pars. 109 – 114: describing that when the unrefined motion data is refined using a template based refinement mode, the system does not use the refined data for subsequent blocks. In other words, Chen teaches that refined motion data is used for subsequent blocks when the refined motion data was obtained using a refinement process other than a template based refinement mode. Chen defines template based refinement modes as Decoder Side Motion Vector Refinement (DMVR). See, Chen, e.g. par. 
109: describing that decoder side motion vector refinement modes include template based refinement modes. It is known to those of ordinary skill in that art that motion data may be refined by a number of different refinement coding modes other than template based refinement modes. See, e.g. Auyeung et al. (US 2020/0404306) (hereinafter Auyeung), par. 124 – 125: describing that motion data for a block is refined using merge with motion vector difference mode (MMVD). See, also, e.g. Huang et al. (US 2020/0389656) (hereinafter Huang), pars. 24 – 25 and 148 - 152: describing that motion data for a block is refined using bi-directional optical flow (BDOF) mode or MMVD mode, the BDOF mode and MMVD mode being refinement modes distinct from DMVR modes. The teachings of Chen, therefore, reasonably suggest to a person of ordinary skill in the art that the system determines the use of refined motion data as claimed. Examiner Remarks Examiner interprets the claims in the alternative only. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 40 – 42, 46 -49, 52, and 54 - 62 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2019/0320197) (hereinafter Chen), supported by Huang et al. (US 2020/0389656) (hereinafter Huang). Regarding claims 40, 58, and 59, Chen teaches a method for video processing, an apparatus for video processing comprising a processor and a non-transitory memory coupled to the processor and having instruction stored thereon, wherein the instructions upon execution by the processor cause the processor to perform the method (e.g. pars. 152 – 154: describing that system includes a processor and computer-readable storage media coupled to the processor), and a non-transitory computer-readable storage medium storing instructions that cause a processor to perform the method (e.g. par. 
152-153: describing that the system is a non-transitory computer readable storage media storing instructions), the method comprising: obtaining, during a conversion between a first video unit in a target picture of a video and a bitstream of the video, second coding data of the first video unit based on first coding data of the first video unit and a refinement process (e.g. par. 109 – 114: describing that during a coding of a block, the system obtains refined motion data, the refined motion data based on unrefined motion data of the block being coded by a template based refinement mode, wherein the refined motion data is the equivalent of the second coding data, the unrefined motion data is the equivalent of the first coding data); determining the use of the second coding data for processing a second video unit subsequent to the first video unit based on a target coding mode with which the first coding data is coded (e.g. par. 109 – 114: describing that for blocks subsequent to the first block, the system determines whether to use refined motion information or pre-defined motion information based on the type of refinement process applied to the block according to a coding mode of the block, the system using unrefined motion data only when the motion data is refined using a template based mode, wherein using unrefined motion data only when the motion data is refined using a template based refinement mode reasonably suggests that refined motion data is used for subsequent blocks for non-template based refinement modes, wherein it is known to those of ordinary skill in the art that motion data may be refined using refinement modes other than template based refinement modes [see, Huang, e.g. pars. 24 – 25 and 148 - 152: describing that motion data for a block is refined using bi-directional optical flow (BDOF) mode or MMVD mode, the BDOF mode and MMVD mode being refinement modes distinct from DMVR modes, the DMVR modes being template based refinement modes], wherein the pre-defined motion is the equivalent of the first coding data and the refined motion information is the equivalent of the second coding data); and performing the conversion based on the determination (e.g. Fig. 14, and pars. 144 – 149: depicting and describing that the system codes the video data based on the determination of the coding mode of the coding block). Turning to claim 41, Chen teaches all of the limitations of claim 40, as discussed above. Chen further teaches: wherein the second coding data comprises refined motion information of the first video unit, and the refined motion information is used for generating motion compensated prediction samples of the first video unit (e.g. pars. 109 – 114: describing that system generates refined motion information of the block, the refined motion information used for generating motion compensation of the block). Regarding claim 42, Chen teaches all of the limitations of claims 40 and 41, as discussed above. Chen further teaches: wherein the first coding data comprises original motion information of the first video unit without refinement, and the original motion information is used for generating motion compensated prediction samples the first video unit (e.g. pars. 109 – 114: describing that coding data of the block is original/pre-defined motion information, the motion information used for generating prediction samples). Turning to claim 46, Chen teaches all of the limitations of claim 40, as discussed above. 
Chen further teaches: wherein the second coding data comprises refined motion information of the first video unit, and the refined motion information is used for deriving motion information of the second video unit in the target picture (e.g. Fig. 14, and pars. 109 – 114 and 144 - 149: depicting and describing the system generates refined motion data of the block based on the coding mode of the block, the refined motion data used to derive motion information of following blocks [see, e.g. par. 114: describing that motion refined using non-DMVD methods may be used for predicting motion information of following blocks]). Regarding claim 47, Chen teaches all of the limitations of claims 40 and 46, as discussed above. Chen further teaches: wherein the refined motion information comprises a refined motion vector of the first video unit and is stored on a subblock basis, or wherein the refined motion information comprises a refined motion vector of the first video unit and is stored on a coded unit (CU) basis (e.g. pars. 110 – 11: describing that the motion information of the current block is stored for the block, reasonably suggesting that the motion information is stored on a coded unit basis). Turning to claim 48, Chen teaches all of the limitations of claims 40 and 46, as discussed above. Chen further teaches: wherein the refined motion information comprises a refined motion vector of the first video unit and is stored for deriving spatial motion candidate of the second video unit, or wherein the first coding data comprises original motion information of the first video unit without refinement, and the original motion information is stored for deriving spatial motion candidate of the second video unit (e.g. pars. 114, and 144 – 146: describing that the motion information for the block is stored for deriving a spatial motion candidate of a second block when the second block). Regarding claim 49, Chen teaches all of the limitations of claims 40 and 46, as discussed above. Chen further teaches: wherein the refined motion information comprises a refined motion vector of the first video unit and is stored for deriving a temporal motion candidate of the second video unit (e.g. pars. 114, and 144 – 146: describing that the motion information for the block is stored for deriving a temporal motion candidate of a second block when the second block). Turning to claim 52, Chen teaches all of the limitations of claim 40, as discussed above. Chen further teaches: wherein the refinement process is based on a method explicitly indicated in the bitstream (e.g. par. 74 – 75: describing that the refinement method is signaled in the bitstream), and the method is based on delta information of the target video unit, and the delta information comprises one of the following: at least one motion vector difference, at least one intra mode delta value, at least one prediction block or sample delta value, or at least one reconstruction block or sample delta value (e.g. par. 112: describing that the refinement method is based on difference of the current block, the difference information being a motion vector difference). Turning to claim 54, Chen teaches all of the limitations of claim 40, as discussed above. 
Chen further teaches: wherein the refinement process is based on motion information of at least one neighboring video unit, and the at least one neighboring video unit comprises at least one of video units adjacent or non-adjacent to the first video unit, and wherein the refinement process is based on an overlapped block-based motion compensation (OBMC) technique (e.g. pars. 104 – 107 and 109 – 114: describing that the refinement process is based on template matching of motion information from neighboring blocks, the refinement process used for OBMC coding technique). Regarding claim 55, Chen teaches all of the limitations of claim 40, as discussed above. Chen further teaches: wherein the refinement process is based on a bilateral matching technique comprising at least a decoder side motion vector refinement (DMVR) mode, and wherein the refinement process comprises the DMVR mode, and the second coding data comprises a prediction sample difference between a LO prediction block and a L1 prediction block of the video unit (e.g. Fig. 9 and pars. 99 - 102: depicting and describing that motion information of the current block undergoes a bilateral matching process for decoder side motion vector derivation, the motion information determined based on a difference between a prediction between two prediction blocks in two reference pictures). Turning to claim 56, Chen teaches all of the limitations of claim 40, as discussed above. Chen further teaches: wherein the refinement process is based on reconstruction samples of at least one neighboring video unit, and the at least one neighboring video unit comprises at least one of video units adjacent or non-adjacent to the first video unit, and wherein the refinement process is based on a templated matching related technique comprising one of a frame-rate up conversion (FRUC) mode, TM merge, a temporal motion (TM) mode, an adaptive motion vector resolution prediction (AMVP) mode, a TM intra block copy (IBC) mode, or a bi-directional optical flow (BDOF) mode (e.g. par. 76: describing that the refinement process is a template matching or bilateral matching process, the refinement process being FRUC [see, e.g. par. 74-75: describing that the refinement technique Is based on Frame-rate up conversion], bi-directional optical flow [see, e.g. Fig. 7 and pars. 87 – 96: describing that the refinement process may be bi-directional optical flow]). Regarding claim 57, Chen teaches all of the limitations of claim 40, as discussed above. Chen further teaches: wherein the conversion comprises decoding the target picture from the bitstream of the video, or encoding the target picture into the bitstream of the video (e.g. Fig. 14, and pars. 144 – 149: depicting and describing that the system decodes the target picture from the bitstream; Fig. 12 and par. 115: depicting and describing that the system encodes the picture into the bitstream). Turning to claim 60, Chen teaches all of the limitations of claim 40, as discussed above. Chen further teaches: storing the bitstream in a non-transitory computer-readable recording medium (e.g. Fig. 1, element 26, and par. 33: depicting and describing that the system stores encoded data in a storage device, wherein encoded data is the equivalent of the bitstream, and wherein the storage device is the equivalent of non-transitory computer readable recording medium). Regarding claim 61, Chen teaches all of the limitations of claim 40, as discussed above. 
Chen further teaches: wherein determining the use of the second coding data comprises: determining whether to use the second coding data or use the first coding data for processing the second video unit subsequent to the first video unit based on a type of the target coding mode; and in response to the target coding mode being a first coding mode, determining the use of the second coding data, wherein the method further comprises: in response to the target coding mode being a second coding mode different from the first coding mode, determining the use of the first coding data (e.g. pars. 109 – 114: describing that the system determines whether to use refined motion data or unrefined motion data for processing a subsequent block based on a type of refinement mode used to refine the motion data, the system using unrefined motion data only when the refinement mode is a template based refinement mode reasonably suggesting that the refined data is used when the refinement mode is not a template based refinement mode, wherein it is known those of ordinary skill in the art that motion data may be refined by a number of refinement modes other than template based refinement modes [see, e.g. Huang et al. (US 2020/0389656) (hereinafter Huang), e.g. pars. 24 – 25 and 148 – 152: describing BDOF and MMVD are modes of refinement that are distinctly different from template based refinement modes (DMVR modes)], wherein refined motion data is the equivalent of the second coding data, unrefined motion data is the equivalent of the first coding data, the non-template based refinement mode is the equivalent of the first coding mode, and the template based refinement mode is the equivalent of the second coding mode). Regarding claim 62, Chen teaches all of the limitations of claim 61, as discussed above. Chen further teaches: wherein the second coding data is used when the first coding mode is not a template based refinement mode (e.g. pars. 109 -114: describing that the system uses unrefined motion information for subsequent blocks only when the motion data is refined using a template based refinement mode, reasonably suggesting that refined motion data is used when the motion data is refined using a non-template based refinement mode, wherein the unrefined motion is the equivalent of the first coding data, the refined motion is the equivalent of the second coding data). 
Chen does not explicitly teach: wherein the first coding mode is based on one of the following: an adaptive motion vector resolution prediction (AMVP) candidate-based coding technique, a merge candidate-based coding technique, a combined inter-intra prediction (CIIP) mode, a merge mode with motion vector differences (MMVD), a geometric partitioning mode (GPM), a multi-hypothesis prediction (MHP) mode, a whole-block-based coding technique wherein all samples of the target video unit have the same coding information, wherein the whole-block-based coding technique comprises one of a regular merge mode, a regular adaptive motion vector resolution prediction (AMVP) mode, a combined inter-intra prediction (CIIP) mode, or a multi-hypothesis prediction (MHP) mode, a subblock-based coding technique wherein at least two of sub-blocks in the target video unit have different first coding data, and a subblock-based coding technique, wherein at least two of sub-blocks in the target video unit have different first coding data, and the subblock-based coding technique comprises one of an affine mode, or a subblock-based temporal motion vector prediction (SbTMVP) mode, an intra sub-partitions (ISP) mode, a geometric partitioning mode (GPM), a geometric merge mode (GEO), or a triangular prediction mode (TPM), an inter prediction-based technique, or an intra prediction-based technique and comprises one of an intra coding mode, a matrix weighted intra prediction (MIP) mode, a combined inter-intra prediction (CIIP) mode, an intra sub-partitions (ISP) mode, a linear model (LM) mode, an intra block copy (IBC) mode, or a block-based differential pulse-code modulation (BDPCM). Huang, however, teaches a method for video processing: wherein the wherein the first coding mode is based on one of the following: an adaptive motion vector resolution prediction (AMVP) candidate-based coding technique, a merge candidate-based coding technique, a combined inter-intra prediction (CIIP) mode, a merge mode with motion vector differences (MMVD), a geometric partitioning mode (GPM), a multi-hypothesis prediction (MHP) mode, a whole-block-based coding technique wherein all samples of the target video unit have the same coding information, wherein the whole-block-based coding technique comprises one of a regular merge mode, a regular adaptive motion vector resolution prediction (AMVP) mode, a combined inter-intra prediction (CIIP) mode, or a multi-hypothesis prediction (MHP) mode, a subblock-based coding technique wherein at least two of sub-blocks in the target video unit have different first coding data, and a subblock-based coding technique, wherein at least two of sub-blocks in the target video unit have different first coding data, and the subblock-based coding technique comprises one of an affine mode, or a subblock-based temporal motion vector prediction (SbTMVP) mode, an intra sub-partitions (ISP) mode, a geometric partitioning mode (GPM), a geometric merge mode (GEO), or a triangular prediction mode (TPM), an inter prediction-based technique, or an intra prediction-based technique and comprises one of an intra coding mode, a matrix weighted intra prediction (MIP) mode, a combined inter-intra prediction (CIIP) mode, an intra sub-partitions (ISP) mode, a linear model (LM) mode, an intra block copy (IBC) mode, or a block-based differential pulse-code modulation (BDPCM) (e.g. pars. 
148 – 152: describing that the coding mode is a merge mode with motion vector differences (MMVD), the MMVD mode used to refine motion data of a block). It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Chen by adding the teachings of Huang in order for the first coding mode to be based on a merge mode with motion vector differences. One of ordinary skill in the art would be motivated to make such a modification because the modification improves coding efficiency. Claim(s) 43 – 45 and 53 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2019/0320197) (hereinafter Chen), supported by Huang et al. (US 2020/0389656) (hereinafter Huang) as applied to claim 40 above, and further in view of Xu et al. (US 2020/0045325) (hereinafter Xu). Regarding claim 43, Chen teaches all of the limitations of claim 40, as discussed above. Chen does not explicitly teach: wherein the second coding data comprises refined motion information of the first video unit, and the refined motion information is used for determining parameters in a loop filter process for the video. Xu, however, teaches a method of video processing: wherein the second coding data comprises refined motion information of the first video unit, and the refined motion information is used for determining parameters in a loop filter process for the video (e.g. par. 123: describing that deblock filtering strength is determined based on the refined MV when a block is coded using motion refinement, wherein the deblock filtering strength is the equivalent of the parameters in a loop filter process). It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Chen by adding the teachings of Xu in order for the second coding data comprises refined motion information of the first video unit, and the refined motion information is used for determining parameters in a loop filter process for the video. One of ordinary skill in the art would have therefore been motivated to make such a modification because the modification improves accuracy of the deblocking boundary strength decision (Xu, e.g. par. 98: describing a desire to improve deblocking boundary strength decision accuracy). Turning to claim 44, Chen and Xu teach all of the limitations of claims 40 and 43, as discussed above. Chen does not explicitly teach: wherein the refined motion information is used for deblocking strength determination for the first video unit. Xu, however, teaches a method for video processing: wherein the refined motion information is used for deblocking strength determination for the first video unit ( e.g. par. 123: describing that deblock filtering strength is determined based on the refined MV when a block is coded using motion refinement). It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Chen by adding the teachings of Xu in order for the refined motion information is used for deblocking strength determination for the first video unit. One of ordinary skill in the art would have therefore been motivated to make such a modification because the modification improves accuracy of the deblocking boundary strength decision (Xu, e.g. par. 98: describing a desire to improve deblocking boundary strength decision accuracy). Regarding to claim 45, Chen and Xu teach all of the limitations of claims 40 and 43, as discussed above. 
Chen does not explicitly teach: wherein the first coding data comprises original motion information of the first video unit without refinement, and the original motion information is used for deblocking strength determination for the first video unit. Xu, however, teaches a method for video processing: wherein the first coding data comprises original motion information of the first video unit without refinement, and the original motion information is used for deblocking strength determination for the first video unit (e.g. par. 123: describing that the refined motion information is used for determining deblock filtering strength only when the block is coded using DMVR mode, reasonably suggesting that when the block is not coded using the DMVR mode, unrefined motion is used to determine deblock filtering strength). It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Chen by adding the teachings of Xu in order for the first coding data comprises original motion information of the first video unit without refinement, and the original motion information is used for deblocking strength determination for the first video unit. One of ordinary skill in the art would have therefore been motivated to make such a modification because the modification improves accuracy of the deblocking boundary strength decision (Xu, e.g. par. 98: describing a desire to improve deblocking boundary strength decision accuracy). Turning to claim 53, Chen teaches all of the limitations of claim 40, as discussed above. Chen does not explicitly teach: wherein the refinement process is based on at least one filtering parameter for filtering the first coding data. Xu, however, teaches a method for video processing: wherein the refinement process is based on at least one filtering parameter for filtering the first coding data (e.g. par. 123: describing that deblock filtering strength is determined based on the refined MV). It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Chen by adding the teachings of Xu in order for the refinement process is based on at least one filtering parameter for filtering the first coding data. One of ordinary skill in the art would have therefore been motivated to make such a modification because the modification improves accuracy of the deblocking boundary strength decision (Xu, e.g. par. 98: describing a desire to improve deblocking boundary strength decision accuracy). Claim(s) 50 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2019/0320197) (hereinafter Chen) as applied to claim 40 above, and further in view of Wang et al. (US 11290736) (hereinafter Wang). Regarding claim 50, Chen teaches all of the limitations of claim 40, as discussed above. Chen does not explicitly teach: wherein the refined motion information comprises a refinement intra prediction mode for the first video unit and is stored for generating an intra most probable mode (MPM) list of the second video unit. 
Wang, however, teaches a method for video processing: wherein the refined motion information comprises a refinement intra prediction mode for the first video unit and is stored for generating an intra most probable mode (MPM) list of the second video unit (col 23, lines 5 – 14: describing that the system uses decoder side intra mode derivation (DIMD), the DIMD mode stored for generating an MPM list for prediction units of the coding unit, wherein the prediction unit is the equivalent of the second video unit and the coding unit is the equivalent of the first video unit). It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Chen by adding the teachings of Wang in order for the refined motion information comprises a refinement intra prediction mode for the first video unit and is stored for generating an intra most probable mode (MPM) list of the second video unit. One of ordinary skill in the art would have therefore been motivated to make such a modification because the modification improves coding efficiency. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANIKA M BRUMFIELD whose telephone number is (571)270-3700. The examiner can normally be reached M-F 8:30 - 5 PM AWS. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. SHANIKA M. BRUMFIELD Examiner Art Unit 2487 /SHANIKA M BRUMFIELD/Examiner, Art Unit 2487 /Dave Czekaj/Supervisory Patent Examiner, Art Unit 2487
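
The core technical dispute in this round is how Chen (pars. 109 - 114) is said to treat refined motion data for subsequent blocks. The sketch below is a minimal rendering of the decision logic as the Office Action characterizes it, useful when mapping claim 40's "determining" step against the reference; the function name, mode labels, and data shapes are illustrative assumptions, not Chen's or the application's actual implementation.

```python
# Sketch of the determination the Office Action attributes to Chen pars. 109 - 114:
# refined motion data is withheld from subsequent blocks only when it was produced
# by a template-based (DMVR-style) refinement mode; for other refinement modes
# (e.g., BDOF or MMVD, per Huang), the refined data propagates. Names are illustrative.
TEMPLATE_BASED_MODES = {"DMVR"}            # template-based refinement (per Chen par. 109)
OTHER_REFINEMENT_MODES = {"BDOF", "MMVD"}  # non-template refinement (per Huang)

def motion_data_for_subsequent_block(unrefined_mv, refined_mv, refinement_mode):
    """Return the motion data a subsequent block would use for prediction."""
    if refinement_mode in TEMPLATE_BASED_MODES:
        # Refined MVs from template-based refinement are not reused downstream.
        return unrefined_mv
    # Refinement by a non-template mode: the refined MV is used for following blocks.
    return refined_mv

# Example: a block refined with MMVD propagates its refined MV; DMVR does not.
print(motion_data_for_subsequent_block((1, 2), (1, 3), "MMVD"))  # -> (1, 3)
print(motion_data_for_subsequent_block((1, 2), (1, 3), "DMVR"))  # -> (1, 2)
```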

Prosecution Timeline

Oct 19, 2023: Application Filed
Apr 19, 2025: Non-Final Rejection — §103
Jul 24, 2025: Response Filed
Oct 03, 2025: Final Rejection — §103
Dec 08, 2025: Response after Non-Final Action
Jan 08, 2026: Request for Continued Examination
Jan 25, 2026: Response after Non-Final Action
Mar 03, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12598369: SURFACE TOPOGRAPHY MEASUREMENT SYSTEMS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591125: Microscopy System and Method for Checking a Rotational Position of a Microscope Camera (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587642: ENCODING METHOD, DECODING METHOD, CODE STREAM, ENCODER, DECODER AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581070: EDGE OFFSET FOR CROSS COMPONENT SAMPLE ADAPTIVE OFFSET (CCSAO) FILTER (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581090: QUANTIZATION PARAMETER FOR CHROMA DEBLOCKING FILTERING (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 82% (+14.0%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 386 resolved cases by this examiner. Grant probability derived from career allow rate.
