Prosecution Insights
Last updated: April 19, 2026
Application No. 17/645,233

BI-DIRECTIONAL OPTICAL FLOW IN VIDEO CODING

Final Rejection — §102, §103
Filed: Dec 20, 2021
Examiner: JEBARI, MOHAMMED
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Qualcomm Incorporated
OA Round: 4 (Final)

Grant Probability: 55% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 3y 9m
Grant Probability with Interview: 71%

Examiner Intelligence

Career Allow Rate: 55% (266 granted / 487 resolved; -3.4% vs TC avg)
Interview Lift: strong, +16.4% (resolved cases with interview vs without)
Typical Timeline: 3y 9m average prosecution; 46 applications currently pending
Career History: 533 total applications across all art units

Statute-Specific Performance

§101: 4.4% (-35.6% vs TC avg)
§102: 18.2% (-21.8% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§112: 17.2% (-22.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 487 resolved cases

Office Action

Grounds of rejection: §102, §103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

2. Applicant's arguments filed 11/17/2025 have been fully considered but they are not persuasive.

On page 17 of the amendment, Applicant explained that claims 25-31 are directed to a “device for encoding video data,” whereas claims 9-15 are directed to a “device for decoding video data.” Although the preamble of claim 9 teaches “a device for decoding video data” and the preamble of claim 25 teaches “a device for encoding video data,” the steps of claim 9 are identical to the steps of claim 25. Claim 25 should teach a limitation directed to encoding video data. Therefore, the objection is maintained.

On pages 18-19 of the amendment, Applicant argued that Ye fails to disclose the feature(s) of “determining, for each sub-block of the one or more sub-blocks of the plurality of sub-blocks and based on the respective distortion value, between applying per-pixel BDOF and bypassing BDOF.” However, the Examiner respectfully disagrees. Ye teaches this limitation: FIG. 24 shows that the distortion (i.e., D) for each sub-block inside the current CU is calculated and used to determine whether to perform or to skip BIO-based motion refinement at the sub-block level (paragraph 0123, the distortion for each sub-block inside the current CU may be calculated and used to determine whether to skip the BIO process at the sub-block level; see also paragraphs 0120 and 0124).

Claim Objections

3.
Applicant is advised that should claims 9-15 be found allowable, claims 25-31 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 102

4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

5. Claim(s) 1-3, 5-11, 13-21, 23, 25-27 and 29-31 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ye et al. (US 2020/0221122), published on 01/10/2019 as WO 2019/010156, hereinafter “Ye”.

As per claim 1, Ye discloses a method of decoding video data, the method comprising: determining that block-level bi-directional optical flow (BDOF) is enabled for a block of the video data (paragraph 0003, A device for performing video data coding may be configured to determine whether to enable or disable bi-directional optical flow (BIO) for a current coding unit (e.g., a block and/or a sub-block); see also paragraph 0107, BIO may be applied at the regular MC process for a current CU…for example when the BIO is enabled); dividing the block into a plurality of sub-blocks based on the determination that block-level BDOF is enabled for the block (paragraph 0107, BIO may be applied at the regular MC process for a current CU coded with a sub-block mode (e.g., FRUC, affine mode, ATMVP, and/or STMVP).
For the CUs coded by one or more, or any, of those sub-block modes, the CU may further split into one or more, or multiple, sub-blocks and one or more, or each, sub-block may be assigned one or more unique motion vectors (e.g., uni-prediction and/or bi-prediction). Perhaps for example when the BIO is enabled, the decision on whether to apply the BIO or not and/or the BIO operation itself may be performed separately for one or more, or each, of the sub-blocks; see also paragraphs 0109 and 0124); determining, for each sub-block of one or more sub-blocks of the plurality of sub-blocks, respective distortion values (paragraph 0123, the distortion for each sub-block inside the current CU may be calculated); determining, for each sub-block of the one or more sub-blocks of the plurality of sub-blocks and based on the respective distortion value, between applying per-pixel BDOF and bypassing BDOF (FIG. 24 shows that the distortion (i.e., D) for each sub-block inside the current CU is calculated and used to determine performing BIO-based motion refinement at the sub-block level, also said distortion is used to determine skipping BIO-based motion refinement at the sub-block level; paragraph 0123, the distortion for each sub-block inside the current CU may be calculated and used to determine whether to skip the BIO process at the sub-block level; see also paragraphs 0120 and 0124), wherein applying, based on a determination to apply per-pixel BDOF to a sub-block of the one or more sub-blocks, per-pixel BDOF (paragraph 0056, Bi-directional optical flow (BIO) may be applied to compensate such motion for one or more, or every, sample inside at least one block…The BIO may be a sample-wise motion refinement) comprises applying a first motion refinement vector to a first pixel in the sub-block and applying a second motion refinement vector to a second pixel in the sub-block, where the first motion refinement vector and the second motion refinement vector are different (paragraph 0056, the motion refinement (v.sub.x, v.sub.y) at (x,y) can be derived by equation (1); thus, it is clear from equation (1) that a motion refinement (v.sub.x1, v.sub.y1) at (x1,y1) is different from the motion refinement (v.sub.x, v.sub.y) at (x,y) in the case (x1,y1) is different from (x,y)); determining prediction samples for each sub-block of the one or more sub-blocks based on the determination between applying per-pixel BDOF and bypassing BDOF (see equation (2) in paragraph 0056; see also fig. 24, which shows a prediction generation process after the BIO is performed and after the BIO is skipped; paragraphs 0111, 0124-0125); and reconstructing the block based on the prediction samples (paragraph 0003, the current coding unit may be reconstructed with BIO disabled when the two prediction signals are determined to be similar).
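The decision the rejection turns on — computing a distortion value for each sub-block and choosing, per sub-block, between applying per-pixel BDOF and bypassing BDOF — can be sketched as below. This is a minimal illustrative model, not code from Ye or from the application; the SAD distortion measure, the threshold value, and all names (`sad`, `decide_bdof_per_subblock`) are assumptions for illustration only.

```python
import numpy as np

def sad(pred0: np.ndarray, pred1: np.ndarray) -> int:
    """Sum of absolute differences between the two prediction signals."""
    return int(np.abs(pred0.astype(np.int64) - pred1.astype(np.int64)).sum())

def decide_bdof_per_subblock(pred0, pred1, sb_h=4, sb_w=4, threshold=64):
    """For each sub-block of the block, compute a distortion value and decide
    between applying per-pixel BDOF and bypassing BDOF (cf. Ye, FIG. 24 and
    paragraph 0123: distortion per sub-block decides whether BIO is skipped)."""
    h, w = pred0.shape
    decisions = {}
    for y in range(0, h, sb_h):
        for x in range(0, w, sb_w):
            d = sad(pred0[y:y+sb_h, x:x+sb_w], pred1[y:y+sb_h, x:x+sb_w])
            # Low distortion means the two prediction signals are already
            # similar, so the per-pixel refinement is bypassed for this sub-block.
            decisions[(y, x)] = "apply" if d > threshold else "bypass"
    return decisions

# 8x8 block split into four 4x4 sub-blocks; only the top-left one differs.
pred0 = np.zeros((8, 8), dtype=np.int32)
pred1 = np.zeros((8, 8), dtype=np.int32)
pred1[:4, :4] = 10
print(decide_bdof_per_subblock(pred0, pred1))
```

The top-left sub-block has SAD 160 and gets per-pixel BDOF; the other three have zero distortion and bypass it, matching the per-sub-block granularity the claim recites.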
As per claim 2, Ye discloses the method of claim 1, wherein determining, for each sub-block of one or more sub-blocks of the plurality of sub-blocks, respective distortion values comprises: for a first sub-block of the one or more sub-blocks, determining a first distortion value of the respective distortion values; and for a second sub-block of the one or more sub-blocks, determining a second distortion value of the respective distortion values (paragraph 0123, the distortion for each sub-block inside the current CU may be calculated; see also paragraph 0124), wherein determining between applying per-pixel BDOF and bypassing BDOF for each sub-block of the one or more sub-blocks of the plurality of sub-blocks based on the respective distortion values (FIG. 24 shows that the distortion (i.e., D) for each sub-block inside the current CU is calculated and used to determine performing BIO-based motion refinement at the sub-block level, also said distortion is used to determine skipping BIO-based motion refinement at the sub-block level; paragraph 0123, the distortion for each sub-block inside the current CU may be calculated and used to determine whether to skip the BIO process at the sub-block level; see also paragraphs 0120 and 0124) comprises: for the first sub-block of the plurality of sub-blocks, determining that per-pixel BDOF is to be applied for the first sub-block based on the first distortion value (paragraph 0124, the distortion for each sub-block inside the sub-block group may be calculated and used to determine whether to skip the BIO process for the sub-block); based on the determination that per-pixel BDOF is to be applied for the first sub-block, determining per-pixel motion refinement for refining a first set of prediction samples for the first sub-block (paragraph 0105, the regular MC may be applied to generate the motion-compensated prediction signal (e.g., Pred.sub.i) for one or more, or each, sub-block inside the CU. 
Perhaps if the BIO is used, for example, the BIO-based motion refinement may be performed to obtain the modified prediction signal Pred.sup.BIO.sub.i for the sub-block; see also paragraphs 0056 and 0106, which teach that the BIO may be a sample-wise motion refinement); for the second sub-block of the plurality of sub-blocks, determining that BDOF is to be bypassed based on the second distortion value (paragraph 0124, the distortion for each sub-block inside the sub-block group may be calculated and used to determine whether to skip the BIO process for the sub-block); and based on the determination that BDOF is to be bypassed for the second block, bypassing determining per-pixel motion refinement for refining a second set of prediction samples for the second sub-block (see fig. 21 and paragraphs 0105-0106, when BIO is used, the BIO-based motion refinement may be performed to obtain the modified prediction signal Pred.sup.BIO.sub.i for the sub-block, and when the OBMC is used, for example, it may be performed for one or more, or each, sub-block of the CU by following the same procedure(s) as described herein to generate the corresponding OBMC prediction signal…a prediction generation process after the OBMC, which may be performed without the BIO), and wherein determining the prediction samples for each sub-block of the one or more sub-blocks based on the determination between applying per-pixel BDOF and bypassing BDOF (see equation (2) in paragraph 0056; see also fig. 24 which shows a prediction generation process after the BIO is performed and after the BIO is skipped; paragraphs 0111, 0124-0125) comprises: for the first sub-block, determining the refined first set of prediction samples of the first sub-block based on the per-pixel motion refinement for the first sub-block (as shown in fig. 
24, the modified prediction signal is obtained based on performing BIO-based motion refinement); and for the second sub-block, determining the second set of prediction samples without refining the second set of prediction samples based on the per-pixel motion refinement for refining the second set of prediction samples (paragraph 0122, a multi-stage early termination may be performed, where the BIO process may be skipped based on the distortion values calculated from different block levels; this means that the BIO-based motion refinement is skipped as shown in fig. 24).

As per claim 3, Ye discloses the method of claim 1, wherein determining between applying per-pixel BDOF and bypassing BDOF for each sub-block of the one or more sub-blocks of the plurality of sub-blocks based on the respective distortion values (FIG. 24 shows that the distortion (i.e., D) for each sub-block inside the current CU is calculated and used to determine performing BIO-based motion refinement at the sub-block level, also said distortion is used to determine skipping BIO-based motion refinement at the sub-block level; paragraph 0123, the distortion for each sub-block inside the current CU may be calculated and used to determine whether to skip the BIO process at the sub-block level; see also paragraphs 0120 and 0124) comprises determining that per-pixel BDOF is to be applied for a first sub-block of the one or more sub-blocks (BIO is applied for the i-th sub-block as shown in fig. 24, wherein the BIO can be a sample-wise motion refinement and may be applied to the one or more prediction samples of the sub-blocks as taught in paragraphs 0056, 0105-0106 and 0129), the method further comprising determining, for each sample in the first sub-block, respective motion refinements (paragraph 0105, the regular MC may be applied to generate the motion-compensated prediction signal (e.g., Pred.sub.i) for one or more, or each, sub-block inside the CU.
Perhaps if the BIO is used, for example, the BIO-based motion refinement may be performed to obtain the modified prediction signal Pred.sup.BIO.sub.i for the sub-block; see also paragraphs 0056 and 0106, which teach that the derivation of BIO-based motion refinement may be a sample-based operation), and wherein determining the prediction samples for each sub-block of the one or more sub-blocks based on the determination that per-pixel BDOF is to be applied comprises determining, for each sample in the first sub-block, respective refined sample values from samples in a prediction block for the first sub-block based on the respective motion refinements (see equation (2) in paragraph 0056; see also fig. 24, which shows a prediction generation process after the BIO is performed; paragraph 0106).

As per claim 5, Ye discloses the method of claim 1, further comprising: determining a first set of sample values in a first reference block for a first sub-block of the one or more sub-blocks (i.e., I.sup.(0)(x,y), see paragraphs 0119-0120); scaling the first set of sample values with a scale factor to generate a first set of scaled sample values (i.e., I.sub.h.sup.(0)(x,y), see paragraphs 0126-0127); determining a second set of sample values in a second reference block for the first sub-block of the one or more sub-blocks (i.e., I.sup.(1)(x,y), see paragraphs 0119-0120); and scaling the second set of sample values with the scale factor to generate a second set of scaled sample values (i.e., I.sub.h.sup.(1)(x,y), see paragraphs 0126-0127), wherein determining, for each sub-block of one or more sub-blocks of the plurality of sub-blocks, the respective distortion values comprises determining, for the first sub-block, a distortion value of the respective distortion values based on the first set of scaled sample values and the second set of scaled sample values (see equation (33) in paragraph 0127).
As per claim 6, Ye discloses the method of claim 5, wherein determining between applying per-pixel BDOF and bypassing BDOF for each sub-block of the one or more sub-blocks of the plurality of sub-blocks based on the respective distortion values (FIG. 24 shows that the distortion (i.e., D) for each sub-block inside the current CU is calculated and used to determine performing BIO-based motion refinement at the sub-block level, also said distortion is used to determine skipping BIO-based motion refinement at the sub-block level; paragraph 0123, the distortion for each sub-block inside the current CU may be calculated and used to determine whether to skip the BIO process at the sub-block level; see also paragraphs 0120 and 0124) comprises determining that per-pixel BDOF is to be applied for the first sub-block (BIO is applied for the i-th sub-block as shown in fig. 24, wherein the BIO can be a sample-wise motion refinement and may be applied to the one or more prediction samples of the sub-blocks as taught in paragraphs 0056, 0105-0106 and 0129), the method further comprising reusing the first set of scaled sample values and the second set of scaled sample values for determining per-pixel motion refinement for per-pixel BDOF (paragraphs 0126-0128, the generated first and second scaled sample values I.sub.h.sup.(0)(x,y) and I.sub.h.sup.(1)(x,y) are reused, for instance in equation (33), which is part of the BIO process, wherein the BIO process provides a sample-wise motion refinement and may be applied to the one or more prediction samples of the sub-blocks as taught in paragraphs 0056, 0105-0106 and 0129).

As per claim 7, arguments analogous to those applied for claim 6 are applicable for claim 7.
As per claim 8, Ye discloses the method of claim 1, wherein reconstructing the block comprises: receiving residual values indicative of a difference between the prediction samples and samples of the block; and adding the residual values to the prediction samples to reconstruct the block (see fig. 2 and paragraph 0055).

As per claims 9-11 and 13-16, arguments analogous to those applied for claims 1-3 and 5-8 are applicable for claims 9-11 and 13-16; in addition, Ye discloses using a memory configured to store the video data, and processing circuitry coupled to the memory and configured to perform the claimed method (paragraph 0185).

As per claim 17, Ye discloses the device of claim 9, further comprising a display configured to display decoded video data (see fig. 2 and paragraph 0055).

As per claim 18, Ye discloses the device of claim 9, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box (paragraph 0132).

As per claims 19-21 and 23, arguments analogous to those applied for claims 1-3 and 5 are applicable for claims 19-21 and 23; in addition, Ye discloses a computer-readable storage medium storing instructions thereon that, when executed, cause one or more processors to perform the claimed method (paragraph 0185).

As per claims 25-27 and 29-31, arguments analogous to those applied for claims 1-3 and 5-7 are applicable for claims 25-27 and 29-31; in addition, Ye discloses using a memory configured to store the video data, and processing circuitry coupled to the memory and configured to perform the claimed method (paragraph 0185).

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

8. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

9. Claim(s) 4, 12, 22 and 28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. (US 2020/0221122) in view of Zhang et al. (US 2022/0264086), hereinafter “Zhang”.
As per claim 4, Ye discloses the method of claim 1, further comprising: performing a left-shift operation on the intermediate value based on a second scale factor to generate a threshold value (paragraph 0127, see equation (35)); and comparing a distortion value of the respective distortion values for the first sub-block with the threshold value, wherein determining between applying per-pixel BDOF and bypassing BDOF for each sub-block of the one or more sub-blocks of the plurality of sub-blocks based on the respective distortion values comprises determining between applying per-pixel BDOF and bypassing BDOF for the first sub-block based on the comparison (FIG. 24 shows that the distortion (i.e., D) for each sub-block inside the current CU is calculated, compared to a threshold, and said comparison is used to determine performing BIO-based motion refinement at the sub-block level, also said comparison is used to determine skipping BIO-based motion refinement at the sub-block level; see also paragraphs 0120, 0122, 0124, the BIO process may be conditionally skipped for the current CU or the sub-blocks inside the current CU whose distortion between its two prediction signals may be no larger than a threshold. The calculation of the distortion measurement and the BIO process may be performed on a sub-block basis and may be invoked frequently for the sub-blocks in the current CU).

While Ye defines thresholds based on desired coding performance (bitdepth or gradient as taught in paragraphs 0120, 0127 and 0129), Ye does not explicitly disclose defining thresholds based on sub-block size, specifically, multiplying a width of a first sub-block of the one or more sub-blocks, a height of the first sub-block of the one or more sub-blocks, and a first scale factor to generate an intermediate value.
In the same field of endeavor, Zhang teaches determining whether to perform BDOF based on comparing a distortion value for a sub-block with a threshold value, and defines the threshold value based on sub-block size (paragraph 0259, When the SAD value is smaller than a threshold (2*subblock width*subblock height), there is no need to perform BDOF anymore).

One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Ye with those of Zhang, because both references are drawn to the same field of endeavor, because both references describe determining whether to perform BDOF based on comparing a distortion value for a sub-block with a threshold value, and because such a combination represents a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Ye and Zhang used in this Office Action unless otherwise noted.

As per claims 12, 22 and 28, arguments analogous to those applied for claim 4 are applicable for claims 12, 22 and 28.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED JEBARI, whose telephone number is (571) 270-7945. The examiner can normally be reached Mon-Fri, 9:00 am-6:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Kelley, can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMED JEBARI/
Primary Examiner, Art Unit 2482
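The threshold construction disputed under claim 4 — multiplying sub-block width, height, and a first scale factor to form an intermediate value, left-shifting it by a second scale factor, and comparing the sub-block distortion against the result — can be sketched as follows. This is an illustrative sketch only: the specific scale and shift values are hypothetical, and the only value taken from the record is Zhang's example threshold of 2 * subblock width * subblock height (paragraph 0259).

```python
def bdof_threshold(width: int, height: int, scale: int = 2, shift: int = 0) -> int:
    """Size-dependent early-termination threshold: (width * height * scale) << shift.
    With scale=2 and shift=0 this matches Zhang's cited example threshold of
    2 * subblock_width * subblock_height (Zhang, paragraph 0259)."""
    intermediate = width * height * scale   # first scale factor (claimed "intermediate value")
    return intermediate << shift            # left-shift by the second scale factor

def should_bypass_bdof(distortion: int, width: int, height: int) -> bool:
    """Bypass BDOF for a sub-block when its distortion (e.g., SAD between the
    two prediction signals) falls below the size-dependent threshold."""
    return distortion < bdof_threshold(width, height)

# A 4x4 sub-block: threshold = 2 * 4 * 4 = 32
print(bdof_threshold(4, 4))          # 32
print(should_bypass_bdof(16, 4, 4))  # True  (16 < 32: skip BDOF)
print(should_bypass_bdof(40, 4, 4))  # False (40 >= 32: apply BDOF)
```

Tying the threshold to sub-block area keeps the skip decision scale-invariant: larger sub-blocks accumulate proportionally more absolute difference, so a fixed threshold would skip small blocks too rarely and large blocks too often.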

Prosecution Timeline

Dec 20, 2021
Application Filed
Nov 11, 2024
Non-Final Rejection — §102, §103
Feb 14, 2025
Response Filed
May 21, 2025
Final Rejection — §102, §103
Aug 11, 2025
Request for Continued Examination
Aug 14, 2025
Response after Non-Final Action
Aug 23, 2025
Non-Final Rejection — §102, §103
Nov 17, 2025
Response Filed
Mar 10, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598337: DYNAMIC AIRPLANE VIDEO-ON-DEMAND BANDWIDTH MANAGEMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593134: CYLINDRICAL PANORAMA HARDWARE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12584763: ENVIRONMENT MAP GENERATION PROGRAM AND THREE-DIMENSIONAL SENSOR CONTROL DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12574506: METHOD AND DEVICE FOR CODING IMAGE ON BASIS OF INTER PREDICTION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12568208: IMAGE AND VIDEO CODING USING MACHINE LEARNING PREDICTION CODING MODELS (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 55%
With Interview: 71% (+16.4%)
Median Time to Grant: 3y 9m
PTA Risk: High
Based on 487 resolved cases by this examiner. Grant probability derived from career allow rate.
