Prosecution Insights
Last updated: April 18, 2026
Application No. 18/122,692

INTER CODING FOR ADAPTIVE RESOLUTION VIDEO CODING

Final Rejection §103
Filed: Mar 16, 2023
Examiner: LOTFI, KYLE M
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Alibaba Group Holding Limited
OA Round: 3 (Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 4-5
Time to Grant: 2y 8m
Grant Probability with Interview: 71%

Examiner Intelligence

Career Allow Rate: 64% of resolved cases (226 granted / 355 resolved; +5.7% vs TC avg)
Interview Lift: +7.2% for resolved cases with interview (moderate lift)
Typical Timeline: 2y 8m average prosecution
Career History: 377 total applications across all art units (22 currently pending)

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 25.8% (-14.2% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 355 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, filed 3/17/2026, with respect to the rejections of claims 1-3 and 9-11 under 35 USC 102(a)(1) have been fully considered and are persuasive. Specifically, the Examiner is persuaded that Wu does not disclose: locating a predictor block of the reference picture in accordance with a translated inter predictor or motion vector by rounding a translated coordinate to either a level of accuracy supported by a video decoder or a lower-granularity level of accuracy. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of the newly found prior art, US 2020/0177908 A1, hereafter "Lee".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Wu, US 2016/0119642 A1, in view of Lee, US 2020/0177908 A1.
Regarding claim 1, Wu discloses: a method comprising: obtaining a current frame of a bitstream; obtaining a reference picture from a reference frame buffer, the reference picture having a resolution different from a resolution of the current frame (see [0005]: "Hence, when a current frame is encoded as an inter-frame, the resolution of the current frame can be different from the resolution of a reference frame used for inter prediction."); translating one or more inter predictors of a current block of the current frame and/or translating one or more motion vectors of the current block of the current frame (see [0005]: "Therefore, the block size of the current block in the current frame is scaled to map to a block size in the reference frame, a motion vector found within a search area in the current frame is scaled to map to a motion vector in the reference frame, and a search area size in the current frame is scaled to map to an effective search area size in the reference frame." See also figure 2 and [0019]: "In accordance with the RRF feature specified in VP9 coding standard, the block size of the block BK in the current frame 202 is scaled to map to a block size of a mapped block BK1 in the reference frame 201."); and performing motion prediction on the current block by reference to the located predictor block of the reference picture (see figure 2, as disclosed in [0018]: "Hence, the block BK in the current frame 202 is encoded with pixel information of a reference frame 201.").

Wu discloses the above limitations, but does not disclose the following limitation in its entirety: locating a predictor block of the reference picture in accordance with a translated inter predictor or motion vector by rounding a translated coordinate to either a level of accuracy supported by a video decoder or a lower-granularity level of accuracy.

However, in an analogous art directed to determining a prediction motion vector (PMV) for a current block, Lee discloses downscaling a default motion vector and further discloses rounding the x- and y-coordinates of the downscaled MV to an integer pixel value when the MV does not otherwise indicate an integer value. See Lee [0405]. It would have been obvious to one having ordinary skill in the art before the time of the Applicant's effective filing date to incorporate the feature of rounding a downscaled MV to integer x- and y-coordinate values, as disclosed in Lee, in order to improve prediction accuracy or prevent ambiguities in motion prediction. Incorporating this feature would have entailed combining the elements respectively disclosed in Wu and in Lee, with no change to their respective functioning, and the results would have been predictable for one having ordinary skill in the art. KSR Int'l Co. v. Teleflex Inc., 550 U.S. at 416, 82 USPQ2d at 1395; see MPEP 2143.I.A.

Regarding claim 2, the combination of Wu in view of Lee discloses the limitations of claim 1, upon which claim 2 depends. This combination, specifically Wu, further discloses: the method of claim 1, wherein translating the one or more inter predictors is performed in accordance with a ratio of the resolution of the current frame and the resolution of the reference pictures (see [0020]); and inputting the reconstructed frame into the reference frame buffer as a reference picture (see [0005]).

Regarding claim 3, the combination of Wu in view of Lee discloses the limitations of claim 1, upon which claim 3 depends.
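The operation the rejection attributes to the Wu/Lee combination — scaling a motion vector by the resolution ratio between frames, then rounding the scaled coordinates to a precision the decoder supports — can be sketched as follows. This is a minimal illustrative sketch only; the function name, the tuple-based MV representation, and the quarter-pel default are assumptions, not code or notation from any cited reference:

```python
from fractions import Fraction

def scale_and_round_mv(mv, cur_res, ref_res, precision=4):
    """Scale a motion vector (mv_x, mv_y) from the current frame's
    coordinate space into the reference frame's, then round each
    coordinate to the nearest multiple of 1/precision pel.

    precision=1 rounds to integer-pel accuracy (as in Lee's rounding
    to integer pixel values); precision=4 rounds to quarter-pel.
    """
    # Per-axis resolution ratio between reference and current frame.
    rx = Fraction(ref_res[0], cur_res[0])
    ry = Fraction(ref_res[1], cur_res[1])
    scaled = (mv[0] * rx, mv[1] * ry)
    # Round each scaled coordinate to the supported level of accuracy.
    return tuple(round(c * precision) / precision for c in scaled)

# Current frame 1920x1080, half-resolution reference 960x540:
# the MV is halved, then snapped to quarter-pel positions.
print(scale_and_round_mv((7.0, -3.5), (1920, 1080), (960, 540)))  # → (3.5, -1.75)
```

Passing precision=1 instead illustrates the "lower-granularity level of accuracy" branch of the claim language, collapsing fractional results to whole-pixel positions.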
This combination, specifically Wu, further discloses: the method of claim 1, wherein translating the one or more motion vectors is performed in accordance with a ratio of the resolution of the current frame and the resolution of the one or more reference pictures, to match the resolution of the current frame (see [0019]-[0020], disclosing formulas for scaling and mapping a block between a reference frame and a current frame according to a resolution ratio between the two); and further comprising: inputting the reconstructed frame into the reference frame buffer as a reference picture (see [0018], disclosing that a reconstructed block is used as a reference frame 201 for a current frame 202).

System claims 9-11 correspond, respectively, to method claims 1-3, and are rejected for the same reasons of obviousness as given above with respect to method claims 1-3.

Claims 4-8 and 12-16 are rejected under 35 U.S.C. 103 as being unpatentable over Wu, in view of Lee, in view of Chuang II, US 2020/0077111 A1.

Regarding claim 4, the combination of Wu in view of Lee discloses the limitations of claim 1, upon which claim 4 depends. Wu does not disclose: the method of claim 1, further comprising deriving an affine merge candidate list or an AMVP candidate list for a block of the current frame, the affine merge candidate list or the AMVP candidate list comprising a plurality of CPMVP candidates or AMVP candidates, respectively. However, Chuang II discloses this limitation in an analogous art. Chuang II discloses deriving an affine Merge candidate list for a current block, comprising control point motion vector predictors and AMVP candidates.
It would have been obvious to one having ordinary skill in the art before the time of the Applicant's effective filing date to incorporate affine merge motion candidates into a candidate list for a current block for a current frame, as disclosed in Chuang II, in order to allow for prediction from reference blocks based on rotations and deformations, as disclosed in [0006]-[0007].

Regarding claim 5, the combination of Wu, in view of Lee, in view of Chuang II discloses the limitations of claim 4, upon which claim 5 depends. This combination, specifically Chuang II, further discloses: the method of claim 4, wherein deriving the affine merge candidate list or the AMVP candidate list comprises deriving up to two inherited affine merge candidates (see Chuang II [0024]).

Regarding claim 6, the combination of Wu, in view of Lee, in view of Chuang II discloses the limitations of claim 4, upon which claim 6 depends. This combination, specifically Chuang II, further discloses: the method of claim 4, wherein deriving the affine merge candidate list or the AMVP candidate list comprises deriving a constructed affine merge candidate (see [0029], which discloses an affine sub-block MV derivation for inter coding).

Regarding claim 7, the combination of Wu, in view of Lee, in view of Chuang II discloses the limitations of claim 4, upon which claim 7 depends. This combination, specifically Chuang II, further discloses: the method of claim 4, further comprising: selecting a CPMVP candidate or AMVP candidate from the derived affine merge candidate list or AMVP candidate list, respectively (see [0013]); and deriving motion information of the CPMVP candidate or the AMVP candidate as motion information of the block of the current frame (see figure 5, step 530, which is disclosed in [0057] as a conversion process to generate motion vectors; see the last five lines: "The current block or motion information of the current block is encoded using said one more converted MVs at the video encoder side or the current block or the motion information of the current block is decoded using said one or more converted MVs at the video decoder side in step 540.").

Regarding claim 8, the combination of Wu, in view of Lee, in view of Chuang II discloses the limitations of claim 7, upon which claim 8 depends. This combination, specifically Chuang II, further discloses: the method of claim 7, wherein the motion information comprises a reference to a reference picture (see [0006], disclosing that affine prediction is a form of inter prediction), and deriving motion information of the CPMVP candidate or the AMVP candidate further comprises: generating a plurality of control point motion vectors (CPMVs) based on the reference to motion information of a reference picture (see [0039]-[0040], disclosing deriving sets of three control point motion vectors when a 6-parameter affine motion model is used).

System claims 12-16 correspond, respectively, to method claims 4-8, and are rejected for the same reasons of obviousness given above for method claims 4-8, respectively.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Wu 2, US 2017/0105018 A1, in further view of Lee.
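The 6-parameter affine motion model cited above for Chuang II (three control point motion vectors driving a per-position MV derivation) can be illustrated with a simplified floating-point sketch. This follows the standard textbook formulation of the 6-parameter model, not any code or fixed-point arithmetic from the cited references; the function name and tuple representation are assumptions:

```python
def affine_subblock_mv(cpmv0, cpmv1, cpmv2, x, y, w, h):
    """Six-parameter affine model: derive the MV at position (x, y)
    inside a w x h block from three control-point MVs placed at the
    top-left (cpmv0), top-right (cpmv1), and bottom-left (cpmv2)
    corners. Rotations, zooms, and shears fall out of the differences
    between the control-point MVs.
    """
    mvx = cpmv0[0] + (cpmv1[0] - cpmv0[0]) * x / w + (cpmv2[0] - cpmv0[0]) * y / h
    mvy = cpmv0[1] + (cpmv1[1] - cpmv0[1]) * x / w + (cpmv2[1] - cpmv0[1]) * y / h
    return (mvx, mvy)

# Degenerate case: when all three CPMVs are equal the model reduces
# to pure translation, so every sub-block gets the same MV.
print(affine_subblock_mv((2.0, 1.0), (2.0, 1.0), (2.0, 1.0), 8, 8, 16, 16))  # → (2.0, 1.0)
```

In a real codec this derivation runs per sub-block (typically 4x4) with fixed-point MVs, which is where rounding to a supported MV precision re-enters the picture.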
Regarding claim 17, Wu 2 discloses: a system comprising: one or more processors and memory communicatively coupled to the one or more processors, the memory storing computer-executable modules executable by the one or more processors that, when executed by the one or more processors, perform associated operations, the computer-executable modules including: a frame obtaining module configured to obtain a current frame of a bitstream (see non-transitory storage 112); and a reference picture obtaining module configured to obtain one or more reference pictures from a reference frame buffer, compare resolutions of the one or more reference pictures, and determine that reference pictures having a same resolution as a resolution of a current frame are not available (see [0059], which discloses, with respect to three reference image frame cases, that comparison of the resolution of the reference image frame with the resolution of the target frame is necessary to determine whether to adjust the resolution of the reference image frame).

Wu 2 discloses the above limitations, but does not disclose the following limitation in its entirety: locating a predictor block of the reference picture in accordance with a translated inter predictor or motion vector by rounding a translated coordinate to either a level of accuracy supported by a video decoder or a lower-granularity level of accuracy.

However, in an analogous art directed to determining a prediction motion vector (PMV) for a current block, Lee discloses downscaling a default motion vector and further discloses rounding the x- and y-coordinates of the downscaled MV to an integer pixel value when the MV does not otherwise indicate an integer value. See Lee [0405].
It would have been obvious to one having ordinary skill in the art before the time of the Applicant's effective filing date to incorporate the feature of rounding a downscaled MV to integer x- and y-coordinate values, as disclosed in Lee, in order to improve prediction accuracy or prevent ambiguities in motion prediction. Incorporating this feature would have entailed combining the elements respectively disclosed in Wu 2 and in Lee, with no change to their respective functioning, and the results would have been predictable for one having ordinary skill in the art. KSR Int'l Co. v. Teleflex Inc., 550 U.S. at 416, 82 USPQ2d at 1395; see MPEP 2143.I.A.

Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu 2, in view of Lee, in further view of Gao, US 2020/0374514 A1.

Regarding claim 18, the combination of Wu 2 in view of Lee discloses the limitations of claim 17, upon which claim 18 depends. Wu 2 does not disclose: a bi-predicting module configured to perform bi-prediction upon the current frame based on a first reference frame and a second reference frame of the reference frame buffer. However, Gao discloses bi-prediction in an analogous art (see [0045], "bi-prediction"; see also [0027] and the disclosed "first prediction unit 206", which performs intra- and inter-frame prediction, of which bi-directional prediction is a type). It would have been obvious to one having ordinary skill in the art before the time of the Applicant's effective filing date to incorporate bi-directional prediction into the disclosure of Wu 2 in view of Lee, because bi-directional prediction was well known in the art and was part of the compression standard upon which Wu 2 is also based. Therefore, incorporating bi-prediction would have entailed nothing more than combining known prior art elements to yield a predictable result. MPEP 2184.I.A.

Regarding claim 19, the combination of Wu 2, in view of Lee, in view of Gao discloses the limitations of claim 18, upon which claim 19 depends.
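Bi-prediction, as claimed for the bi-predicting module of claim 18, forms the final predictor by combining two motion-compensated predictor blocks drawn from different reference frames. A minimal sketch of the unweighted case follows; the function name, the list-of-rows block representation, and the integer rounding convention are assumptions for illustration, not details from Gao:

```python
def bi_predict(pred0, pred1):
    """Unweighted bi-prediction: average two motion-compensated
    predictor blocks (same shape, given as lists of rows) sample by
    sample, with +1 rounding offset before the integer halving, a
    common convention in codec implementations.
    """
    return [[(a + b + 1) // 2 for a, b in zip(r0, r1)]
            for r0, r1 in zip(pred0, pred1)]

# Two 2x2 predictor blocks from a past and a future reference frame.
p0 = [[100, 102], [104, 106]]
p1 = [[110, 100], [100, 110]]
print(bi_predict(p0, p1))  # → [[105, 101], [102, 108]]
```

Weighted bi-prediction generalizes this by replacing the fixed 1/2-1/2 split with per-reference weights; the claim language here only requires the two-reference case.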
This combination, specifically Gao, further discloses: the system of claim 18, further comprising: a vector refinement module configured to perform vector refinement during the bi-prediction process based on a first reference frame and a second reference frame of the reference frame buffer (see [0139], which discloses performing encoder-side motion vector refinement to improve motion vector accuracy; this is a general disclosure of performing motion vector refinement in the context of the invention. [0192] discloses performing bi-directional prediction.).

Regarding claim 20, the combination of Wu 2, in view of Lee, in view of Gao discloses the limitations of claim 19, upon which claim 20 depends. This combination, specifically Gao, further discloses: the system of claim 19, further comprising: a reconstructed frame generating module configured to generate a reconstructed frame from the current frame based on the first reference frame and the second reference frame (see the "reconstruction path" as disclosed in [0030] with respect to figure 2; reconstruction unit 216, first loop filtering unit 218, etc.); and a buffer inputting module configured to input the reconstructed frame into at least one of the reference frame buffer and a display buffer (see playback and storage unit 318 in figure 2).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE M LOTFI, whose telephone number is (571)272-8762. The examiner can normally be reached 9:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KYLE M LOTFI/Examiner, Art Unit 2425

Prosecution Timeline

Mar 16, 2023: Application Filed
Nov 30, 2024: Non-Final Rejection — §103
Mar 04, 2025: Response Filed
Sep 16, 2025: Request for Continued Examination
Oct 05, 2025: Response after Non-Final Action
Dec 13, 2025: Non-Final Rejection — §103
Mar 17, 2026: Response Filed
Apr 07, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598317: HYBRID SPATIO-TEMPORAL NEURAL MODELS FOR VIDEO COMPRESSION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593070: SYSTEMS AND METHODS FOR SIGNALING SOURCE PICTURE TIMING INFORMATION FOR TEMPORAL SUBLAYERS IN VIDEO CODING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587646: NETWORK BASED IMAGE FILTERING FOR VIDEO CODING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581061: MATRIX BASED INTRA PREDICTION WITH MODE-GLOBAL SETTINGS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574527: METHODS FOR ENCODING AND DECODING FEATURE DATA, AND DECODER (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 64%
With Interview: 71% (+7.2%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 355 resolved cases by this examiner. Grant probability derived from career allow rate.
