Prosecution Insights
Last updated: April 19, 2026
Application No. 18/516,013

VIDEO ENCODING AND DECODING METHOD AND APPARATUS, STORAGE MEDIUM, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT

Status: Final Rejection (§103)
Filed: Nov 21, 2023
Examiner: NASRI, MARYAM A
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 2m
Grant Probability With Interview: 76%

Examiner Intelligence

Career Allow Rate: 73% (above average; 339 granted / 462 resolved; +15.4% vs Tech Center average)
Interview Lift: +2.6% (minimal; based on resolved cases with interview)
Typical Timeline: 2y 2m average prosecution
Currently Pending: 22 applications
Career History: 484 total applications across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 29.5% (-10.5% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 462 resolved cases.

Office Action

§103
DETAILED ACTION

This Office Action is a response to an amendment filed on 11/24/2025, in which claims 1-20 are pending and ready for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in China on 03/16/2022. It is noted, however, that applicant has not filed a certified copy of the China 202210260543.1 application as required by 37 CFR 1.55.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 2022/0360814 A1) in view of Takehara (US 2022/0109829 A1).
Regarding claim 1, Wu discloses: A video encoding method, comprising: generating a prediction vector candidate list according to displacement vectors of an encoded block whose reference frame is a current frame (see Wu, paragraph 113, motion vector associated with the at least one other block that is intra coded, such as a position of the at least one other block in a frame); selecting a prediction vector of a to-be-encoded current block from the prediction vector candidate list (see Wu, paragraph 115, a motion vector predictor is selected); and encoding a current block based on the prediction vector of the current block (see Wu, Fig. 14 and paragraph 112, image or video data received by the encoder to be encoded).

Wu does not explicitly disclose: in response to determining that a number of vectors within the prediction vector candidate list is less than a threshold, selecting additional vectors to be added to fill the prediction vector candidate list. However, Takehara, from the same or similar endeavor, discloses: in response to determining that a number of vectors within the prediction vector candidate list is less than a threshold, selecting additional vectors to be added to fill the prediction vector candidate list (see Takehara, paragraph 198, when the number of merge candidates in the list is smaller than the maximum number of merge candidates, additional merge candidates are added to the list until the maximum number is reached).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to "in response to determining that a number of vectors within the prediction vector candidate list is less than a threshold, selecting additional vectors to be added to fill the prediction vector candidate list" as taught by Takehara in the video coding method and apparatus taught by Wu, to achieve a highly efficient and low-load picture coding/decoding process (see Takehara, paragraph 7).
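The list-padding limitation Takehara is cited for (add candidates until the list reaches its maximum size) can be sketched as follows. This is an illustrative sketch only, not code from the application or either reference; the function name, the duplicate check, and the fallback-vector source are all hypothetical.

```python
def build_candidate_list(displacement_vectors, max_candidates, fallback_vectors):
    """Sketch: collect unique predictor candidates, then, if the list holds
    fewer than max_candidates entries, fill it with additional vectors
    (cf. Takehara, paragraph 198)."""
    candidates = []
    for vec in displacement_vectors:
        if vec not in candidates:  # skip duplicate displacement vectors
            candidates.append(vec)
        if len(candidates) == max_candidates:
            return candidates
    # Fewer candidates than the threshold: pad with additional vectors.
    for vec in fallback_vectors:
        if len(candidates) == max_candidates:
            break
        candidates.append(vec)
    return candidates
```

For example, with three input vectors (one a duplicate) and a maximum of four, the list is padded from the fallback vectors until it is full.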
Regarding claim 2, the combination of Wu and Takehara discloses: The video encoding method according to claim 1, wherein the selecting comprises: decoding a code stream to obtain prediction vector index information (see Wu, paragraphs 112 and 70, the index of the selected motion vector prediction candidates can then be signaled to the decoder); and selecting, according to the prediction vector index information, a prediction vector from a corresponding location in the prediction vector candidate list as the prediction vector of the to-be-encoded current block (see Wu, paragraph 70).

Regarding claim 3, the combination of Wu and Takehara discloses: The video encoding method according to claim 2, wherein the prediction vector index information is encoded by using context-based multi-symbol arithmetic coding (see Wu, paragraph 104), wherein the code stream comprises a maximum value of the prediction vector index information (see Wu, paragraph 100), wherein the maximum value is located in a sequence header, an image header, or a tile header (see Wu, paragraphs 63 and 106), and wherein the decoding the code stream comprises: if it is determined according to a length of the prediction vector candidate list that the prediction vector index information needs to be decoded, decoding the code stream to obtain the prediction vector index information (see Wu, paragraph 100).

Regarding claim 4, the combination of Wu and Takehara discloses: The video encoding method according to claim 1, wherein the selecting the prediction vector comprises: selecting, according to a specified selection policy, a prediction vector from the prediction vector candidate list as the prediction vector of the to-be-encoded current block (see Wu, paragraph 70).
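The conditional decode recited in claim 3 (read the index from the code stream only when the candidate-list length makes an index necessary) can be sketched as below. This is an illustrative sketch, not code from the application or either reference; the function name and the reader callback are hypothetical.

```python
def decode_predictor_index(read_index, candidate_list_len):
    """Sketch of claim 3's length check: when the candidate list holds a
    single entry, no index needs to be signaled, so nothing is read from
    the code stream; otherwise the index is decoded via a reader callback."""
    if candidate_list_len <= 1:
        return 0  # only one possible predictor; the index is implicit
    return read_index()
```

With a one-entry list the callback is never invoked; with a longer list the decoded value selects a position in the candidate list.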
Regarding claim 5, the combination of Wu and Takehara discloses: The video encoding method according to claim 1, wherein the generating the prediction vector candidate list comprises: obtaining, from the displacement vectors of the encoded block whose reference frame is the current frame, displacement vectors of an encoded block adjacent to the current block (see Wu, paragraph 70 and Fig. 10); sorting the displacement vectors of the encoded block adjacent to the current block in a specified sequence to obtain a first displacement vector list (see Wu, paragraph 114); and generating the prediction vector candidate list according to the first displacement vector list (see Wu, paragraph 100).

Regarding claim 6, the combination of Wu and Takehara discloses: The video encoding method according to claim 5, wherein the adjacent encoded block comprises at least one of the following: one or more encoded blocks in n1 rows above the current block and one or more encoded blocks in n2 columns on the left of the current block, and wherein n1 and n2 are positive integers (see Wu, Fig. 10).

Regarding claim 7, the combination of Wu and Takehara discloses: The video encoding method according to claim 1, wherein the generating the prediction vector candidate list comprises: obtaining, from the displacement vectors of the encoded block whose reference frame is the current frame, displacement vectors of a historical encoded block (see Wu, paragraph 100); adding the displacement vectors of the historical encoded block to a queue of a specified length in a first-in first-out manner according to a specified sequence, to obtain a second displacement vector list (see Wu, paragraph 100); and generating the prediction vector candidate list according to the second displacement vector list (see Wu, paragraph 100).
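Claim 7's fixed-length, first-in first-out history of displacement vectors maps naturally onto a bounded queue. A minimal sketch, with hypothetical names and a queue length of 3 chosen purely for illustration (Python's `deque` stands in for the claimed "queue of a specified length"):

```python
from collections import deque

# Queue of a specified length: appending to a full deque evicts the oldest
# entry, giving first-in first-out behavior for the historical vectors.
history_queue = deque(maxlen=3)
for vec in [(1, 0), (0, 1), (2, 2), (3, 1)]:
    history_queue.append(vec)

# (1, 0) was added first, so it is evicted when (3, 1) arrives; the queue
# now holds the three most recent displacement vectors in arrival order.
second_displacement_vector_list = list(history_queue)
```

The claimed deduplication of claim 8 (delete an identical vector already in the queue before appending) would be an extra membership check layered on top of this eviction behavior.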
Regarding claim 8, the combination of Wu and Takehara discloses: The video encoding method according to claim 7, further comprising: when the displacement vectors of the historical encoded block are added to the queue and if a same displacement vector already exists in the queue, deleting the same displacement vector existing in the queue (see Wu, paragraph 100), and wherein the second displacement vector list corresponds to at least one to-be-encoded area, and the at least one to-be-encoded area comprises one of the following: a superblock (SB) in which the current block is located, a row in the SB in which the current block is located, and a tile in which the current block is located (see Wu, paragraphs 101-102).

Regarding claim 9, the combination of Wu and Takehara discloses: The video encoding method according to claim 8, further comprising: if a displacement vector in the second displacement vector list corresponding to a target to-be-encoded area exceeds a specified value, stopping adding a displacement vector to the second displacement vector list corresponding to the target to-be-encoded area (see Wu, paragraphs 62 and 100); and if a quantity of times of adding the displacement vector to the second displacement vector list corresponding to the target to-be-encoded area exceeds a specified quantity of times, stopping adding a displacement vector to the second displacement vector list corresponding to the target to-be-encoded area (see Wu, paragraphs 62 and 100).

Regarding claim 10, the combination of Wu and Takehara discloses: The video encoding method according to claim 1, wherein the generating the prediction vector candidate list comprises: obtaining, from the displacement vectors of the encoded block whose reference frame is the current frame, displacement vectors of an encoded block adjacent to the current block (see Wu, paragraph 70 and Fig. 10); sorting the displacement vectors of the encoded block adjacent to the current block in a specified sequence to obtain a first displacement vector list (see Wu, paragraph 114); obtaining, from the displacement vectors of the encoded block whose reference frame is the current frame, displacement vectors of a historical encoded block (see Wu, paragraph 100); adding the displacement vectors of the historical encoded block to a queue of a specified length in a first-in first-out manner according to a specified sequence, to obtain a second displacement vector list (see Wu, paragraph 100); and generating the prediction vector candidate list according to the first displacement vector list and the second displacement vector list (see Wu, paragraph 100).

Regarding claims 11-19, claims 11-19 are drawn to a device having limitations similar to the method claimed in claims 1-10 treated in the above rejections. Therefore, device claims 11-19 correspond to method claims 1-10 and are rejected for the same reasons of obviousness as used above.

Regarding claim 20, claim 20 is drawn to a computer readable storage medium having limitations similar to the method claimed in claim 1 treated in the above rejections. Therefore, computer readable storage medium claim 20 corresponds to method claim 1 and is rejected for the same reasons of obviousness as used above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARYAM A NASRI, whose telephone number is (571) 270-7158. The examiner can normally be reached 10:00-8:00 M-T.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at 571-272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARYAM A NASRI/
Primary Examiner, Art Unit 2483

Prosecution Timeline

Nov 21, 2023
Application Filed
Sep 20, 2025
Non-Final Rejection — §103
Oct 22, 2025
Examiner Interview Summary
Oct 22, 2025
Applicant Interview (Telephonic)
Nov 24, 2025
Response Filed
Mar 05, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604010: METHOD, DEVICE, AND MEDIUM FOR VIDEO PROCESSING (2y 5m to grant; granted Apr 14, 2026)
Patent 12604013: THRESHOLD OF SIMILARITY FOR CANDIDATE LIST (2y 5m to grant; granted Apr 14, 2026)
Patent 12598305: METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING (2y 5m to grant; granted Apr 07, 2026)
Patent 12598296: VIDEO DECODING METHOD USING BI-PREDICTION AND DEVICE THEREFOR (2y 5m to grant; granted Apr 07, 2026)
Patent 12598304: IMAGE PROCESSING METHOD, AND DEVICE FOR SAME (2y 5m to grant; granted Apr 07, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 76% (+2.6%)
Median Time to Grant: 2y 2m
PTA Risk: Moderate
Based on 462 resolved cases by this examiner. Grant probability derived from career allow rate.
