Prosecution Insights
Last updated: April 19, 2026
Application No. 18/570,287

VIDEO SIGNAL ENCODING/DECODING METHOD AND RECORDING MEDIUM HAVING BITSTREAM STORED THEREIN

Non-Final OA (§102, §103)
Filed
Dec 14, 2023
Examiner
HESS, MICHAEL J
Art Unit
2481
Tech Center
2400 — Computer Networks
Assignee
KT Corporation
OA Round
1 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 52%

Examiner Intelligence

Career Allow Rate: 44% (183 granted / 418 resolved; -14.2% vs TC avg)
Interview Lift: +7.7% for resolved cases with interview (moderate lift)
Avg Prosecution: 3y 1m typical timeline; 66 applications currently pending
Total Applications: 484 across all art units

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 418 resolved cases.

Office Action

DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 15 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by a prior art DVD or similar. This rejection is dictated by Technology Center policy. Product-by-process claims must be distinguishable by their physical characteristics without regard to the process by which they are made. Claim 15 is alternatively rejected under 35 U.S.C. 103, infra.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1–3 and 9–15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 2022/0295088 A1) and Chiang et al., “CE10.1.1: Multi-hypothesis prediction for improving AMVP mode, skip or merge mode, and intra mode,” JVET-L0100-v3, 12th Meeting: Macao, CN, Oct. 2018 (herein “Chiang”).

Regarding claim 1, the combination of Zhang and Chiang teaches or suggests a method of decoding a video, the method comprising: obtaining a first prediction block for a current block based on a first inter-prediction mode (Examiner interprets the subject matter of this claim consistent with the claim set as a whole, including claims 2–5 and 9–12; Zhang, ¶ 0220: teaches bi-prediction template matching between the current block and a frame from reference list 0 and between the current frame and a frame from reference list 1; Examiner notes that bi-prediction template matching is not required of the claim and that other embodiments are described in Zhang); obtaining a second prediction block for the current block based on a second inter-prediction mode (Chiang, Abstract and Section 2.2: teach the final prediction is a weighted average of two merge candidates wherein the first inter-prediction merge candidate in the list is weighted heavier than the second inter-prediction merge candidate); and obtaining a final prediction block for the current block based on the
first prediction block and the second prediction block (Chiang, Abstract and Section 2.2: teach the final prediction is a weighted average of two merge candidates wherein the first inter-prediction merge candidate in the list is weighted heavier than the second inter-prediction merge candidate; Zhang, ¶ 0156: teaches that a template matching candidate can be inserted at the beginning of the list and can also include a second candidate; these teachings, when combined, teach multi-hypothesis prediction wherein two inter prediction candidates are weighted to produce a final prediction and wherein one of the inter-prediction modes is bi-prediction template matching).

One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Zhang with those of Chiang, because both references are drawn to the same field of endeavor such that one wishing to practice AMVP or merge inter-prediction would be led to their relevant teachings and because, as evidenced by Chiang, combining Zhang’s template merge candidate with a second merge candidate is nothing more than a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Zhang and Chiang used in this Office Action unless otherwise noted.

Regarding claim 2, the combination of Zhang and Chiang teaches or suggests the video decoding method of claim 1, wherein at least one of the first inter-prediction mode or the second inter-prediction mode is a decoder-side motion estimation mode in which a decoder performs motion estimation in the same manner as an encoder using a previously reconstructed reference picture (Zhang, ¶¶ 0210 and 0300: teaches TM-DMVD is template-matched decoder-side motion vector derivation, which means it is not signaled but derived at the decoder side and can be used for both merge and AMVP modes of inter-prediction).
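The multi-hypothesis combination the examiner draws from Chiang (a final prediction block formed as a weighted average of two prediction blocks, with the first candidate weighted more heavily) can be sketched as follows. The 5/8 and 3/8 weights and the 8-bit [0, 255] sample range are illustrative assumptions, not values taken from the cited references:

```python
def combine_predictions(pred1, pred2, w1=0.625, w2=0.375):
    """Weighted average of two inter-prediction blocks (lists of sample rows).

    Per the Office Action's reading of Chiang, the first candidate (pred1)
    is weighted more heavily than the second; the exact 5/8 : 3/8 split
    here is an assumed example, as is the 8-bit sample range.
    """
    assert len(pred1) == len(pred2), "blocks must have the same height"
    final = []
    for row1, row2 in zip(pred1, pred2):
        assert len(row1) == len(row2), "blocks must have the same width"
        # Weighted sum per sample, rounded and clipped to the sample range.
        final.append([min(255, max(0, round(w1 * a + w2 * b)))
                      for a, b in zip(row1, row2)])
    return final
```

With equal weights (w1 = w2 = 0.5) this reduces to ordinary bi-prediction averaging, which is one way to see the weighted version as a small, predictable variation on a known technique.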
Regarding claim 3, the combination of Zhang and Chiang teaches or suggests the video decoding method of claim 2, wherein the motion estimation includes a process of searching for a combination with an optimal cost among combinations of a current template composed of a previously reconstructed area around the current block and a reference template of the same size as the current template within the reference picture (Examiner notes this claim is merely describing the prior art’s template matching approach; Zhang, ¶ 0008: explains that template matching calculates a cost (difference) between the current block’s template and a reference template; Zhang, Fig. 23B: illustrates that the template and reference template(s) are the same size; Zhang, ¶ 0156: explains the closest match (lowest cost) between the current template and reference template is chosen wherein the templates are the same size; see also Zhang, ¶ 0324: teaching that while the size may be calculated based on the size of the current block, there is no mention that the reference template would be calculated any differently, i.e. Zhang draws no distinction between the current template and reference template; Examiner notes it makes sense that the templates are envisaged to be the same size since the templates are matched).

Regarding claim 9, the combination of Zhang and Chiang teaches or suggests the video decoding method of claim 2, wherein the motion estimation includes a process of searching for a combination with an optimal cost among combinations of an L0 reference block included in an L0 reference picture and an L1 reference block included in an L1 reference picture (Zhang, ¶ 0220: teaches bi-prediction template matching between the current block and a frame from reference list 0 and between the current frame and a frame from reference list 1).
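The template matching the rejection attributes to Zhang (comparing a current template of previously reconstructed samples against same-sized reference templates and keeping the lowest-cost position) can be sketched as below. SAD as the cost measure and the thresholds in the adaptive-range helper are illustrative assumptions; only the 8-sample default range echoes the search range Zhang is cited for:

```python
def sad(tmpl_a, tmpl_b):
    """Sum of absolute differences between two same-sized templates."""
    return sum(abs(a - b)
               for row_a, row_b in zip(tmpl_a, tmpl_b)
               for a, b in zip(row_a, row_b))

def template_match(cur_template, ref_picture, start_x, start_y, search_range=8):
    """Search around (start_x, start_y) in ref_picture for the offset whose
    reference template (same size as cur_template) has the lowest cost.

    search_range=8 mirrors the 8-sample range Zhang is cited for;
    SAD is an assumed cost measure, used here only for illustration.
    """
    h, w = len(cur_template), len(cur_template[0])
    best_offset, best_cost = None, float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = start_y + dy, start_x + dx
            # Skip candidate templates that fall outside the picture.
            if y < 0 or x < 0 or y + h > len(ref_picture) or x + w > len(ref_picture[0]):
                continue
            cand = [row[x:x + w] for row in ref_picture[y:y + h]]
            cost = sad(cur_template, cand)
            if cost < best_cost:
                best_offset, best_cost = (dx, dy), cost
    return best_offset, best_cost

def adaptive_search_range(initial_mv, small=4, large=8, threshold=8):
    """Xiu-style adaptivity, sketched: smaller motion implies a smaller
    search range. The specific thresholds are illustrative assumptions."""
    dx, dy = initial_mv
    return small if abs(dx) + abs(dy) <= threshold else large
```

Because both templates are cropped to the same height and width before the cost is computed, the sketch also illustrates the examiner's point that matching only makes sense between same-sized templates.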
Regarding claim 10, the combination of Zhang and Chiang teaches or suggests the video decoding method of claim 9, wherein an output order of the current picture is in between an output order of the L0 reference picture and an output order of the L1 reference picture (Zhang, ¶ 0220: teaches bi-prediction template matching between the current block and a frame from reference list 0 and between the current frame and a frame from reference list 1; Examiner notes Zhang explains the pictures are before and after the current picture in at least one described embodiment of bi-prediction template matching).

Regarding claim 11, the combination of Zhang and Chiang teaches or suggests the video decoding method of claim 1, wherein the first inter-prediction mode is used for L0 direction prediction of the current block, and the second inter-prediction mode is used for L1 direction prediction of the current block (Zhang, ¶ 0219: teaches uni-prediction template matching as one embodiment).

Regarding claim 12, the combination of Zhang and Chiang teaches or suggests the video decoding method of claim 1, wherein the final prediction block is derived on the basis of a weighted sum operation of the first prediction block and the second prediction block, and a first weight assigned to the first prediction block and a second weight assigned to the second prediction block during the weighted sum operation are adaptively determined depending on a type of the first inter-prediction mode or the second inter-prediction mode (Chiang, Abstract and Section 2.2: teach the final prediction is a weighted average of two merge candidates wherein the first inter-prediction merge candidate in the list is weighted heavier than the second inter-prediction merge candidate; Zhang, ¶ 0156: teaches that a template matching candidate can be inserted at the beginning of the list and can also include a second candidate; these teachings, when combined, teach multi-hypothesis prediction wherein two inter
prediction candidates are weighted to produce a final prediction and wherein one of the inter-prediction modes is given preference based on its mode type).

Regarding claim 13, the combination of Zhang and Chiang teaches or suggests the video decoding method of claim 12, wherein the first weight has a value greater than the second weight in a case where the first inter-prediction mode is a decoder-side motion estimation mode and the second inter-prediction mode is a motion information signaling mode (Chiang, Abstract and Section 2.2: teach the final prediction is a weighted average of two merge candidates wherein the first inter-prediction merge candidate in the list is weighted heavier than the second inter-prediction merge candidate; Zhang, ¶ 0156: teaches that a template matching candidate can be inserted at the beginning of the list and can also include a second candidate; these teachings, when combined, teach multi-hypothesis prediction wherein two inter prediction candidates are weighted to produce a final prediction and wherein one of the inter-prediction modes is given preference based on its mode type).

Claim 14 lists the same elements as claim 1, but is drawn to the corresponding encoding method rather than the decoding method. Therefore, the rationale for the rejection of claim 1 applies to the instant claim.

Claim 15 lists the same elements as claim 1, but is drawn to a product-by-process claim. Therefore, the rationale for the rejection of claim 1 applies to the instant claim. Furthermore, because product claims are evaluated for their physical structure rather than the method by which they are made, claim 15 is also alternatively rejected under 35 U.S.C. 102(a)(1), supra.

Claims 4–8 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang, Chiang, and Xiu (US 2021/0051340 A1).
Regarding claim 4, the combination of Zhang, Chiang, and Xiu teaches or suggests the video decoding method of claim 3, wherein the motion estimation is performed on each of reference pictures having reference picture indices less than a threshold value in a reference picture list (Zhang, ¶ 0188: teaches the reference pictures are both short-term reference pictures, i.e. having reference picture indices less than a threshold, i.e. the threshold that would make them long-term reference pictures; Xiu, ¶ 0090: teaches reference picture distances below a pre-defined threshold indicate steady motion, which can indicate a smaller search range).

One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Zhang and Chiang with those of Xiu, because all three references are drawn to the same field of endeavor such that one wishing to practice Zhang’s template matching would be led to Xiu’s relevant teachings and because, as evidenced by Xiu, combining Zhang’s template matching with Xiu’s adaptive search ranges is a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Zhang, Chiang, and Xiu used in this Office Action unless otherwise noted.

Regarding claim 5, the combination of Zhang, Chiang, and Xiu teaches or suggests the video decoding method of claim 3, wherein the motion estimation is performed on each of reference pictures whose output order differences from a current picture are equal to or less than a threshold value in the reference picture list (Zhang, ¶ 0188: teaches the reference pictures are both short-term reference pictures, i.e. having reference picture indices less than a threshold, i.e.
the threshold that would make them long-term reference pictures; Xiu, ¶ 0090: teaches that reference picture distances, determined by POC (i.e. output order), below a pre-defined threshold indicate steady motion, which can indicate a smaller search range).

Regarding claim 6, the combination of Zhang, Chiang, and Xiu teaches or suggests the video decoding method of claim 3, wherein the reference template is searched for within a search range set in the reference picture, and the search range is set on the basis of initial motion information of the current block (Xiu, ¶ 0090: teaches adaptive search ranges wherein the smaller the motion, the smaller the search range, and the larger the motion, the larger the search range; Zhang, ¶¶ 0253 and 0312: teaches a search range set to 8 samples away from the initial MV (starting point)).

Regarding claim 7, the combination of Zhang, Chiang, and Xiu teaches or suggests the video decoding method of claim 6, wherein the initial motion information is motion information on an area larger than the current block (Xiu, ¶ 0090: teaches the motion correlation used to determine if the motion is big or small can include a spatial neighborhood of the current block, which is an area larger than the current block).

Regarding claim 8, the combination of Zhang, Chiang, and Xiu teaches or suggests the video decoding method of claim 3, wherein the reference template is searched for within the search range set in the reference picture, the search range is determined on the basis of motion characteristics of an area including the current block, and the motion characteristics of the area are set to one of an area with strong motion or an area with weak motion (Xiu, ¶ 0090: teaches adaptive search ranges wherein the smaller the motion, the smaller the search range, and the larger the motion, the larger the search range).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Liu (US 2020/0382807 A1) teaches template matching and FRUC wherein the weighting factor given to a MV difference is determined based on reference picture index or POC distance (¶¶ 0113–0114), which is relevant to at least original claims 4 and 5.

Zhang (US 2021/0006788 A1) teaches template matching and FRUC wherein motion candidates are determined based on reference picture index or POC distance within a certain range (¶ 0694).

Du (US 2021/0250597 A1) teaches restricting temporal source frames to those that satisfy a temporal distance (e.g. POC difference) that is less than a threshold (¶ 0214).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael J Hess whose telephone number is (571) 270-7933. The examiner can normally be reached Mon - Fri 9:00am-5:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached on (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8933.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL J HESS/
Examiner, Art Unit 2481

Prosecution Timeline

Dec 14, 2023
Application Filed
May 16, 2025
Non-Final Rejection — §102, §103
Aug 20, 2025
Response Filed
Aug 20, 2025
Response after Non-Final Action
Sep 25, 2025
Examiner Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12563195: Method And An Apparatus for Encoding and Decoding of Digital Image/Video Material
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12563208: PICTURE CODING METHOD, PICTURE CODING APPARATUS, PICTURE DECODING METHOD, AND PICTURE DECODING APPARATUS
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12556737: MOTION COMPENSATION FOR VIDEO ENCODING AND DECODING
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12556747: ARRAY BASED RESIDUAL CODING ON NON-DYADIC BLOCKS
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12549728: METHOD AND APPARATUS FOR CODING VIDEO DATA IN TRANSFORM-SKIP MODE
Granted Feb 10, 2026 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 44%
With Interview: 52% (+7.7%)
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 418 resolved cases by this examiner. Grant probability derived from career allow rate.
