Prosecution Insights
Last updated: April 19, 2026
Application No. 18/207,524

VIDEO SIGNAL PROCESSING METHOD AND DEVICE

Final Rejection §103
Filed: Jun 08, 2023
Examiner: UHL, LINDSAY JANE KILE
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: KT Corporation
OA Round: 4 (Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 4m
With Interview: 89%

Examiner Intelligence

Grants 80% — above average

Career Allow Rate: 80% (324 granted / 404 resolved; +22.2% vs TC avg)
Interview Lift: +8.7% (moderate lift; resolved cases with vs. without an interview)
Avg Prosecution: 2y 4m (typical timeline; 38 currently pending)
Total Applications: 442 (career history, across all art units)
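
The headline figures above reduce to simple arithmetic over the examiner's resolved cases. A minimal sketch of that arithmetic, assuming the "with interview" projection is just the career baseline plus the displayed +8.7-point lift (illustrative only, not the tool's actual pipeline):

```python
# Headline metrics from the panel above, reduced to their arithmetic.
granted, resolved = 324, 404

allow_rate = granted / resolved            # career allow rate
interview_lift = 0.087                     # +8.7 points, per the panel above
with_interview = allow_rate + interview_lift

print(f"career allow rate: {allow_rate:.1%}")     # 80.2%, displayed as 80%
print(f"with interview:    {with_interview:.1%}") # 88.9%, displayed as 89%
```

This reproduces the displayed 80% and 89% figures once rounded to whole percentage points.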

Statute-Specific Performance

§101: 3.7% (-36.3% vs TC avg)
§103: 65.4% (+25.4% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 404 resolved cases.
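
Each pair above implies the Tech Center baseline it was measured against (examiner rate minus the displayed delta). A quick sketch recovering those implied baselines from the displayed numbers:

```python
# Implied Tech Center baseline per statute: displayed rate minus displayed delta.
rates = {"§101": (3.7, -36.3), "§103": (65.4, +25.4),
         "§102": (8.7, -31.3), "§112": (10.3, -29.7)}
for statute, (rate, delta_vs_tc) in rates.items():
    tc_avg = rate - delta_vs_tc
    print(f"{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

Notably, every statute implies the same ~40.0% baseline, consistent with the deltas being computed against a single estimated Tech Center average.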

Office Action

§103
DETAILED ACTION

This Office Action is in response to the amendment filed on August 27, 2025. Claims 16, 23, and 28-31 are pending and are examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendments made to original claims 16, 23, and 28-30 and the cancellation of claim 27 have been fully considered.

Response to Argument

Applicant's arguments and amendments received August 27, 2025 have been fully considered. With regard to 35 U.S.C. § 103, Applicant argues that the cited prior art fails to disclose “wherein under the merge mode, one of a first method and a second method is selected based on information explicitly decoded from a bitstream to modify the initial motion vector, wherein in response to the first method being selected, the initial motion vector is modified by a refinement motion vector which is derived based on refinement vector information explicitly decoded from the bitstream, and wherein in response to the second method being selected, the initial motion vector is modified by a delta motion vector which is derived at a decoder side without using the refinement vector information.” These arguments pertain to newly amended claim language which is further addressed below. See the rejection below for how a newly added reference reads on the newly amended language as well as the examiner's interpretation of the cited art in view of the presented claim set.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims.
Therefore, the “wherein under the merge mode, one of a first method and a second method is selected based on information explicitly decoded from a bitstream to modify the initial motion vector, wherein in response to the first method being selected, the initial motion vector is modified by a refinement motion vector which is derived based on refinement vector information explicitly decoded from the bitstream, and wherein in response to the second method being selected, the initial motion vector is modified by a delta motion vector which is derived at a decoder side without using the refinement vector information” must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 16, 23, and 28-31 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2017/0085906 (“Chen”) in view of U.S. Patent Publication No. 2019/0110058 (“Chien 2”), which corresponds to a provisional application filed October 2017.

With respect to claim 16, Chen discloses the invention substantially as claimed, including: A method of decoding an image (see Abstract, Fig. 3, describing a decoder for decoding video data), the method comprising: determining whether a merge mode is applied to a current block (see ¶¶4, 19, Table 1, describing determining whether a merge mode is used for, i.e., applied to, a current block, e.g., by the use of mergeFlag or mergeRefinedFlag); in response to the merge mode being applied to the current block, generating a merge candidate list of a current block, the merge candidate list including a plurality of merge candidates (see Fig. 3, item 330, ¶¶4, 8, 22, describing the generation of a candidate set, i.e., list, including a plurality of merge candidates where merge mode is applied to the current block); determining an initial motion vector of a current block based on one of the plurality of merge candidates in the merge candidate list (see Fig.
3, item 340, ¶¶4, 8, 15, 19, 22, Table 1, describing determining motion information, e.g., a motion vector predictor, i.e., initial motion vector, of the current block based on an indexed merge candidate from the merge candidate set); obtaining a motion vector of the current block by modifying the initial motion vector of the current block (see citations with respect to elements above and Abstract, Fig. 3, item 350, ¶¶13-15, 18-20, 24, describing refining, i.e., modifying the motion vector predictor to obtain a motion vector of the current block); and performing inter prediction on the current block based on the motion vector (see ¶¶4, 9, 13-14, describing that these motion vectors are used for inter prediction); wherein under the merge mode, one of a first method and a second method is selected based on information explicitly decoded from a bitstream to modify the initial motion vector (see ¶¶8, 13-15, 18-20, 24, Table 1, describing that the system may use a merge refinement mode which uses a coded motion vector difference in the bitstream, i.e., a first method, or a merge mode, i.e., a second method, to modify the initial motion vector based on the merge flag and merge refined flag, i.e., based on information explicitly decoded from the bitstream); wherein in response to the first method being selected, the initial motion vector is modified by a refinement motion vector which is derived based on refinement vector information explicitly decoded from the bitstream (see citations with respect to element above, describing that for the first method, the initial motion vectors for bi-prediction may be modified by motion vector differences in each direction which are signaled in the bitstream, i.e., refinement vectors derived based on refinement vector information explicitly decoded from the bitstream), and wherein in response to the second method being selected, … without using the refinement vector information (see citations with respect to element above, describing that
where merge mode is selected, such MVD is not explicitly decoded from the bitstream).

Chen does not explicitly disclose that, in the merge mode, the initial motion vector is modified by a delta motion vector which is derived at a decoder side. However, in the same field of endeavor, Chien 2 discloses that it was known to perform decoder side modification of merge candidates, i.e.: the initial motion vector is modified by a delta motion vector which is derived at a decoder side (see Abstract, Fig. 7, ¶¶115-117, 125-127, describing that it was known, in merge mode, to further modify the initial motion vector/merge mode candidate, e.g., by adding and subtracting offset dMV, a delta motion vector, during a decoder side motion vector derivation process, i.e., derived at the decoder side).

Chien 2 discloses that modifying the initial merge mode candidates/motion vectors with DMVD may reduce the bit cost of motion information and increase coding efficiency (see ¶107). At the time of filing, one of ordinary skill would have been familiar with merge mode and with DMVD and would also have understood the importance of reducing bit costs and increasing coding efficiency. Accordingly, such a person would have been motivated to include DMVD in the merge mode of Chen in order to obtain this advantage.

Like Chien 2, Chen also describes the importance of bit reduction for coding motion information. Chen describes its merge refinement mode as a more costly, but higher-quality alternative to merge mode and a less costly alternative to AMVP. Chen shows in its Table 1 that each of merge mode, merge refined mode, and AMVP may be used for any given block. This clearly allows the coding system to selectively prioritize quality vs. bit cost as bandwidth and content allows.
One of ordinary skill in the art at the time of filing would have understood that applying the DMVD refinement in Chien 2 to the lower-quality but lower bit cost merge mode of Chen would allow for an increase in the coding efficiency/quality of the merge mode when reducing bit cost further from the merge refinement mode was desired. Accordingly, such a person would have been motivated to have combined the DMVD refinement of Chien 2 with the merge mode of Chen. Moreover, doing so would have represented nothing more than the combination of prior art elements according to known methods to yield predictable results and/or the simple substitution of one known element for another to obtain predictable results. Therefore, it would have been obvious to one having ordinary skill in the art at the time of filing to include a mechanism for applying DMVD refinement as described in Chien 2 to the selected merge mode candidate at the decoder in the coding system of Chen as taught by Chien 2.

With respect to claim 23, Chen discloses the invention substantially as claimed. As detailed above, Chen in view of Chien 2 discloses each and every element of independent claim 16. Chen/Chien 2 additionally discloses: wherein the refinement vector information on the refinement motion vector includes magnitude information indicating one of multiple magnitude candidates and direction information indicating one of multiple direction candidates (see citations and arguments with respect to claim 16 above and Chen ¶18, describing that the MVD may have magnitude and direction, e.g., vertical and horizontal, i.e., indicating one of multiple magnitude candidates and one of multiple direction candidates). The reasons for combining the cited prior art with respect to claim 16 also apply to claim 23.

With respect to claim 28, Chen discloses the invention substantially as claimed. As detailed above, Chen in view of Chien 2 discloses each and every element of independent claim 16.
Chen/Chien 2 additionally discloses: A method of encoding an image (see Chen Abstract, Fig. 2, describing an encoder for encoding video/image data), the method comprising: generating a merge candidate list of a current block, the merge candidate list including a plurality of merge candidates (see citations and arguments with respect to corresponding element of claim 16 above and Chen Fig. 2, item 230, ¶¶7, 21, describing the generation of a candidate set, i.e., list, including a plurality of merge candidates where merge mode is applied to the current block); determining an initial motion vector of a current block based on one among the plurality of merge candidates (see citations and arguments with respect to corresponding element of claim 16 above and Chen Fig. 2, item 240, ¶¶7, 21, describing determining motion information, e.g., a motion vector predictor, i.e., initial motion vector, of the current block based on an indexed merge candidate from the merge candidate set); obtaining a motion vector of the current block by modifying the initial motion vector of the current block (see citations with respect to elements above and Chen Fig. 
2, items 250-270, describing refining, i.e., modifying, the motion vector predictor to obtain a motion vector of the current block); and performing inter prediction on the current block based on the motion vector (see citations and arguments with respect to corresponding element of claim 16 above, describing that both encoder and decoder perform such an inter prediction); wherein a merge flag indicating whether a merge mode is applied to the current block or not is encoded into a bitstream (see citations and arguments with respect to corresponding element of claim 16 above); wherein in response to the merge flag being encoded to indicate that the merge mode being applied, information to select one of a first method and a second method for modifying the initial motion vector of the current block is further encoded into the bitstream (see citations and arguments with respect to corresponding element of claim 16 above), wherein in response to the first method being selected, the initial motion vector is modified by a refinement motion vector which is derived based on refinement vector information explicitly decoded from the bitstream (see citations and arguments with respect to corresponding element of claim 16 above), and wherein in response to the second method being selected, the initial motion vector is modified by a delta motion vector which is derived at a decoder side without using the refinement vector information (see citations and arguments with respect to corresponding element of claim 16 above). The reasons for combining the cited prior art with respect to claim 16 also apply to claim 28.

With respect to claim 29, Chen discloses the invention substantially as claimed. As detailed above, Chen in view of Chien 2 discloses each and every element of independent claim 16.
Chen/Chien 2 additionally discloses: A transmission method of a video signal (see Chen ¶¶3, 13-15, describing transmitting a video signal), comprising: generating a merge candidate list of a current block, the merge candidate list including a plurality of merge candidates (see citations and arguments with respect to corresponding element of claims 16 and 28 above); determining an initial motion vector of a current block based on one among the plurality of merge candidates (see citations and arguments with respect to corresponding element of claims 16 and 28 above); obtaining a motion vector of the current block by modifying the initial motion vector of the current block (see citations and arguments with respect to corresponding element of claim 16 and 28 above); and generating a bitstream by encoding the current block based on the motion vector (see citations with respect to preamble above, describing generating a bitstream by encoding the current block based on the motion vector); and transmitting the video signal including the bitstream (see citations with respect to preamble above, describing transmitting the video signal including the bitstream); wherein a merge flag indicating whether a merge mode is applied to the current block or not is encoded into the bitstream (see citations and arguments with respect to corresponding element of claims 16 and 28 above), and wherein in response to the merge flag being encoded to indicate that the merge mode being applied, information to select one of a first method and a second method for modifying the initial motion vector of the current block is further encoded into the bitstream (see citations and arguments with respect to corresponding element of claim 16 above), wherein in response to the first method being selected, the initial motion vector is modified by a refinement motion vector which is derived based on refinement vector information explicitly decoded from the bitstream (see citations and arguments with respect 
to corresponding element of claim 16 above), and wherein in response to the second method being selected, the initial motion vector is modified by a delta motion vector which is derived at a decoder side without using the refinement vector information (see citations and arguments with respect to corresponding element of claim 16 above). The reasons for combining the cited prior art with respect to claim 16 also apply to claim 29.

With respect to claim 30, Chen discloses the invention substantially as claimed. As detailed above, Chen in view of Chien 2 discloses each and every element of independent claim 16. Chen/Chien 2 additionally discloses: wherein a number of merge candidates available for the current block, when it is determined to modify the initial motion vector with the motion refinement vector, is less than when it is determined not to modify the initial motion vector (see Chen ¶18, describing that in merge refinement mode, i.e., when it is determined to modify the initial motion vector with the motion refinement vector, the merge candidates available in the set may be less than those available for merge mode, i.e., when it is determined not to modify the initial motion vector – for example, in merge refinement mode, merge candidates that would result in an MVD of 0 are not available). The reasons for combining the cited prior art with respect to claim 16 also apply to claim 30.

With respect to claim 31, Chen discloses the invention substantially as claimed. As detailed above, Chen in view of Chien 2 discloses each and every element of independent claim 16.
Chen/Chien 2 additionally discloses: wherein when it is determined to modify the initial motion vector, the number of merge candidates available for the current block is 2 (see citations and arguments with respect to claim 30 above, describing that for merge refinement mode, i.e., when it is determined to modify the initial motion vector, bi-prediction may be used, i.e., the number of merge candidates available for the current block may be 2; in addition, it is clear that the candidate information for the current block may be indicated in its x and y components, i.e., the number of merge candidates available is 2). The reasons for combining the cited prior art with respect to claim 16 also apply to claim 31.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINDSAY JANE KILE UHL whose telephone number is (571)270-0337. The examiner can normally be reached 8:30 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached at (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

LINDSAY J UHL
Primary Examiner
Art Unit 2481

/LINDSAY J UHL/Primary Examiner, RD00
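
The claim limitation at the center of the §103 dispute describes a decoder that selects one of two refinement paths based on a flag decoded from the bitstream: an explicitly signaled refinement vector, or a decoder-side derived delta. The control flow can be sketched as follows (hypothetical names and signaling, not the applicant's or the cited references' actual syntax):

```python
from dataclasses import dataclass

@dataclass
class MotionVector:
    x: int
    y: int

    def __add__(self, other: "MotionVector") -> "MotionVector":
        return MotionVector(self.x + other.x, self.y + other.y)

def refine_initial_mv(initial_mv, use_explicit_refinement,
                      bitstream_reader=None, dmvd_search=None):
    """Select one of two refinement paths for a merge-mode initial MV.

    use_explicit_refinement: flag explicitly decoded from the bitstream.
    bitstream_reader: yields the signaled refinement MV (first method).
    dmvd_search: decoder-side derivation of a delta MV (second method),
                 using no refinement-vector information from the bitstream.
    """
    if use_explicit_refinement:
        # First method: refinement MV derived from explicitly decoded
        # refinement vector information (e.g., magnitude and direction).
        refinement_mv = bitstream_reader()
        return initial_mv + refinement_mv
    # Second method: delta MV derived at the decoder side (DMVD-style),
    # without reading refinement vector information.
    delta_mv = dmvd_search(initial_mv)
    return initial_mv + delta_mv

# Usage with stand-in signaling: the explicit path adds the signaled offset.
mv = refine_initial_mv(MotionVector(4, -2), True,
                       bitstream_reader=lambda: MotionVector(1, 0))
print(mv)  # MotionVector(x=5, y=-2)
```

The examiner's combination maps the first branch to Chen's merge refinement mode (signaled MVD) and the second branch to Chien 2's decoder-side dMV derivation.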

Prosecution Timeline

Jun 08, 2023: Application Filed
May 10, 2024: Non-Final Rejection — §103
Aug 15, 2024: Response Filed
Oct 16, 2024: Final Rejection — §103
Dec 23, 2024: Response after Non-Final Action
Jan 18, 2025: Request for Continued Examination
Jan 19, 2025: Response after Non-Final Action
May 22, 2025: Non-Final Rejection — §103
Aug 27, 2025: Response Filed
Nov 14, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604000: SYSTEMS AND METHODS FOR PARTITION-BASED PREDICTION MODE REORDERING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12604030: METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598329: SYNTAX DESIGN METHOD AND APPARATUS FOR PERFORMING CODING BY USING SYNTAX (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593032: METHOD AND DEVICE FOR PROCESSING VIDEO SIGNAL BY USING INTER PREDICTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587636: GEOMETRIC PARTITION MODE WITH MOTION VECTOR REFINEMENT (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 80%
With Interview: 89% (+8.7%)
Median Time to Grant: 2y 4m
PTA Risk: High

Based on 404 resolved cases by this examiner. Grant probability derived from career allow rate.
