Prosecution Insights
Last updated: April 19, 2026
Application No. 18/975,816

VIDEO SIGNAL PROCESSING METHOD AND DEVICE USING MOTION COMPENSATION

Non-Final OA: §112, §DP
Filed: Dec 10, 2024
Examiner: SENFI, BEHROOZ M
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Wilus Institute Of Standards And Technology Inc.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 83%, above average (858 granted / 1039 resolved; +24.6% vs TC avg)
Interview Lift: +10.1%, a moderate lift, across resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 20 applications currently pending
Career History: 1059 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 42.6% (+2.6% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
Tech Center averages are estimates; based on career data from 1039 resolved cases.
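The headline figures above follow directly from the examiner's raw career counts. As a quick sanity check (the variable names are illustrative; the +10.1-point interview lift is simply added to the base rate, which is how the page appears to derive its 93% figure):

```python
# Rough reproduction of the page's headline numbers from its own raw counts:
# 858 granted out of 1039 resolved cases, plus a +10.1 point interview lift.
granted, resolved = 858, 1039
interview_lift = 10.1  # percentage points, as reported on the page

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")                 # ~82.6%, shown as 83%
print(f"With interview:   {allow_rate + interview_lift:.0f}%")  # ~93%
```

Note that simply adding the lift assumes the two figures are on the same base; the page does not say how the interview cohort was selected.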

Office Action

§112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

2. Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

It is noted that the claim is directed to "A non-transitory computer readable medium storing a bitstream, the bitstream being decoded by a decoding method". The limitation as recited in the preamble does not make it clear whether the computer readable medium contains instructions to perform the decoding process or not. Additionally, it is not clear whether the decoder performing the parsing process is operating on the bitstream stored on the medium, since the claim appears to recite two separate functions.

In accordance with compact prosecution as prescribed in MPEP 2173.06, the claim language is interpreted as follows: patentable weight is given to data stored on a computer-readable medium when there exists a functional relationship between the data and its associated substrate. MPEP 2111.05 III.
For example, if a claim is drawn to a computer-readable medium containing programming, a functional relationship exists if the programming "performs some function with respect to the computer with which it is associated." Id. However, if the claim recites that the computer-readable medium merely serves as a support for information or data, no functional relationship exists and the information or data is not given patentable weight. Id.

However, claim 10 is directed to a non-transitory medium storing a bitstream, the bitstream being decoded. The body of the claim appears to indicate how the bitstream is being generated or decoded. These elements or steps are not performed by an intended computer, and the bitstream is not a form of programming that causes functions to be performed by an intended computer. This shows that the computer-readable medium merely serves as support for the bitstream and provides no functional relationship between the steps/elements that describe the generation or parsing syntax of the bitstream and an intended computer system. Therefore, those claim elements are not given patentable weight, and claim 10 is considered vague and indefinite.

Double Patenting

3. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir.
1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

4. A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

5. The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

6. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used.
A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-10 are rejected on the ground of non-statutory obviousness-type double patenting as being unpatentable over patented claims 1-10 of U.S. Patent No. 12200200 and patented claims 1-16 of U.S. Patent No. 11849106, either alone or in combination. Although the conflicting claims are not identical, they are not patentably distinct from each other because they claim the same scope of the invention, but using different variations.

Application 18/975,816, claim 1: A video signal decoding method comprising: parsing a first syntax element when a merge mode is applied to a current block and a first predefined condition is satisfied, wherein the first syntax element indicates whether a first mode or a second mode is applied to the current block, when the first predefined condition is not satisfied, the first syntax element is inferred based on a third syntax element indicating whether a subblock-based merge mode is applied to the current block; determining whether to parse a second syntax element based on a second predefined condition when the first mode and the second mode are not applied to the current block, wherein the second syntax element indicates a mode applied to the current block among a third mode and a fourth mode, wherein a syntax element related to the third mode and a syntax element related to the fourth mode are located later than the first syntax element in a decoding sequence in a merge data syntax; determining a mode applied to the current block based on the first syntax element or the second syntax element; deriving motion information of the current block based on the determined mode; and generating a prediction block of the current block
by using the motion information of the current block, wherein the first predefined condition includes at least one of a condition by which the third mode is usable and a condition by which the fourth mode is usable.

U.S. Patent No. 12,200,200, claim 1: A video signal decoding method comprising: parsing a first syntax element indicating whether a merge mode is applied to a current block; parsing a second syntax element when the merge mode is applied to the current block and a first predefined condition is satisfied, wherein the second syntax element indicates whether a first mode or a second mode is applied to the current block, when the first predefined condition is not satisfied, the second syntax element is inferred based on a fourth syntax element indicating whether a subblock-based merge mode is applied to the current block; determining whether to parse a third syntax element based on a second predefined condition when the first mode and the second mode are not applied to the current block, wherein the third syntax element indicates a mode applied to the current block among a third mode and a fourth mode, wherein a syntax element related to the third mode and a syntax element related to the fourth mode are located later than the second syntax element in a decoding sequence in a merge data syntax; determining a mode applied to the current block based on the second syntax element or the third syntax element; deriving motion information of the current block based on the determined mode; and generating a prediction block of the current block by using the motion information of the current block, wherein the first predefined condition includes at least one of a condition by which the third mode is usable and a condition by which the fourth mode is usable.

Application 18/975,816, claim 2: The video signal decoding method of claim 1, wherein the second predefined condition includes a condition by which the fourth mode is usable.

U.S. Patent No. 12,200,200, claim 2:
The video signal decoding method of claim 1, wherein the second predefined condition includes a condition by which the fourth mode is usable.

Application 18/975,816, claim 3: The video signal decoding method of claim 1, wherein the second predefined condition includes at least one of conditions relating to whether the third mode is usable in a current sequence, whether the fourth mode is usable in the current sequence, whether a maximum number of candidates for the fourth mode is greater than 1, whether a width of the current block is smaller than a first predefined size, and whether a height of the current block is smaller than a second predefined size.

U.S. Patent No. 12,200,200, claim 3: The video signal decoding method of claim 1, wherein the second predefined condition includes at least one of conditions relating to whether the third mode is usable in a current sequence, whether the fourth mode is usable in the current sequence, whether a maximum number of candidates for the fourth mode is greater than 1, whether a width of the current block is smaller than a first predefined size, and whether a height of the current block is smaller than a second predefined size.

Application 18/975,816, claim 4: The video signal decoding method of claim 1, further comprising, when the first syntax element has a value of 1, obtaining a fourth syntax element indicating whether the mode applied to the current block is the first mode or the second mode.

U.S. Patent No. 12,200,200, claim 4: The video signal decoding method of claim 1, further comprising, when the second syntax element has the value of 1, obtaining a fifth syntax element indicating whether a mode applied to the current block is the first mode or the second mode.

Application 18/975,816, claim 5:
A video signal decoding apparatus comprising a processor, wherein the processor is configured to: parse a first syntax element when a merge mode is applied to a current block and a first predefined condition is satisfied, wherein the first syntax element indicates whether a first mode or a second mode is applied to the current block, when the first predefined condition is not satisfied, the first syntax element is inferred based on a third syntax element indicating whether a subblock-based merge mode is applied to the current block; determine whether to parse a second syntax element based on a second predefined condition when the first mode and the second mode are not applied to the current block, wherein the second syntax element indicates a mode applied to the current block among a third mode and a fourth mode, wherein a syntax element related to the third mode and a syntax element related to the fourth mode are located later than the first syntax element in a decoding sequence in a merge data syntax; determine a mode applied to the current block based on the first syntax element or the second syntax element; derive motion information of the current block based on the determined mode; and generate a prediction block of the current block by using the motion information of the current block, wherein the first predefined condition includes at least one of a condition by which the third mode is usable and a condition by which the fourth mode is usable.

U.S. Patent No. 12,200,200, claim 5:
A video signal decoding apparatus comprising a processor, wherein the processor is configured to: parse a first syntax element indicating whether a merge mode is applied to a current block; parsing a second syntax element when the merge mode is applied to the current block and a first predefined condition is satisfied, wherein the second syntax element indicates whether a first mode or a second mode is applied to the current block, when the first predefined condition is not satisfied, the second syntax element is inferred based on a fourth syntax element indicating whether a subblock-based merge mode is applied to the current block; determine whether to parse a third syntax element based on a second predefined condition when the first mode and the second mode are not applied to the current block, wherein the third syntax element indicates a mode applied to the current block among a third mode and a fourth mode, wherein a syntax element related to the third mode and a syntax element related to the fourth mode are located later than the second syntax element in a decoding sequence in a merge data syntax; determine a mode applied to the current block based on the second syntax element or the third syntax element; derive motion information of the current block based on the determined mode; and generate a prediction block of the current block by using the motion information of the current block, wherein the first predefined condition includes at least one of a condition by which the third mode is usable and a condition by which the fourth mode is usable.

Application 18/975,816, claim 6: The video signal decoding apparatus of claim 5, wherein the second predefined condition includes a condition by which the fourth mode is usable.

U.S. Patent No. 12,200,200, claim 6: The video signal decoding apparatus of claim 5, wherein the second predefined condition includes a condition by which the fourth mode is usable.

Application 18/975,816, claim 7:
The video signal decoding apparatus of claim 5, wherein the second predefined condition includes at least one of conditions relating to whether the third mode is usable in a current sequence, whether the fourth mode is usable in the current sequence, whether a maximum number of candidates for the fourth mode is greater than 1, whether a width of the current block is smaller than a first predefined size, and whether a height of the current block is smaller than a second predefined size.

U.S. Patent No. 12,200,200, claim 7: The video signal decoding apparatus of claim 5, wherein the second predefined condition includes at least one of conditions relating to whether the third mode is usable in a current sequence, whether the fourth mode is usable in the current sequence, whether a maximum number of candidates for the fourth mode is greater than 1, whether a width of the current block is smaller than a first predefined size, and whether a height of the current block is smaller than a second predefined size.

Application 18/975,816, claim 8: The video signal decoding apparatus of claim 5, wherein when the first syntax element has a value of 1, the processor is configured to obtain a fourth syntax element indicating whether the mode applied to the current block is the first mode or the second mode.

U.S. Patent No. 12,200,200, claim 8: The video signal decoding apparatus of claim 5, wherein when the second syntax element has the value of 1, the processor is configured to obtain a fifth syntax element indicating whether a mode applied to the current block is the first mode or the second mode.

Application 18/975,816, claim 9:
A video signal encoding method comprising: encoding a first syntax element when the merge mode is applied to the current block and a first predefined condition is satisfied, wherein the first syntax element indicates whether a first mode or a second mode is applied to the current block, when the first predefined condition is not satisfied, the first syntax element is not included in a bitstream including the video signal; determining whether to encode a second syntax element based on a second predefined condition when the first mode and the second mode are not applied to the current block, wherein the second syntax element indicates a mode applied to the current block among a third mode or a fourth mode, wherein a syntax element related to the third mode and a syntax element related to the fourth mode are located later than the first syntax element in a decoding sequence in a merge data syntax; deriving motion information of the current block based on a mode applied to the current block; and generating a prediction block of the current block by using the motion information of the current block, wherein the first predefined condition includes at least one of a condition by which the third mode is usable and a condition by which the fourth mode is usable.

U.S. Patent No. 12,200,200, claim 9:
A video signal encoding method comprising: encoding a first syntax element indicating whether a merge mode is applied to a current block; encoding a second syntax element when the merge mode is applied to the current block and a first predefined condition is satisfied, wherein the second syntax element indicates whether a first mode or a second mode is applied to the current block, when the first predefined condition is not satisfied, the second syntax element is not included in a bitstream including the video signal; determining whether to encode a third syntax element based on a second predefined condition when the first mode and the second mode are not applied to the current block, wherein the third syntax element indicates a mode applied to the current block among a third mode or a fourth mode, wherein a syntax element related to the third mode and a syntax element related to the fourth mode are located later than the second syntax element in a decoding sequence in a merge data syntax; deriving motion information of the current block based on a mode applied to the current block; and generating a prediction block of the current block by using the motion information of the current block, wherein the first predefined condition includes at least one of a condition by which the third mode is usable and a condition by which the fourth mode is usable.

Application 18/975,816, claim 10:
A non-transitory computer-readable medium storing a bitstream, the bitstream being decoded by a decoding method, wherein the decoding method, comprising: parsing a first syntax element when a merge mode is applied to a current block and a first predefined condition is satisfied, wherein the first syntax element indicates whether a first mode or a second mode is applied to the current block, when the first predefined condition is not satisfied, the first syntax element is inferred based on a third syntax element indicating whether a subblock-based merge mode is applied to the current block; determining whether to parse a second syntax element based on a second predefined condition when the first mode and the second mode are not applied to the current block, wherein the second syntax element indicates a mode applied to the current block among a third mode and a fourth mode, wherein a syntax element related to the third mode and a syntax element related to the fourth mode are located later than the first syntax element in a decoding sequence in a merge data syntax; determining a mode applied to the current block based on the first syntax element or the second syntax element; deriving motion information of the current block based on the determined mode; and generating a prediction block of the current block by using the motion information of the current block, wherein the first predefined condition includes at least one of a condition by which the third mode is usable and a condition by which the fourth mode is usable.

U.S. Patent No. 12,200,200, claim 10:
A non-transitory computer-readable medium storing a bitstream, the bitstream being decoded by a decoding method, wherein the decoding method, comprising: parsing a first syntax element indicating whether a merge mode is applied to a current block; parsing a second syntax element when the merge mode is applied to the current block and a first predefined condition is satisfied, wherein the second syntax element indicates whether a first mode or a second mode is applied to the current block, when the first predefined condition is not satisfied, the second syntax element is inferred based on a fourth syntax element indicating whether a subblock-based merge mode is applied to the current block; determining whether to parse a third syntax element based on a second predefined condition when the first mode and the second mode are not applied to the current block, wherein the third syntax element indicates a mode applied to the current block among a third mode and a fourth mode, wherein a syntax element related to the third mode and a syntax element related to the fourth mode are located later than the second syntax element in a decoding sequence in a merge data syntax; determining a mode applied to the current block based on the second syntax element or the third syntax element; deriving motion information of the current block based on the determined mode; and generating a prediction block of the current block by using the motion information of the current block, wherein the first predefined condition includes at least one of a condition by which the third mode is usable and a condition by which the fourth mode is usable.

In view of the above, allowing claims 1-10 of the instant application would result in an unjustified or improper time-wise extension of the "right to exclude" granted by a patent. See In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993).

Contact Information

8.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Behrooz Senfi, whose telephone number is (571) 272-7339. The examiner can normally be reached Monday-Friday, 10:00-6:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BEHROOZ M SENFI/
Primary Examiner, Art Unit 2482
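The conditional parsing order recited in claim 1 of the instant application can be sketched in code. This is an illustrative reconstruction only: every name below (decode_merge_mode, read_bit, the mode strings) is hypothetical and appears in neither the claims nor any codec specification, and the defaults taken when a syntax element is not parsed are assumptions.

```python
def decode_merge_mode(read_bit, *, third_mode_usable, fourth_mode_usable,
                      subblock_merge_flag, second_condition):
    """Hypothetical sketch of the claim-1 parsing flow for one block."""
    # First predefined condition: at least one of the third/fourth modes usable.
    if third_mode_usable or fourth_mode_usable:
        first_syntax = read_bit()          # first syntax element, parsed
    else:
        # Not parsed: inferred from the third syntax element
        # (the subblock-based merge flag).
        first_syntax = subblock_merge_flag
    if first_syntax:
        # First or second mode applies; per claim 4, a later syntax element
        # distinguishes between them.
        return "first_or_second_mode"
    # Neither applies: decide whether to parse the second syntax element,
    # which selects between the third and fourth modes.
    if second_condition:
        second_syntax = read_bit()
        return "fourth_mode" if second_syntax else "third_mode"
    return "third_mode"  # assumed default when the element is not parsed
```

For example, with bits = iter([0, 1]), calling decode_merge_mode(lambda: next(bits), third_mode_usable=True, fourth_mode_usable=False, subblock_merge_flag=0, second_condition=True) parses 0 then 1 and returns "fourth_mode"; when neither usability condition holds, no bit is read and the result follows the subblock flag, which mirrors the parse-versus-infer distinction the §112 rejection turns on.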

Prosecution Timeline

Dec 10, 2024: Application Filed
Feb 18, 2026: Non-Final Rejection, §112 and §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581050: OPTICAL ASSEMBLIES FOR MACHINE VISION CALIBRATION (2y 5m to grant; granted Mar 17, 2026)
Patent 12574493: DISPLAY DEVICE (2y 5m to grant; granted Mar 10, 2026)
Patent 12568287: GENERATING THREE-DIMENSIONAL VIDEOS BASED ON TEXT USING MACHINE LEARNING MODELS (2y 5m to grant; granted Mar 03, 2026)
Patent 12563170: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND DISPLAY DEVICE (2y 5m to grant; granted Feb 24, 2026)
Patent 12556676: IMAGE SENSOR, CAMERA AND IMAGING SYSTEM WITH TWO OR MORE FOCUS PLANES (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 93% (+10.1%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 1039 resolved cases by this examiner; grant probability derived from career allow rate.
