Prosecution Insights
Last updated: April 19, 2026
Application No. 19/012,258

TEMPORAL SIGNALLING FOR VIDEO CODING TECHNOLOGY

Non-Final OA: §101, §DP
Filed: Jan 07, 2025
Examiner: RAHAMAN, SHAHAN UR
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: V-NOVA INTERNATIONAL LTD
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 76% (above average): 479 granted / 633 resolved, +17.7% vs TC avg
Interview Lift: +12.6% for resolved cases with interview (moderate lift)
Typical Timeline: 2y 11m avg prosecution; 51 applications currently pending
Career History: 684 total applications across all art units

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)
Tech Center averages are estimates; based on career data from 633 resolved cases.

Office Action

§101, §DP
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 USC § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 21 is rejected under 35 USC § 101 because the claimed invention is directed to non-statutory subject matter. The claim recites a bitstream, which is data per se and is not directed to any of the statutory categories.

Allowable Subject Matter

Claims 2-21 are allowed over the prior art. The following is an examiner's statement of reasons for allowance: the primary reason for allowance of independent claim 2 is that it contains the limitation "assign a value to a block-based temporal parameter, wherein the value assigned to the block-based temporal parameter indicates the temporal mode for the block of the input video and further indicates the temporal mode for the tile comprising the block of the input video." This limitation, in the context of the other limitations and considering the claim as a whole, is neither anticipated by nor obvious over the prior art of record or prior art found during the Examiner's search. The other independent claims are allowable for similar reasons. The dependent claims are allowable at least because of their dependence on an allowable independent claim.
The Examiner found the following prior art to be related and from the general field of the claimed invention; however, separately or in combination it fails to teach the specific claimed limitation discussed above:

US 20130322524 A1 (Jang)
US 20140219349 A1 (Chien)
US 20190387224 A1 (Phillips) (para 71: if a tile has all "I" blocks, the tile is an "I"-tile, or intra coded; the temporal mode of the blocks therefore indicates the temporal mode of the tile)
US 20150341674 A1 (Seregin) (para 117: a flag for the first block of a tile indicates that a predictor palette is reset at the beginning of the tile)

For instance, regarding claim 2: Jang teaches an encoder configured to encode an input video [(Fig. 2, encoder 120 encodes input video from source 120)] into a plurality of encoded streams, such that the encoded streams may be combined to reconstruct the input video [(generated encoded bitstreams for a base layer and an enhancement layer {Fig. 3} that are combined by the decoder to reconstruct {para 119})], the encoder configured to: receive an input video comprising respective frames, each frame of the respective frames being divided into a plurality of blocks
[(para 91)]; generate a base encoded stream using a base encoder [(para 86: base layer encoder 121)]; determine a temporal mode for one or more further encoded enhancement streams for use in reconstructing the input video together with the base encoded stream, the one or more further encoded enhancement streams being generated using an enhancement encoder [(Fig. 15-16: selector 404 determines the coding mode of the enhancement layer)], wherein the temporal mode is one of a first temporal mode that does not apply non-zero values from a temporal buffer for generating the one or more further encoded enhancement streams [(intra coding => first temporal mode {para 95})] and a second temporal mode that does apply non-zero values from the temporal buffer for generating the one or more further encoded enhancement streams [(inter coding => second temporal mode {para 95}, which uses temporal data, i.e. data from a macroblock of an adjacent frame; the adjacent frame, for inter prediction, is provided from the first memory 700/temporal buffer {para 270, 297}, which will have non-zero values {para 108}; intra coding => first temporal mode {para 95} only uses the current frame and does not use an adjacent frame {para 92}, and therefore does not use any non-zero values from the temporal buffer 700)]; and generate the one or more further encoded enhancement streams based on data derived from the base encoded stream and the input video according to the determined temporal mode [(para 94, 97)], wherein generating the one or more further encoded enhancement streams comprises applying a transform to each of a series of blocks of the plurality of blocks [(para 99)], and wherein the temporal mode is determined for one or more of a frame, tile, or block of the input video [(para 290, mode selection on a frame basis)].
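The two temporal modes walked through in the mapping above are a compact mechanism: the first mode codes enhancement residuals as-is, while the second codes only the change relative to non-zero values held in a temporal buffer. A minimal sketch, using hypothetical names and a flattened 2x2 block (the actual encoder interfaces are not in the record):

```python
def encode_enhancement_block(residuals, temporal_buffer, temporal_mode):
    """Sketch of the two claimed temporal modes for one enhancement block.

    Mode 1 (intra-style): do not apply values from the temporal buffer.
    Mode 2 (inter-style): apply the buffer's non-zero values, coding only
    the change relative to the buffered block.
    """
    if temporal_mode == 1:
        coded = list(residuals)
    else:
        coded = [r - b for r, b in zip(residuals, temporal_buffer)]
    # Refresh the buffer with this block's residuals for the next frame.
    temporal_buffer[:] = residuals
    return coded

buffer = [0, 0, 0, 0]          # flattened 2x2 block, initially empty
frame1 = [4, 0, 1, 2]
out1 = encode_enhancement_block(frame1, buffer, temporal_mode=1)
frame2 = [4, 1, 1, 2]
out2 = encode_enhancement_block(frame2, buffer, temporal_mode=2)
print(out1)  # [4, 0, 1, 2] -- residuals pass through unchanged
print(out2)  # [0, 1, 0, 0] -- only the change vs. the buffer is coded
```

When successive frames are similar, mode 2 leaves mostly zeros to transform and entropy-code, which is the efficiency rationale behind buffering.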
In the same/related field of endeavor, Chien teaches frames being divided into a plurality of tiles and each tile of the plurality of tiles being divided into a plurality of blocks [(partitioning a video quadtree into tiles and blocks)]. However, the references fail to teach "assign a value to a block-based temporal parameter, wherein the value assigned to the block-based temporal parameter indicates the temporal mode for the block of the input video and further indicates the temporal mode for the tile comprising the block of the input video." Though Phillips teaches (para 71) that if a tile has all "I" blocks the tile is an "I"-tile, or intra coded, so that the temporal mode of the blocks indicates the temporal mode of the tile, and Seregin teaches (para 117) a flag for the first block of a tile indicating that a predictor palette is reset at the beginning of the tile, these are not equivalent to the limitation quoted above.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir.
1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet Web site contains terminal disclaimer forms which may be used; please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 2-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-15 of U.S. Patent No. 12192537. Although the claims at issue are not identical, they are not patentably distinct from each other because independent claim 2 is taught by patented claims 1 and 13 (as shown in the comparison below). The other independent claims 20-21 are also obvious over these claims, because claim 20 recites the corresponding decoder of the encoder of claim 2 and claim 21 recites the bitstream that would be generated by the encoder of claim 2. The dependent claims are obvious variations of the patented dependent claims.

Ap. No. 19/012258, claim 2:
An encoder configured to encode an input video into a plurality of encoded streams, such that the encoded streams may be combined to reconstruct the input video, the encoder configured to: receive an input video comprising respective frames, each frame of the respective frames being divided into a plurality of tiles and each tile of the plurality of tiles being divided into a plurality of blocks; generate a base encoded stream using a base encoder; determine a temporal mode for one or more further encoded enhancement streams for use in reconstructing the input video together with the base encoded stream, the one or more further encoded enhancement streams being generated using an enhancement encoder, wherein the temporal mode is one of a first temporal mode that does not apply non-zero values from a temporal buffer for generating the one or more further encoded enhancement streams and a second temporal mode that does apply non-zero values from the temporal buffer for generating the one or more further encoded enhancement streams; generate the one or more further encoded enhancement streams based on data derived from the base encoded stream and the input video according to the determined temporal mode, wherein generating the one or more further encoded enhancement streams comprises applying a transform to each of a series of blocks of the plurality of blocks, and wherein the temporal mode is determined for one or more of a frame, tile or block of the input video; and assign a value to a block-based temporal parameter, wherein the value assigned to the block-based temporal parameter indicates the temporal mode for the block of the input video and further indicates the temporal mode for the tile comprising the block of the input video.

US 12192537 B2, claim 1:
An encoder configured to encode an input video into a plurality of encoded streams, such that the encoded streams may be combined to reconstruct the input video, the encoder configured to: receive an input video comprising respective frames, each frame of the respective frames being divided into a plurality of tiles and each tile of the plurality of tiles being divided into a plurality of blocks; generate a base encoded stream using a base encoder; determine a temporal mode for one or more further encoded enhancement streams for use in reconstructing the input video together with the base encoded stream, the one or more further encoded enhancement streams being generated using an enhancement encoder, wherein the temporal mode is one of a first temporal mode that does not apply non-zero values from a temporal buffer for generating the one or more further encoded enhancement streams and a second temporal mode that does apply non zero values from the temporal buffer for generating the one or more further encoded enhancement streams; and generate the one or more further encoded enhancement streams based on data derived from the base encoded stream and the input video according to the determined temporal mode, wherein generating the one or more further encoded enhancement streams comprises applying a transform to each of a series of blocks of the plurality of blocks, and wherein the temporal mode is determined for one or more of a frame, tile, or block of the input video, wherein the encoder is configured to: determine the temporal mode for a second frame of the input video, subsequent to a first frame; and omit a quantized value of a transformed block of the first frame from the one or more further encoded enhancement streams based on the temporal mode determined for the second frame, wherein the encoder is configured to use the temporal mode determined for the second frame to control a comparison between the quantized value and one or more thresholds to determine 
whether the quantized value is to be omitted.

US 12192537 B2, claim 11: The encoder of claim 1, wherein the encoder is configured to assign a respective value to at least one of: a frame-based temporal parameter for a frame of the input video; a tile-based temporal parameter for a tile of the input video; and a block-based temporal parameter for a block of the input video, wherein the value assigned to the frame-based temporal parameter indicates the temporal mode for the frame of the input video, the value assigned to the tile-based temporal parameter indicates the temporal mode for the tile of the input video, and the value assigned to the block-based temporal parameter indicates the temporal mode for the block of the input video.

US 12192537 B2, claim 13: The encoder of claim 11, wherein the encoder is configured to assign the value of the block-based temporal parameter for the block of the input video to further indicate the temporal mode for the tile comprising the block.

Claims 2-21 are also rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-16 of U.S. Patent No. 11792440. Although the claims at issue are not identical, they are not patentably distinct from each other because independent claim 2 is taught by patented claims 1 and 13 (as shown in the comparison below). The other independent claims 20-21 are also obvious over these claims, because claim 20 recites the corresponding decoder of the encoder of claim 2 and claim 21 recites the bitstream that would be generated by the encoder of claim 2. The dependent claims are obvious variations of the patented dependent claims.

Ap. No. 19/012258, claim 2:
An encoder configured to encode an input video into a plurality of encoded streams, such that the encoded streams may be combined to reconstruct the input video, the encoder configured to: receive an input video comprising respective frames, each frame of the respective frames being divided into a plurality of tiles and each tile of the plurality of tiles being divided into a plurality of blocks; generate a base encoded stream using a base encoder; determine a temporal mode for one or more further encoded enhancement streams for use in reconstructing the input video together with the base encoded stream, the one or more further encoded enhancement streams being generated using an enhancement encoder, wherein the temporal mode is one of a first temporal mode that does not apply non-zero values from a temporal buffer for generating the one or more further encoded enhancement streams and a second temporal mode that does apply non-zero values from the temporal buffer for generating the one or more further encoded enhancement streams; generate the one or more further encoded enhancement streams based on data derived from the base encoded stream and the input video according to the determined temporal mode, wherein generating the one or more further encoded enhancement streams comprises applying a transform to each of a series of blocks of the plurality of blocks, and wherein the temporal mode is determined for one or more of a frame, tile or block of the input video; and assign a value to a block-based temporal parameter, wherein the value assigned to the block-based temporal parameter indicates the temporal mode for the block of the input video and further indicates the temporal mode for the tile comprising the block of the input video.

US 11792440 B2, claim 1:
An encoder configured to encode an input video into a plurality of encoded streams, such that the encoded streams may be combined to reconstruct the input video, the encoder configured to: receive an input video comprising respective frames, each frame of the respective frames being divided into a plurality of tiles and each tile of the plurality of tiles being divided into a plurality of blocks; generate a base encoded stream using a base encoder; determine a temporal mode for one or more further encoded enhancement streams for use in reconstructing the input video together with the base encoded stream, the one or more further encoded enhancement streams being generated using an enhancement encoder, wherein the temporal mode is one of a first temporal mode that does not apply non-zero values from a temporal buffer for generating the one or more further encoded enhancement streams and a second temporal mode that does apply non zero values from the temporal buffer for generating the one or more further encoded enhancement streams; and generate the one or more further encoded enhancement streams based on data derived from the base encoded stream and the input video according to the determined temporal mode, wherein generating the one or more further encoded enhancement streams comprises applying a transform to each of a series of blocks of the plurality of blocks, and wherein the temporal mode is determined for one or more of a frame, tile, or block of the input video, wherein the encoder is configured to encode, separately from the one or more further encoded streams, temporal mode signalling data indicating the temporal mode for the one or more further encoded streams, wherein the encoder is configured to encode the temporal mode signalling data using run-length encoding, wherein the run-length encoding is performed using the same run-length encoding process as a run-length encoding process used by the enhancement encoder to encode the one or more further encoded 
enhancement streams, wherein the encoder is configured to: encode temporal mode signalling data indicating the temporal mode of a first block within a tile using the run-length encoding, the temporal mode of the first block being the second temporal mode; and skip the run-length encoding of the temporal mode signalling data of remaining blocks within the tile.

US 11792440 B2, claim 11: The encoder of claim 1, wherein the encoder is configured to assign a respective value to at least one of: a frame-based temporal parameter for a frame of the input video; a tile-based temporal parameter for a tile of the input video; and a block-based temporal parameter for a block of the input video, wherein the value assigned to the frame-based temporal parameter indicates the temporal mode for the frame of the input video, the value assigned to the tile-based temporal parameter indicates the temporal mode for the tile of the input video, and the value assigned to the block-based temporal parameter indicates the temporal mode for the block of the input video.

US 11792440 B2, claim 13: The encoder of claim 11, wherein the encoder is configured to assign the value of the block-based temporal parameter for the block of the input video to further indicate the temporal mode for the tile comprising the block.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Shahan Rahaman, whose telephone number is (571) 270-1438. The examiner can normally be reached 7am - 3:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nasser Goodarzi, can be reached at (571) 272-4195. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of an application may be obtained from Patent Center.
Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/SHAHAN UR RAHAMAN/
Primary Examiner, Art Unit 2426
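The limitation the examiner found allowable, combined with the run-length scheme recited in the '440 claims, amounts to hierarchical mode signalling: a single per-block parameter value can set the temporal mode of the whole tile, letting signalling for the tile's remaining blocks be skipped. A minimal sketch with hypothetical parameter values (not the actual claimed bitstream syntax):

```python
# Hypothetical parameter values (illustration only):
#   0 -> first temporal mode for this block
#   1 -> second temporal mode for this block
#   2 -> second temporal mode for this block AND for the whole tile

def signal_tile(block_modes, tile_is_second_mode):
    """Emit one temporal parameter per block; a tile-wide value on the
    first block lets signalling for the remaining blocks be skipped."""
    if tile_is_second_mode:
        return [2]  # one value covers the entire tile
    return [1 if m == "second" else 0 for m in block_modes]

def parse_tile(params, blocks_per_tile):
    """Recover per-block temporal modes from the signalled parameters."""
    if params and params[0] == 2:
        return ["second"] * blocks_per_tile
    return ["second" if p == 1 else "first" for p in params]

modes = ["first", "second", "first", "first"]
print(signal_tile(modes, tile_is_second_mode=False))  # [0, 1, 0, 0]
print(signal_tile(modes, tile_is_second_mode=True))   # [2]
print(parse_tile([2], blocks_per_tile=4))             # ['second'] * 4
```

Decoding mirrors the rule: a leading tile-wide value expands to one mode for every block in the tile, which is exactly what allows the encoder to skip the per-block run-length signalling for the rest of the tile.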

Prosecution Timeline

Jan 07, 2025
Application Filed
May 29, 2025
Response after Non-Final Action
Jan 20, 2026
Non-Final Rejection — §101, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599294
IMAGE-RECORDING DEVICE FOR IMPROVED LOW LIGHT INTENSITY IMAGING AND ASSOCIATED IMAGE-RECORDING METHOD
2y 5m to grant; granted Apr 14, 2026
Patent 12602765
DEFECT INSPECTION SYSTEM AND DEFECT INSPECTION METHOD
2y 5m to grant; granted Apr 14, 2026
Patent 12598328
VIDEO SIGNAL PROCESSING METHOD AND DEVICE
2y 5m to grant; granted Apr 07, 2026
Patent 12593035
IMAGE ENCODING/DECODING METHOD AND DEVICE
2y 5m to grant; granted Mar 31, 2026
Patent 12586224
THREE-DIMENSIONAL SCANNING SYSTEM AND METHOD FOR OPERATING SAME
2y 5m to grant; granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 88% (+12.6%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 633 resolved cases by this examiner. Grant probability derived from career allow rate.
