Prosecution Insights
Last updated: April 19, 2026
Application No. 18/532,634

HYBRID SPATIO-TEMPORAL NEURAL MODELS FOR VIDEO COMPRESSION

Status: Non-Final OA §103
Filed: Dec 07, 2023
Examiner: FLORES, LEON
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Adeia Guides Inc.
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (above average; 1222 granted / 1350 resolved; +28.5% vs TC avg)
Interview Lift: +10.5% (moderate lift, based on resolved cases with interview)
Avg Prosecution: 2y 5m (typical timeline); 10 applications currently pending
Total Applications: 1360 across all art units (career history)

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 39.3% (-0.7% vs TC avg)
§102: 35.6% (-4.4% vs TC avg)
§112: 7.0% (-33.0% vs TC avg)
Deltas are vs. the Tech Center average estimate. Based on career data from 1350 resolved cases.
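As a quick consistency check on the table above (an illustrative sketch, not part of the vendor's tooling): adding each statute's rate to the magnitude of its deficit recovers the implied Tech Center baseline, and every row yields the same 40.0% estimate.

```python
# Recover the implied Tech Center average estimate from each statute's
# rejection rate and its delta vs. the TC average (values from the table).
rates = {
    "101": (8.1, -31.9),
    "103": (39.3, -0.7),
    "102": (35.6, -4.4),
    "112": (7.0, -33.0),
}
implied = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
# Every statute implies the same 40.0% Tech Center average estimate.
```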

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Terminal Disclaimer

The terminal disclaimer filed on 11/10/25 disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of 18/532638 has been reviewed and is accepted. The terminal disclaimer has been recorded.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "control circuitry" in claims 11, 13, 16, and 18-20. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dupont et al. (hereinafter Dupont) (US 2025/0168368 A1).

Re claim 1, Dupont discloses a method comprising: accessing, via a content server, video data comprising a sequence of frames (see fig. 1: 102; ¶ 60, 71, where it teaches input data comprising a plurality of input data values, and the encoder device (such as a server)); generating a first frame based on averaging pixel attributes of the sequence of frames (see fig. 1: 102; ¶ 71, where it teaches that input data 102 comprises a plurality of input data values; for example, the input data may correspond to an image or video, or to part (e.g., a patch) of one image or video, suggesting that the average was taken).

But the reference of Dupont fails to explicitly teach: determining a sequence level representation based on the first frame; training a neural network model based on the sequence of frames to determine a cross-resolution representation corresponding to the sequence of frames, wherein the training comprises generating a plurality of model parameters for reconstructing the sequence of frames based on the sequence level representation and the cross-resolution representation, wherein the plurality of model parameters comprises neural radiance network parameters; and transmitting, via the content server, bitstreams of the plurality of model parameters, the sequence level representation, and the cross-resolution representation.

However, the reference of Dupont does suggest: determining a sequence level representation based on the first frame (see fig. 1: 112; ¶ 72, 75, where it teaches generating synthesis neural network parameters); training a neural network model based on the sequence of frames to determine a cross-resolution representation corresponding to the sequence of frames, wherein the training comprises generating a plurality of model parameters for reconstructing the sequence of frames based on the sequence level representation and the cross-resolution representation, wherein the plurality of model parameters comprises neural radiance network parameters (see fig. 1: 106, 116; ¶ 72, 75-79, where it teaches generating decoder neural network parameters based on the input data; reconstruction data is generated based on parameters 106 and 112); and transmitting, via the content server, bitstreams of the plurality of model parameters, the sequence level representation, and the cross-resolution representation (see fig. 1: 104; ¶ 87, where it teaches transmitting the encoded data (108, 112, 118) to one or more client devices for decoding).

Therefore, it would have been obvious to one of ordinary skill in the art to incorporate these features into the system of Dupont, in the manner as claimed, for the benefit of optimizing the objective function (see ¶ 74, 77).

Re claim 2, Dupont discloses wherein the cross-resolution representation comprises latent features corresponding to each frame of the sequence of frames (see ¶ 28, where it teaches features of the latent values).

Re claim 3, Dupont discloses generating a bitstream of the cross-resolution representation based on quantization of the latent features (see ¶ 7, where it teaches quantizing the latent values).

Re claim 4, Dupont discloses reconstructing the sequence of frames by combining, via a channel transformer, the sequence level representation and the latent features (see fig. 1: 114; ¶ 79).

Re claim 5, Dupont discloses wherein the sequence level representation comprises first pixel attribute information corresponding to the first frame, wherein the cross-resolution representation comprises second pixel attribute information corresponding to the sequence of frames, and wherein the neural network model is trained to determine pixel attribute information for reconstructing the sequence of frames (see fig. 1; ¶ 72-79).

Re claim 6, Dupont discloses storing encodings of the plurality of model parameters, the sequence level representation, and the cross-resolution representation (see ¶ 43).

Re claim 7, Dupont discloses wherein the sequence of frames corresponds to a first resolution, and wherein generating the first frame further comprises downscaling the first frame based on the first resolution (see ¶ 71, where it teaches a patch of an image, suggesting the image was downscaled, and/or ¶ 6, having different resolutions).
Re claim 8, Dupont discloses wherein the downscaling comprises using a convolutional network model comprising a plurality of residual spatial attention blocks to produce, from first feature channels, expanded feature channels (see ¶ 28).

Re claim 9, Dupont discloses wherein determining a cross-resolution representation corresponding to the sequence of frames comprises using a convolutional network model comprising a plurality of residual spatial attention blocks to produce, from first feature channels, expanded feature channels (see ¶ 28).

Re claim 10, Dupont discloses wherein determining the sequence level representation is executed concurrently with training the neural network model based on the sequence of frames (see fig. 1; ¶ 71-79).

Claims 11-20 are system claims corresponding to method claims 1-10. Hence, the steps performed in method claims 1-10 would have necessitated the elements in system claims 11-20. Therefore, claims 11-20 have been analyzed and rejected with respect to claims 1-10, respectively.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEON FLORES, whose telephone number is (571) 270-1201. The examiner can normally be reached M-F 8am - 6pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, HENOK SHIFERAW, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LEON FLORES/
Primary Examiner, Art Unit 2676
December 15, 2025

Prosecution Timeline

Dec 07, 2023
Application Filed
Dec 15, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602919: GENERATIVE DATA AUGMENTATION WITH TASK LOSS GUIDED FINE-TUNING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12591968: ANALYSIS METHOD AND DEVICE FOR CEREBROVASCULAR IMAGE BASED ON CEREBROVASCULAR CHUNK FEATURES
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12592062: INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586368: METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR GENERATING IMAGE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586367: QUANTUM CONVOLUTIONAL NEURAL NETWORK CIRCUIT
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 99% (+10.5%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 1350 resolved cases by this examiner. Grant probability derived from career allow rate.
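As a rough illustration of where these headline figures come from (a sketch under stated assumptions, not the vendor's documented model): the 90% grant probability matches the career allow rate of 1222 granted out of 1350 resolved, and the with-interview figure is consistent with adding the +10.5 point interview lift and capping at 99%. The capping rule here is an assumption.

```python
# Hypothetical derivation of the dashboard's projections from career counts.
# The additive lift and the 0.99 cap are assumptions, not documented methodology.
def grant_projection(granted: int, resolved: int,
                     interview_lift_pts: float, cap: float = 0.99):
    base = granted / resolved                        # career allow rate
    with_interview = min(base + interview_lift_pts / 100, cap)
    return round(base, 3), round(with_interview, 3)

base, with_iv = grant_projection(1222, 1350, 10.5)
# base ≈ 0.905 (displayed as 90%); with_iv capped at 0.99 (displayed as 99%)
```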
