Prosecution Insights
Last updated: April 19, 2026
Application No. 18/819,064

SYSTEM AND METHOD FOR EFFICIENT TEXT-GUIDED GENERATION OF HIGH-RESOLUTION VIDEOS

Status: Non-Final OA (§103)
Filed: Aug 29, 2024
Examiner: TSENG, CHENG YUAN
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (703 granted / 835 resolved), +22.2% vs TC avg — above average
Interview Lift: +15.7% among resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 30 applications currently pending
Career History: 865 total applications across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 28.1% (-11.9% vs TC avg)
§102: 39.1% (-0.9% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 835 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the claim feature of "obtaining a condition-video pair dataset comprising training video clips and conditions associated with the training … training a video autoencoder using the condition-video pair dataset …" in claims 2, 18 and 21 must be shown or the features canceled from the claims. No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 13-17, 20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Sree Harsha (US 2025/0265752) in view of CN856 (CN118138856).

Referring to claims 1, 17, 20 and 22, Sree Harsha discloses a method for using diffusion models (fig. 2, diffusion models 222/210) to generate a requested video (fig. 2, target digital video 136) from a user prompt (fig. 2, text prompt 128, inputs 122), comprising: inputting (fig. 2, inputs 122) the user prompt for the requested video and Gaussian noise (fig. 2, source video 124; para. 0059, Gaussian noise) into a content frame diffusion model (fig. 2, frame generation module 216) to generate a content frame (fig. 2, frames 218) for the requested video (fig. 2, target digital video 136).

CN856 discloses inputting (fig. 2, descriptive text feature vector and reference image feature vector 203) the generated content frame, the user prompt, and the Gaussian noise into a motion diffusion model (fig. 2, second diffusion model 203) to generate motion latent representations (fig. 5, output of second diffusion model 500) corresponding to motions of attributes within the generated content frame that are encoded in a latent space (fig. 2, video frame feature vector 203); and generating the requested video (fig. 2, target video 204) based on inputting the generated content frame and the generated motion latent representations into a video decoder (fig. 2, decode 204).

Sree Harsha and CN856 are analogous art because they are from the same field of endeavor in using diffusion models to generate target video. At the time of the filing, it would have been obvious to a person of ordinary skill in the art, having the teaching of Sree Harsha and CN856 before him or her, to modify the target video generation of Sree Harsha to include the second diffusion model of CN856, whereby the target video is generated through two diffusion models. The suggestion and/or motivation for doing so would be obtaining the advantage of improved video quality in dimension and motion (CN856, abstract) as suggested by CN856. Therefore, it would have been obvious to combine Sree Harsha with CN856 to obtain the invention as specified in the application claims.

As to claim 13, Sree Harsha discloses the method of claim 1, wherein a step of inputting and generating is performed on a server (fig. 1, service provider system 102; para. 0031, server) to generate the requested video, and the requested video is streamed (para. 0032, streaming) to a user device (fig. 1, computing device 104).

As to claim 14, Sree Harsha discloses the method of claim 1, wherein a step of inputting and generating is performed within a cloud computing environment (fig. 8, cloud 814).

As to claim 15, Sree Harsha discloses the method of claim 1, wherein a step of inputting and generating is performed for training, testing, or certifying a neural network (fig. 4, neural network 410) employed in a machine.

As to claim 16, Sree Harsha discloses the method of claim 1, wherein a step of inputting and generating is performed on a virtual machine (fig. 8, cloud platform 814/816) comprising a portion of a graphics processing unit.

Allowable Subject Matter

Claims 2-12, 18-19 and 21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the claim limitation of "obtaining a condition-video pair dataset comprising training video clips and conditions associated with the training … training a video autoencoder using the condition-video pair dataset …" as required in dependent claims 2, 18 and 21.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to examiner Cheng-Yuan Tseng, whose telephone number is (571) 272-9772 and fax number is (571) 273-9772. The examiner can normally be reached Monday through Friday from 09:00 to 17:30 Eastern Time.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (in USA or Canada) or (571) 272-1000.

/CHENG YUAN TSENG/
Primary Examiner, Art Unit 2615
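For readers less familiar with the cited architecture, the two-stage pipeline the rejection assembles (a content-frame diffusion model from Sree Harsha, plus a motion diffusion model and video decoder from CN856) can be sketched in outline. This is a toy illustration only: every function below is a stub with invented names and shapes (`content_frame_diffusion`, `LATENT_DIM`, etc.), not code from either reference or from the application.

```python
import random

# Toy sketch of the claimed two-stage text-to-video pipeline, as the §103
# rejection maps it onto the references. All "models" are stubs; names,
# dimensions, and arithmetic are illustrative assumptions.

LATENT_DIM, NUM_FRAMES = 8, 4

def gaussian_noise(dim):
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

def content_frame_diffusion(prompt, noise):
    # Stage 1 (mapped to Sree Harsha's frame generation module 216):
    # denoise toward a single content-frame latent, conditioned on the prompt.
    return [x * 0.1 + len(prompt) % 3 for x in noise]

def motion_diffusion(content_frame, prompt, noise):
    # Stage 2 (mapped to CN856's second diffusion model): produce per-frame
    # motion latents conditioned on the content frame and the prompt.
    return [[c + n * 0.01 + t for c, n in zip(content_frame, noise)]
            for t in range(NUM_FRAMES)]

def video_decoder(content_frame, motion_latents):
    # Decode content + motion latents into output frames (here, a pass-through).
    return list(motion_latents)

prompt = "a red fox running through snow"
noise = gaussian_noise(LATENT_DIM)
frame = content_frame_diffusion(prompt, noise)
motion = motion_diffusion(frame, prompt, noise)
video = video_decoder(frame, motion)
print(len(video), len(video[0]))  # frames x latent dimension
```

The structural point the rejection relies on is visible in the data flow: the content frame produced by the first model is fed, together with the prompt and noise, into the second model, and both outputs go to the decoder.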

Prosecution Timeline

Aug 29, 2024: Application Filed
Feb 09, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602844: Graphics Processor (2y 5m to grant; granted Apr 14, 2026)
Patent 12586285: METHODS AND SYSTEMS FOR MARKERLESS FACIAL MOTION CAPTURE (2y 5m to grant; granted Mar 24, 2026)
Patent 12579415: Area-Efficient Convolutional Block (2y 5m to grant; granted Mar 17, 2026)
Patent 12572355: MODULAR ADDITION INSTRUCTION (2y 5m to grant; granted Mar 10, 2026)
Patent 12567173: Infant 2D Pose Estimation and Posture Detection System (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 99% (+15.7%)
Median Time to Grant: 2y 6m
PTA Risk: Low

Based on 835 resolved cases by this examiner. Grant probability derived from career allow rate.
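The headline probabilities follow directly from the raw counts reported under Examiner Intelligence. A minimal sketch, assuming (as the footnote states) that the grant probability is the career allow rate and that the with-interview figure is the base rate plus the +15.7% interview lift, capped at 99% for display:

```python
# Reproduce the dashboard's headline figures from the raw counts.
granted, resolved = 703, 835
allow_rate = granted / resolved                  # career allow rate, ~0.842

base = round(allow_rate * 100)                   # displayed grant probability
with_interview = min(99, round((allow_rate + 0.157) * 100))  # capped at 99%

print(base, with_interview)  # 84 99
```

The cap matters here: the uncapped sum (84.2% + 15.7%) rounds to 100%, so the 99% shown is best read as "near-certain given an interview," not a literal probability.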
