Prosecution Insights
Last updated: April 19, 2026
Application No. 18/407,688

VIDEO TO EVENT SIMULATION METHODS AND SYSTEMS

Non-Final OA (§102, §103, §112)
Filed: May 09, 2024
Examiner: CHEN, YU
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: The Regents of the University of California
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 68% (711 granted / 1052 resolved), +5.6% vs TC avg (above average)
Interview Lift: +29.9% (strong; allow rate for resolved cases with interview vs. without)
Typical Timeline: 2y 10m average prosecution; 110 applications currently pending
Career History: 1162 total applications across all art units

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Comparisons are against an estimated Tech Center average • Based on career data from 1052 resolved cases

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

35 USC § 112(f)

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and (C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "a backbone conversion network," "an event sampling module," and "video to event prediction pipeline" in claims 1-10. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. However, the examiner cannot find any specific hardware or computer structures in the specification to perform the recited claim functions. According to MPEP section 2181.II, a rejection under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, is appropriate if the specification discloses no corresponding algorithm associated with a computer or microprocessor.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Claim 1 is determined to invoke 35 U.S.C. 112(f). Because there is no disclosure of structure, material, or acts for performing the recited function, the claim fails to satisfy the requirements of 35 U.S.C. 112(b). See MPEP 2181.III. Dependent claims 2-10 are also rejected because of their dependencies on claim 1.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

As to claim 1, when a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under section 112(a). See MPEP 2181.IV. Dependent claims 2-10 are also rejected because of their respective dependencies.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhang et al. ("V2CE: Video to Continuous Events Simulator," 2024 IEEE International Conference on Robotics and Automation (ICRA) (2023): 12455-12461, 09/16/2023, https://doi.org/10.48550/arXiv.2309.08891).

As to claim 1, Zhang discloses a video to event prediction pipeline system, comprising: a backbone conversion network having a model that is configured to receive a raw active pixel sensor video sequence and convert it into 3D predicted voxels (Zhang, Abstract; Page 2, Fig. 2, "Motion-Aware Event Voxel Prediction Pipeline"; Page 2, "Motion-Aware Event Voxel Prediction"); an event sampling module configured to receive the 3D predicted voxels and create event timestamps in a continuous scale by leveraging nonlinear dynamics of event firing trends in each voxel of the 3D predicted voxels (Page 4, "Voxels to Continuous Events Sampling": "This second task aims to recover the exact event timestamps in a continuous scale from Stage1's output event voxel."); wherein the backbone conversion network comprises a series of training loss function modules, the training loss function modules teaching the backbone conversion network to account for variations in the active pixel sensor video sequence caused by adjustable camera parameters of the active pixel sensor video sequence (Page 3, "The first loss to introduce is the Spatial-Temporal-Pyramid Loss"; "Temporal-Pyramid Loss"; "Adversarial Loss (ADV Loss)").

As to claim 2, claim 1 is incorporated and Zhang discloses the adjustable camera parameters comprise one or more of exposure, ISO, and aperture (Page 2, "Additionally, both camera types have adjustable parameters such as exposure, ISO, and aperture").

As to claim 3, claim 1 is incorporated and Zhang discloses the training loss function module comprises a loss module that encourages the model to extract multi-scale information from adjacent voxels by applying coarse supra-voxel matching (Page 3, "The STP Loss encourages the model to extract multi-scale information from adjacent voxels, enhancing its robustness against noise by applying coarse supra-voxel matching.").

As to claim 4, claim 3 is incorporated and Zhang discloses the training loss function module comprises a loss module that encourages the model to prioritize neighboring events (Page 3, "Temporal-Pyramid Loss (TP Loss, LTP) is designed to prioritize neighboring events, which are crucial for voxel level event reconstruction.").

As to claim 5, claim 4 is incorporated and Zhang discloses the training loss function module comprises a loss module that encourages the model to align information flow between the predicted event frames and the active pixel sensor video sequence (Page 3, "This addresses the issue of sparsity in voxels and ensures better and aligned information flow between generated event frames and the input frame sequence.").

As to claim 6, claim 5 is incorporated and Zhang discloses the training loss function module comprises a loss module that encourages the model to enhance realness of the predicted 3D event based voxels by training a discriminator using ground truth and predicted voxels and real and fake samples (Page 3, "Adversarial Loss (ADV Loss, LADV) aims to enhance the realness of our generated event voxels.").

As to claim 7, claim 6 is incorporated and Zhang discloses the training loss function module comprises a loss module that encourages the model to compute average brightness of voxels exceeding a threshold and align with brightness of ground truth voxels (Page 3, "compute the average brightness Ia of voxels exceeding a threshold β, and align this Ia with that of the ground truth voxels.").

As to claim 8, claim 7 is incorporated and Zhang discloses the event sampling module ensures that each event influences a voxel series only for a predetermined duration (Page 4, "Each event influences the voxel series for a short and finite duration which can be characterized by a continuous-time unit step signal (with an on-time duration same as δ).").
As to claim 9, claim 1 is incorporated and Zhang discloses the event sampling module ensures that each event influences a voxel series only for a predetermined duration (Page 4, "Each event influences the voxel series for a short and finite duration which can be characterized by a continuous-time unit step signal (with an on-time duration same as δ).").

As to claim 10, claim 1 is incorporated and Zhang discloses the event sampling module assumes that each voxel of the 3D predicted voxels conforms to a slope distribution described by a probability density function (Page 4, "To accurately model this phenomenon, we assume that each voxel and its neighboring voxels conform to a slope distribution described by the Probability Density Function").

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2 are rejected under 35 U.S.C. 103 as being unpatentable over Hu, Yuhuang, Shih-Chii Liu, and Tobi Delbruck ("v2e: From video frames to realistic DVS events," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021) in view of Wang, Zihao W., et al. ("Event-driven video frame synthesis," Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019).

As to claim 1, Hu discloses a video to event prediction pipeline system, comprising: a backbone conversion network having a model that is configured to receive a raw active pixel sensor video sequence and convert it into 3D predicted voxels (Hu, Page 6, "Event voxel grid representation": "Our experiments use the event voxel grid method to convert N events into a 3D representation with size H × W × D [27, 11] to use as the network input for the Sec. 6 results. H and W are sensor height and width dimensions. D is a hyperparameter that defines the number of slices of the output voxel grid." Page 6, "This setting is identical to [11]. There are 5,900 intensity frame and event voxel grid pairs generated from the v2e day recording. These pairs are used as training samples for NGA. For experiments in Sec. 6.3, we also used the training and validation datasets from [11] that were generated from the real events."); an event sampling module configured to receive the 3D predicted voxels and create event timestamps in a continuous scale by leveraging nonlinear dynamics of event firing trends in each voxel of the 3D predicted voxels (Page 2, "Fig. 1 shows a simplified schematic of the DVS pixel circuit. The continuous-time process of generating events is illustrated in Fig. 1C. The DVS pixel bias current parameters control the pixel event threshold and analog bandwidth. In Fig. 1A, the input photocurrent generates a continuous logarithmic photoreceptor output voltage Vp. The change amplifier in Fig. 1B produces an inverted and amplified output voltage Vd. When Vd crosses either the ON or OFF threshold voltage, the pixel emits an event (via a shared digital output that is not shown)." "During the initial 'moderately bright' cycles, the signal photocurrent, Ip ≫ Idark, and the bandwidth of the photoreceptor, which depends on Ip, is high enough so that Vp can follow the input current fluctuations.").

Hu does not explicitly disclose wherein the backbone conversion network comprises a series of training loss function modules, the training loss function modules teaching the backbone conversion network to account for variations in the active pixel sensor video sequence caused by adjustable camera parameters of the active pixel sensor video sequence. Wang teaches the backbone conversion network comprises a series of training loss function modules, the training loss function modules teaching the backbone conversion network to account for variations in the active pixel sensor video sequence caused by adjustable camera parameters of the active pixel sensor video sequence (Wang, Abstract, "Our differentiable model enables iterative optimization of the latent video tensor via auto differentiation, which propagates the gradients of a loss function defined on the measured data." Page 2, "2) Prediction: Recent work on future frame prediction has proposed to use adversarial nets [28], temporal consistency losses [6] and layered cross convolution networks [43].").

Hu and Wang are considered to be analogous art because both pertain to video frame synthesis. It would have been obvious before the effective filing date of the claimed invention to have modified Hu with the features of "the backbone conversion network comprises a series of training loss function modules, the training loss function modules teaching the backbone conversion network to account for variations in the active pixel sensor video sequence caused by adjustable camera parameters of the active pixel sensor video sequence" as taught by Wang. The suggestion/motivation would have been to unify a variety of TVFS scenarios (Wang, Abstract).

As to claim 2, claim 1 is incorporated and the combination of Hu and Wang discloses the adjustable camera parameters comprise one or more of exposure, ISO, and aperture (Wang, Page 1, "When the motion in the scene is significantly faster than the capturing speed, the motion is usually under-sampled, resulting in motion blur or large discrepancies between consecutive frames, depending on the shutter speed (exposure time)." Fig. 9, "the events during exposure time").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN whose telephone number is (571) 270-7951. The examiner can normally be reached M-F 8-5 PST, mid-day flex. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YU CHEN/
Primary Examiner, Art Unit 2613
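
For orientation, the claim 1 mapping above describes a two-stage structure: a backbone network that converts an active pixel sensor frame sequence into a 3D event-voxel prediction, followed by a sampling step that recovers continuous event timestamps from each voxel, trained with a set of loss modules (spatial-temporal pyramid, temporal pyramid, adversarial). The sketch below is a minimal, illustrative rendering of that structure in Python/PyTorch; the module names, tensor shapes, loss weights, and the uniform timestamp spacing are assumptions made for illustration, not the applicant's or Zhang's actual implementation.

# Illustrative sketch of the two-stage structure described in the claim 1 mapping:
# (1) a backbone network maps an APS frame clip to a 3D event-voxel prediction,
# (2) an event sampling step assigns timestamps to events within each voxel's time bin.
# All names, shapes, and weights here are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BackboneConversionNetwork(nn.Module):
    """Maps a video clip (B, T, H, W) to predicted event voxels (B, D, H, W)."""
    def __init__(self, in_frames: int = 16, voxel_bins: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_frames, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, voxel_bins, kernel_size=3, padding=1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)

def sample_event_timestamps(voxels: torch.Tensor, bin_duration: float = 1e-3):
    """Toy 'event sampling module': for each voxel bin, emit round(count) events
    spread evenly inside that bin's time window. (The cited reference instead
    models a slope distribution over each voxel; even spacing is a stand-in.)"""
    events = []
    counts = voxels.round().clamp(min=0).long()
    for b, d, y, x in counts.nonzero(as_tuple=False).tolist():
        n = counts[b, d, y, x].item()
        t0 = d * bin_duration
        for k in range(n):
            events.append((b, x, y, t0 + (k + 0.5) * bin_duration / n))
    return events  # (batch, x, y, timestamp) tuples

def composite_loss(pred_voxels, gt_voxels, adv_term=torch.tensor(0.0)):
    """Composite training loss in the spirit of the loss modules recited in
    claims 3-7: a coarse (pooled) term, a fine voxel term, and an adversarial
    term supplied by a discriminator elsewhere. Weights are made up."""
    coarse = F.l1_loss(F.avg_pool2d(pred_voxels, 4), F.avg_pool2d(gt_voxels, 4))
    fine = F.l1_loss(pred_voxels, gt_voxels)
    return coarse + fine + 0.1 * adv_term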

Prosecution Timeline

May 09, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604497
THIN FILM TRANSISTOR AND ARRAY SUBSTRATE
2y 5m to grant Granted Apr 14, 2026
Patent 12597176
IMAGE GENERATOR AND METHOD OF IMAGE GENERATION
2y 5m to grant Granted Apr 07, 2026
Patent 12589481
TOOL ATTRIBUTE MANAGEMENT IN AUTOMATED TOOL CONTROL SYSTEMS
2y 5m to grant Granted Mar 31, 2026
Patent 12588347
DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12586265
LINE DRAWING METHOD, LINE DRAWING APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview: 98% (+29.9%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
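
As a sanity check, the headline figures follow from the career statistics above: 711 grants out of 1052 resolved cases is about 67.6%, displayed as the 68% grant probability, and adding the +29.9-point interview lift gives roughly 98%. The sketch below reproduces that arithmetic; it assumes the lift is simply additive in percentage points, which is an assumption for illustration, not the dashboard's documented methodology.

# Reproduces the headline projection figures from the career data shown above.
# Assumes the interview lift is additive in percentage points; the dashboard's
# actual methodology is not specified here.

granted, resolved = 711, 1052
interview_lift_pts = 29.9

base_grant_prob = 100 * granted / resolved                       # ~67.6% -> displayed as 68%
with_interview = min(base_grant_prob + interview_lift_pts, 100)  # ~97.5% -> displayed as 98%

print(f"Career allow rate: {base_grant_prob:.1f}%")
print(f"With interview:    {with_interview:.1f}%")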
