Prosecution Insights
Last updated: April 19, 2026
Application No. 18/422,634

FOUR-DIMENSIONAL OBJECT AND SCENE MODEL SYNTHESIS USING GENERATIVE MODELS

Non-Final OA (§103, §112)
Filed
Jan 25, 2024
Examiner
TSENG, CHARLES
Art Unit
2613
Tech Center
2600 — Communications
Assignee
Nvidia Corporation
OA Round
1 (Non-Final)
Grant Probability: 79% (Favorable)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (above average; 541 granted / 686 resolved; +16.9% vs TC avg)
Interview Lift: +32.1% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 6m average prosecution; 20 applications currently pending
Career History: 706 total applications across all art units

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)

Deltas are relative to an estimated Tech Center average. Based on career data from 686 resolved cases.
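The headline figures above follow from simple arithmetic on the stated career counts. The sketch below reproduces the career allowance rate and backs out the implied Tech Center baseline; note the baseline is inferred from the stated delta, not reported directly.

```python
# Reproduce the dashboard's headline figures from the stated career counts.
granted = 541
resolved = 686

allow_rate = granted / resolved                     # career allowance rate
print(f"Career allow rate: {allow_rate:.1%}")       # ~78.9%, displayed as 79%

# The "+16.9% vs TC avg" delta implies an estimated Tech Center baseline.
delta_vs_tc = 0.169
tc_avg_estimate = allow_rate - delta_vs_tc          # inferred, not given directly
print(f"Implied TC average: {tc_avg_estimate:.1%}")
```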

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 4, 6, 9-11, 14, 16 and 18-20 are objected to because of the following informalities:

For claim 1, Examiner believes this claim should be amended in the following manner: A processor comprising: one or more circuits to: receive an input indicating one or more features of content, the content comprising at least one of an object or a scene; initialize a content model, according to the input, to represent the input in three spatial dimensions and a time dimension; update the content model by rendering one or more sequences of frames from the content model, determining, using a latent diffusion model, a metric of the one or more sequences, and modifying the content model according to the metric, until a convergence condition is satisfied; and cause at least one of (i) a simulation to be performed using the

For claim 4, Examiner believes this claim should be amended in the following manner: The processor of claim 1, wherein the one or more circuits are to: update the content model according to a predetermined input identifying a camera pose for [[the]] a given sequence of frames and a time point for one or more frames of the given sequence of frames; and determine the metric according to the given sequence of frames rendered according to the predetermined input.

For claim 6, Examiner believes this claim should be amended in the following manner: The processor of claim 1, wherein the one or more circuits are to: render, from the modify the render, from the modified content model, a third frame for a third time point subsequent to the second time point according to the second frame.
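For readers less familiar with the claimed technique, the loop recited in claim 1 (render sequences from a 4D content model, score them with a latent diffusion model, modify the model, repeat until a convergence condition is satisfied) can be sketched as follows. This is a hypothetical illustration only: `render_fn`, `score_fn`, and the finite-difference update are stand-ins, not the applicant's implementation or that of any cited reference.

```python
def update_content_model(model, score_fn, render_fn,
                         step_size=0.1, tolerance=1e-6, max_iters=10_000):
    """Sketch of the claimed loop: render frames from the content model,
    compute a metric with a scoring model, and modify the content model
    until the metric stops improving (the convergence condition)."""
    prev = float("inf")
    for _ in range(max_iters):
        frames = render_fn(model)
        metric = score_fn(frames)
        if abs(prev - metric) < tolerance:
            break  # convergence condition satisfied
        # Stand-in modification: finite-difference descent on each parameter.
        eps = 1e-4
        grads = []
        for i, p in enumerate(model):
            bumped = model[:i] + [p + eps] + model[i + 1:]
            grads.append((score_fn(render_fn(bumped)) - metric) / eps)
        model = [p - step_size * g for p, g in zip(model, grads)]
        prev = metric
    return model

# Toy stand-ins: "rendering" is identity and the "diffusion metric" is a
# squared distance to a target, so the loop drives the model to the target.
target = [1.0, -2.0, 0.5]
render = lambda m: m
score = lambda frames: sum((f - t) ** 2 for f, t in zip(frames, target))

fitted = update_content_model([0.0, 0.0, 0.0], score, render)
```

In the claimed arrangement the metric would come from a latent diffusion model and the content model would be a 4D representation; the toy quadratic here only exercises the render-score-modify-converge control flow.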
For claim 9, Examiner believes this claim should be amended in the following manner: The processor of claim 1, wherein the one or more circuits are to identify, from the the object represented by the

For claim 10, Examiner believes this claim should be amended in the following manner: The processor of claim 1, wherein the processor is comprised in at least one of: a system for generating synthetic data; a system for performing simulation operations; a system for performing conversational artificial intelligence (AI) operations; a system for performing collaborative content creation for three-dimensional (3D) assets; a system comprising one or more large language models (LLMs); a system for performing digital twin operations; a system for performing light transport simulation; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.
For claim 11, Examiner believes this claim should be amended in the following manner: A system comprising: memory for storing instructions; one or more processing units to execute the instructions to execute operations comprising: receiving an input indicating one or more features of content, the content comprising at least one of an object or a scene; initializing a content model, according to the input, to represent the input in three spatial dimensions and a time dimension; updating the content model by rendering one or more sequences of frames from the content model, determining, using a latent diffusion model, a metric of the one or more sequences, and modifying the content model according to the metric, until a convergence condition is satisfied; and causing at least one of (i) a simulation to be performed using the

For claim 14, Examiner believes this claim should be amended in the following manner: The system of claim 11, wherein the one or more processing units are to execute operations comprising: updating the content model according to a predetermined input identifying a camera pose for [[the]] a given sequence of frames and a time point for one or more frames of the given sequence of frames; and determining the metric according to the given sequence of frames rendered according to the predetermined input.

For claim 16, Examiner believes this claim should be amended in the following manner: The system of claim 11, wherein the one or more processing units are to execute operations comprising: rendering, from the modifying the rendering, from the modified content model, a third frame for a third time point subsequent to the second time point according to the second frame.
For claim 18, Examiner believes this claim should be amended in the following manner: The system of claim 11, wherein the system is comprised in at least one of: a system for generating synthetic data; a system for performing simulation operations; a system for performing conversational artificial intelligence (AI) operations; a system for performing collaborative content creation for three-dimensional (3D) assets; a system comprising one or more large language models (LLMs); a system for performing digital twin operations; a system for performing light transport simulation; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources. 
For claim 19, Examiner believes this claim should be amended in the following manner: A method, comprising: receiving, by one or more processors, an input indicative of at least one of an object or a scene; initializing, by the one or more processors, based at least on the input, a plurality of spatial dimensions of a content model of the at least one of the object or the scene; updating, by the one or more processors, the content model to have a temporal dimension responsive to evaluating a plurality of frames rendered from the content model at a plurality of points in time using a latent diffusion model having one or more temporal layers outputting, by the one or more processors, one or more frames from the

For claim 20, Examiner believes this claim should be amended in the following manner: The method of claim 19, wherein the content model comprises a three-dimensional (3D) Gaussian splatting representation corresponding to the plurality of spatial dimensions coupled with a multilayer perceptron (MLP) corresponding to the temporal dimension.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-5, 7-9, 13-15, 17 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
For dependent claim 3, parent claim 1 establishes a “content model” and an “updated content model”. Claim 3 goes on to recite the phrase “the content model”, and it is unclear and ambiguous which of the previously established “content model” and “updated content model” is being referenced by the phrase “the content model”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.

For dependent claim 4, parent claim 1 establishes a “content model” and an “updated content model”. Claim 4 goes on to recite the phrase “the content model”, and it is unclear and ambiguous which of the previously established “content model” and “updated content model” is being referenced by the phrase “the content model”. Furthermore, claim 4 recites the phrase “the given sequence of frames”; neither parent claim 1 nor claim 4 provides antecedent basis for this phrase, and the phrase “the given sequence of frames” is indefinite. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.

For dependent claim 5, parent claim 1 establishes a “content model” and an “updated content model”. Claim 5 goes on to recite the phrase “the content model”, and it is unclear and ambiguous which of the previously established “content model” and “updated content model” is being referenced by the phrase “the content model”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.

For dependent claim 7, parent claim 1 establishes a “content model” and an “updated content model”. Claim 7 goes on to recite the phrase “the content model”, and it is unclear and ambiguous which of the previously established “content model” and “updated content model” is being referenced by the phrase “the content model”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.
For dependent claim 8, parent claim 1 establishes a “content model” and an “updated content model”. Claim 8 goes on to recite the phrase “the content model”, and it is unclear and ambiguous which of the previously established “content model” and “updated content model” is being referenced by the phrase “the content model”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.

For dependent claim 9, parent claim 1 establishes a first “an object” and claim 9 establishes a second “an object”. Claim 9 goes on to recite the phrase “the object”, and it is unclear and ambiguous which of the previously established first “object” and second “object” is being referenced by the phrase “the object”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.

For dependent claim 13, parent claim 11 establishes a “content model” and an “updated content model”. Claim 13 goes on to recite the phrase “the content model”, and it is unclear and ambiguous which of the previously established “content model” and “updated content model” is being referenced by the phrase “the content model”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.

For dependent claim 14, parent claim 11 establishes a “content model” and an “updated content model”. Claim 14 goes on to recite the phrase “the content model”, and it is unclear and ambiguous which of the previously established “content model” and “updated content model” is being referenced by the phrase “the content model”. Furthermore, claim 14 recites the phrase “the given sequence of frames”; neither parent claim 11 nor claim 14 provides antecedent basis for this phrase, and the phrase “the given sequence of frames” is indefinite. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.
For dependent claim 15, parent claim 11 establishes a “content model” and an “updated content model”. Claim 15 goes on to recite the phrase “the content model”, and it is unclear and ambiguous which of the previously established “content model” and “updated content model” is being referenced by the phrase “the content model”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.

For dependent claim 17, parent claim 11 establishes a “content model” and an “updated content model”. Claim 17 goes on to recite the phrase “the content model”, and it is unclear and ambiguous which of the previously established “content model” and “updated content model” is being referenced by the phrase “the content model”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.

For dependent claim 20, parent claim 19 establishes a “content model” and an “updated content model”. Claim 20 goes on to recite the phrase “the content model”, and it is unclear and ambiguous which of the previously established “content model” and “updated content model” is being referenced by the phrase “the content model”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.

Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 2, 6-12 and 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al., Animate124: Animating One Image to 4D Dynamic Scene, arXiv, November 2023 (hereinafter “Zhao”) in view of Alesiani et al. (U.S. Patent Application Publication 2024/0296919 A1, hereinafter “Alesiani”).

For claim 1, Zhao discloses a framework (page 1) to: receive an input indicating one or more features of content, the content comprising at least one of an object or a scene (disclosing acquisition of an input image and a text prompt to indicate features of content comprising an object and a scene (pages 1-2/Fig. 1)); initialize a content model, according to the input, to represent the input in three spatial dimensions and a time dimension (disclosing initialization of a dynamic neural radiance field (NeRF) model as a content model according to the input to represent the input in three spatial dimensions and a time dimension (pages 1-2/Fig.
1; page 4/Fig. 2)); update the content model by rendering one or more sequences of frames from the content model using a latent diffusion model (disclosing the dynamic NeRF model is dynamic to be updated by rendering frames from the dynamic NeRF model using a latent video diffusion model (page 4)); and cause at least one of (i) a simulation to be performed using the updated content model or (ii) presentation of the updated content model using a display (disclosing the updated dynamic NeRF model is presented for display (page 9/Fig. 6; and page 11/Fig. 8)).

Zhao does not disclose a processor comprising one or more circuits to determine, using a latent diffusion model, a metric of one or more sequences of frames, and modifying the content model according to the metric, until a convergence condition is satisfied. However, these limitations are well-known in the art as disclosed in Alesiani. Alesiani similarly discloses a system and method for implementing a sequential diffusion model as a content model to update the sequential diffusion model to implement a latent diffusion model for image synthesis to perform a simulation (par. 5, 37, 61 and 138). Alesiani explains its system is implemented with a processor comprising circuitry (par. 142). Alesiani further explains the system determines, using the latent diffusion model, an equation as a metric of a sequence of frames and modifies the sequential diffusion model according to the metric to satisfy a convergence condition (par. 62, 78 and 117). It follows Zhao may be accordingly modified with the teachings of Alesiani to implement its framework with a processor and circuitry to determine, using its latent diffusion model, a metric of its one or more sequences and to modify its content model according to its metric until a convergence condition is satisfied.
A person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention would find it obvious to modify Zhao with the teachings of Alesiani. Alesiani is analogous art in dealing with a system and method for implementing a sequential diffusion model as a content model to update the sequential diffusion model to implement a latent diffusion model for image synthesis to perform a simulation (par. 5, 37, 61 and 138). Alesiani discloses its use of a convergence condition is advantageous in appropriately conditioning a diffusion generation process to synthesize video (par. 62, 78 and 117). Consequently, a PHOSITA would incorporate the teachings of Alesiani into Zhao for appropriately conditioning a diffusion generation process to synthesize video. Therefore, claim 1 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.

For claim 2, depending on claim 1, Zhao as modified by Alesiani discloses wherein the latent diffusion model comprises one or more layers configured for the time dimension, and comprises or is coupled with an optimizer to determine the metric based at least on a gradient associated with a given frame of the one or more sequences of frames (Zhao discloses its latent video diffusion model includes layers of multilayer perceptrons configured for the time dimension and further includes an optimization stage as an optimizer (pages 2, 4, and 6); Alesiani similarly discloses a system and method for implementing a sequential diffusion model as a content model to update the sequential diffusion model to implement a latent diffusion model for image synthesis to perform a simulation (par. 5, 37, 61 and 138); Alesiani explains the system determines, using the latent diffusion model, an equation as a metric of a sequence of frames and modifies the sequential diffusion model according to the metric to satisfy a convergence condition (par.
62, 78 and 117); Alesiani further explains its system performs optimization to determine its metric based on a gradient associated with a frame of the sequence of frames (par. 39, 85, 91 and 117); and it follows Zhao may be accordingly modified with the teachings of Alesiani to implement its framework with a processor and circuitry to determine, using its latent diffusion model, a metric of its one or more sequences and to modify its content model according to its metric until a convergence condition is satisfied).

For claim 6, depending on claim 1, Zhao as modified by Alesiani discloses wherein the one or more circuits are to: render, from the updated content model, a first frame for a first time point and a second frame for a second time point subsequent to the first time point (Zhao discloses rendering, from the dynamic NeRF model, a first frame for a first timestep and a second frame for a second timestep subsequent to the first timestep (page 4; page 9/Fig. 6; and page 11/Fig. 8)); modify the updated content model according to the second frame (Zhao discloses the dynamic NeRF model is modified based on the second frame (page 4; page 9/Fig. 6; and page 11/Fig. 8)); and render, from the modified content model, a third frame for a third time point subsequent to the second time point according to the second frame (Zhao discloses rendering, from the dynamic NeRF model, a third frame for a third timestep subsequent to the second timestep based on the second frame (page 4; page 9/Fig. 6; and page 11/Fig. 8)).

For claim 7, depending on claim 1, Zhao as modified by Alesiani discloses wherein the input comprises natural language data and one or more images, and the one or more circuits are to update the content model according to the one or more images (Zhao discloses the input includes the text prompt as natural language data and an input image for updating the dynamic NeRF model according to the input image (pages 1-2/Fig. 1; page 4; page 9/Fig. 6; and page 11/Fig.
8); Alesiani similarly discloses a system and method for implementing a sequential diffusion model as a content model to update the sequential diffusion model to implement a latent diffusion model for image synthesis to perform a simulation (par. 5, 37, 61 and 138); Alesiani explains its system is implemented with a processor comprising circuitry (par. 142); and it follows Zhao may be accordingly modified with the teachings of Alesiani to implement its framework with a processor and circuitry to update its content model according to its one or more images).

For claim 8, depending on claim 1, Zhao as modified by Alesiani discloses wherein the one or more circuits are to update the content model according to a physics model to measure a physics-based realism of motion represented in the one or more sequences of frames (Alesiani similarly discloses a system and method for implementing a sequential diffusion model as a content model to update the sequential diffusion model to implement a latent diffusion model for image synthesis to perform a simulation (par. 5, 37, 61 and 138); Alesiani explains its system implements a physics-informed neural network as a physics model to determine physical loss with respect to physical law to measure physics-based realism of motion in frames (par. 49, 110 and 126); and it follows Zhao may be accordingly modified with the teachings of Alesiani to update its content model according to a physics model to measure a physics-based realism of motion in its one or more sequences of frames for realistic display of its frames).
For claim 9, depending on claim 1, Zhao as modified by Alesiani discloses wherein the one or more circuits are to identify, from the updated content model, at least one of a joint of an object represented by the updated content model, a movement property of the object, or a deformation property of the object (Zhao discloses identifying, from the dynamic NeRF model, motion as a movement property of the object (page 4; page 9/Fig. 6; and page 11/Fig. 8); Alesiani similarly discloses a system and method for implementing a sequential diffusion model as a content model to update the sequential diffusion model to implement a latent diffusion model for image synthesis to perform a simulation (par. 5, 37, 61 and 138); Alesiani likewise explains its system identifies physical movements as a movement property of an object to be simulated by its sequential diffusion model (par. 49); and it follows Zhao may be accordingly modified with the teachings of Alesiani to identify a movement property of its object to appropriately display its object with the movement property). 
For claim 10, depending on claim 1, Zhao as modified by Alesiani discloses wherein the processor is comprised in at least one of: a system for generating synthetic data; a system for performing simulation operations; a system for performing conversational AI operations; a system for performing collaborative content creation for 3D assets; a system comprising one or more large language models (LLMs); a system for performing digital twin operations; a system for performing light transport simulation; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Alesiani similarly discloses a system and method for implementing a sequential diffusion model as a content model to update the sequential diffusion model to implement a latent diffusion model for image synthesis to perform a simulation (par. 5, 37, 61 and 138); and it follows Zhao may be accordingly modified with the teachings of Alesiani to implement its framework in a simulation system to present a simulation of its object for display).

For claim 11, Zhao as modified by Alesiani discloses a system comprising: one or more processing units (Alesiani similarly discloses a system and method for implementing a sequential diffusion model as a content model to update the sequential diffusion model to implement a latent diffusion model for image synthesis to perform a simulation (par. 5, 37, 61 and 138); Alesiani explains its system is implemented with a processor comprising circuitry (par.
142); and it follows Zhao may be accordingly modified with the teachings of Alesiani to implement its framework with a processor and circuitry to appropriately carry out the functions of its framework) to execute operations as the processor of claim 1 (see above as to claim 1).

For claim 12, depending on claim 11, this claim is a combination of the limitations of claim 11 and claim 2. It follows claim 12 is rejected for the same reasons discussed above as to claim 11 and claim 2.

For claim 16, depending on claim 11, this claim is a combination of the limitations of claim 11 and claim 6. It follows claim 16 is rejected for the same reasons discussed above as to claim 11 and claim 6.

For claim 17, depending on claim 11, this claim is a combination of the limitations of claim 11 and claim 7. It follows claim 17 is rejected for the same reasons discussed above as to claim 11 and claim 7.

For claim 18, depending on claim 11, this claim is a combination of the limitations of claim 11 and claim 10. It follows claim 18 is rejected for the same reasons discussed above as to claim 11 and claim 10.

For claim 19, Zhao as modified by Alesiani discloses a method (Zhao discloses a method (page 1)), comprising: receiving, by one or more processors, an input indicative of at least one of an object or a scene (Zhao discloses acquisition of an input image and a text prompt to indicate features of content comprising an object and a scene (pages 1-2/Fig. 1); Alesiani similarly discloses a system and method for implementing a sequential diffusion model as a content model to update the sequential diffusion model to implement a latent diffusion model for image synthesis to perform a simulation (par. 5, 37, 61 and 138); Alesiani explains its system is implemented with a processor comprising circuitry (par.
142); and it follows Zhao may be accordingly modified with the teachings of Alesiani to implement its method with a processor and circuitry to appropriately carry out the functions of its method); initializing, by the one or more processors, based at least on the input, a plurality of spatial dimensions of a content model of the at least one of the object or the scene (Zhao discloses initialization of a dynamic neural radiance field (NeRF) model as a content model according to the input to represent the input in three spatial dimensions and a time dimension (pages 1-2/Fig. 1; page 4/Fig. 2)); updating, by the one or more processors, the content model to have a temporal dimension responsive to evaluating a plurality of frames rendered from the content model at a plurality of points in time using a latent diffusion model having one or more temporal layers, to generate an updated content model (Zhao discloses the dynamic NeRF model is dynamic to be updated over time as a temporal dimension responsive to evaluating frames rendered from the dynamic NeRF model over timestamps using a latent video diffusion model having layers of multilayer perceptrons to update the dynamic NeRF model (page 4; page 9/Fig. 6; and page 11/Fig. 8)); and outputting, by the one or more processors, one or more frames from the updated content model (Zhao discloses the display for frames from the updated dynamic NeRF model for output (page 9/Fig. 6; and page 11/Fig. 8)).

Claim(s) 5 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhao in view of Alesiani further in view of Martin Brualla et al. (U.S. Patent Application Publication 2024/0005590 A1, hereinafter “Brualla”).
For claim 5, depending on claim 1, Zhao as modified by Alesiani discloses wherein the content model comprises: at least one of a Gaussian splatting representation, a neural radiance field (NeRF), a mesh representation, or a point cloud (Zhao discloses its content model as the dynamic NeRF model (pages 1-2/Fig. 1; page 4/Fig. 2)).

Zhao as modified by Alesiani does not specifically disclose a deformation field to represent motion in one or more sequences of frames. However, these limitations are well-known in the art as disclosed in Brualla. Brualla similarly discloses a system and method for performing image synthesis using neural radiance fields (par. 2). Brualla explains its system implements a deformation field to represent movements as motion in frames for synthesizing images with a NeRF (par. 67-70). It follows Zhao and Alesiani may be accordingly modified with the teachings of Brualla to implement a deformation field to represent motion in its one or more sequences of frames.

A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Zhao and Alesiani with the teachings of Brualla. Brualla is analogous art in dealing with a system and method for performing image synthesis using neural radiance fields (par. 2). Brualla discloses its use of a deformation field is advantageous in representing movements in frames to facilitate appropriate image synthesis with neural radiance fields (par. 67-70). Consequently, a PHOSITA would incorporate the teachings of Brualla into Zhao and Alesiani for representing movements in frames to facilitate appropriate image synthesis with neural radiance fields. Therefore, claim 5 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.

For claim 15, depending on claim 11, this claim is a combination of the limitations of claim 11 and claim 5. It follows claim 15 is rejected for the same reasons discussed above as to claim 11 and claim 5.
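The deformation-field technique credited to Brualla can be illustrated with a minimal sketch (hypothetical stand-ins throughout, not code from any cited reference): a time-conditioned field maps an observed point to a displacement, and the displaced point is sampled from a static canonical model, so all motion is carried by the field rather than by the canonical geometry.

```python
import math

def deformation_field(point, t):
    """Toy deformation field: displaces points along x by a time-varying
    offset (a stand-in for the learned MLP a real system would use)."""
    return (0.3 * math.sin(2 * math.pi * t), 0.0, 0.0)

def canonical_density(point):
    """Static canonical model: a unit ball of density 1 at the origin."""
    return 1.0 if sum(c * c for c in point) <= 1.0 else 0.0

def query(point, t):
    """Dynamic query: warp the observed point into canonical space,
    then sample the static model there."""
    dx, dy, dz = deformation_field(point, t)
    x, y, z = point
    return canonical_density((x - dx, y - dy, z - dz))
```

For example, the point (1.2, 0, 0) lies outside the canonical ball at t = 0 but maps inside it at t = 0.25, so the rendered object appears to move even though the canonical model never changes.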
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhao in view of Alesiani further in view of Alimo et al. (U.S. Patent Application Publication 2024/0135623 A1, hereinafter "Alimo").

For claim 20, depending on claim 19, Zhao as modified by Alesiani discloses wherein the content model comprises a multilayer perceptron (MLP) corresponding to the temporal dimension (Zhao discloses its dynamic NeRF model includes layers of multilayer perceptrons corresponding to the time dimension (pages 2, 4, and 6)). Zhao as modified by Alesiani does not specifically disclose a 3D Gaussian splatting representation. However, these limitations are well-known in the art as disclosed in Alimo. Alimo similarly discloses a system and method for rendering and synthesizing images with neural radiance fields (par. 2 and 53-54). Alimo explains it is known to use 3D Gaussian splatting in place of NeRF to perform 3D reconstruction and novel view synthesis (par. 54). It follows Zhao and Alesiani may be accordingly modified with the teachings of Alimo to implement a 3D Gaussian splatting representation corresponding to its spatial dimensions in its content model and coupled with a multilayer perceptron corresponding to its temporal dimension.

A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Zhao and Alesiani with the teachings of Alimo. Alimo is analogous art in dealing with a system and method for rendering and synthesizing images with neural radiance fields (par. 2 and 53-54). Alimo discloses its use of a 3D Gaussian splatting representation is advantageous in appropriately facilitating 3D reconstruction and novel view synthesis for image synthesis (par. 2 and 53-54). Consequently, a PHOSITA would incorporate the teachings of Alimo into Zhao and Alesiani for appropriately facilitating 3D reconstruction and novel view synthesis for image synthesis.
Therefore, claim 20 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.

Allowable Subject Matter

Claims 3, 4, 13 and 14 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES TSENG whose telephone number is (571) 270-3857. The examiner can normally be reached 8-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES TSENG/
Primary Examiner, Art Unit 2613
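As technical context for the rejected independent claims: the recited update loop (render one or more sequences of frames from the content model, score them with a diffusion model to obtain a metric, and modify the content model until a convergence condition is satisfied) can be sketched abstractly as follows. Every helper below is a hypothetical stand-in, with a scalar "state" in place of a NeRF and a squared-error score in place of a latent diffusion metric; it illustrates the control flow only, not the applicant's or Zhao's actual method.

```python
import random

def render_frames(model, n_frames=4):
    # Stand-in renderer: each "frame" is the model's scalar state
    # plus a little noise, as if rendered at successive time points.
    return [model["state"] + random.gauss(0, 0.01) for _ in range(n_frames)]

def diffusion_metric(frames, target=1.0):
    # Stand-in for the latent-diffusion score: mean squared distance
    # of the rendered frames from a target appearance.
    return sum((f - target) ** 2 for f in frames) / len(frames)

def update_content_model(model, frames, lr=0.5, target=1.0):
    # Toy update step: move the state against the average rendering
    # error so the metric shrinks on the next iteration.
    err = sum(frames) / len(frames) - target
    model["state"] -= lr * err

random.seed(0)
model = {"state": 0.0}
for step in range(50):
    frames = render_frames(model)
    metric = diffusion_metric(frames)
    if metric < 1e-3:  # convergence condition satisfied
        break
    update_content_model(model, frames)
print(f"steps: {step}, state: {model['state']:.2f}")  # state approaches 1.0
```

The loop structure mirrors the claim language: render, score, modify, repeat until convergence; the scoring model acts only as a critic and never directly generates the content.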

Prosecution Timeline

Jan 25, 2024
Application Filed
Jan 12, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594021: EDITING METHOD OF DYNAMIC SPECTRUM PROGRAM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591405: SHARED CONTROL OF A VIRTUAL OBJECT BY MULTIPLE DEVICES
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579760: DIGITAL CONTENT PLATFORM INCLUDING METHODS AND SYSTEM FOR RECORDING AND STORING DIGITAL CONTENT
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572015: TRANSPARENT OPTICAL MODULE USING PIXEL PATCHES AND ASSOCIATED LENSLETS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566503: REPRESENTATION FORMAT FOR HAPTIC OBJECT
Granted Mar 03, 2026 (2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview (+32.1%): 99%
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 686 resolved cases by this examiner. Grant probability derived from career allow rate.
