Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Vachha, “Creating Visual Effects with Neural Radiance Fields” (2023).
Regarding claim 18, Vachha discloses one or more computer-readable storage media storing instructions (Blender add-on script allows for more controlled camera trajectories of photorealistic scenes, compositing meshes and other environmental effects with NeRFs, and compositing multiple NeRFs in a single scene, page 1, column 2, lines 9-12) that, responsive to execution by a processing device (NeRF objects and environments in a single scene. This is achieved by rendering an RGB render and an accumulation render for each of the cropped NeRF objects, page 2, column 1, lines 23-26), cause the processing device to perform operations comprising
generating a composite video using a machine-learning model (composites featuring portal effects, NeRF objects floating in NeRF environments, and NeRFs composited into real-life footage such as an elevator interior as seen in Figure 4) by synchronizing movement of a viewpoint in relation to a subject captured in a subject video (By aligning the NeRF camera path with the virtual Blender camera, page 1, column 2; transforming the Blender camera path coordinate system to be relative to the origin of the NeRF representation in the Blender scene for each frame in the render, page 2, column 2, lines 1-3) with a three-dimensional representation of an environment generated from an environment video (NeRF representations can be imported into 3D creation tools, page 1, column 2).
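The per-object compositing Vachha describes (an RGB render paired with an accumulation render for each cropped NeRF object) corresponds to a standard alpha-over compositing step. The following is a minimal illustrative sketch of that operation; the function name and array shapes are assumptions for illustration and are not taken from the reference:

```python
import numpy as np

def composite_nerf_object(rgb, accumulation, background):
    """Alpha-composite a cropped NeRF object's RGB render over a
    background frame, using its accumulation render as the alpha matte.

    rgb, background: float arrays of shape (H, W, 3), values in [0, 1]
    accumulation:    float array of shape (H, W, 1), values in [0, 1]
    """
    alpha = np.clip(accumulation, 0.0, 1.0)
    # Standard "over" operator: object where opaque, background elsewhere.
    return alpha * rgb + (1.0 - alpha) * background

# Illustrative 2x2 frame: the NeRF object fully covers the left column only.
rgb = np.ones((2, 2, 3))          # white NeRF object render
bg = np.zeros((2, 2, 3))          # black background plate
acc = np.array([[[1.0], [0.0]],
                [[1.0], [0.0]]])  # accumulation (opacity) render
frame = composite_nerf_object(rgb, acc, bg)
```

Repeating this step per frame, for each cropped NeRF object in turn, yields the multi-NeRF composites the reference describes.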
Regarding claim 19, Vachha discloses the one or more computer-readable media as described in claim 18, wherein the three-dimensional representation is configured as a neural radiance field (Neural Radiance Fields (NeRFs) [Mildenhall et al. 2021] have emerged as a popular research area in graphics for constructing 3D environments and objects, page 1, column 1).
Regarding claim 20, Vachha discloses the one or more computer-readable media as described in claim 19, wherein the neural radiance field is configured as a trained model using machine learning (VFX pipelines using Nerfstudio, an open-source framework for training and rendering NeRFs [Tancik et al. 2023]. The approach involves using Blender, a widely used 3D creation software, to align camera paths and composite NeRF renders with meshes and other NeRFs, allowing for seamless integration of NeRFs into traditional VFX pipelines, page 1, column 2).
Allowable Subject Matter
Claims 1-17 are allowed.
The following is an examiner’s statement of reasons for allowance:
The closest prior art of record, namely, Jobe et al. (US 20160037148 A1), discloses a method comprising: producing, by a processing device (computer system 40), subject data defining a subject depicted in frames of a subject video and viewpoint data describing movement of a viewpoint with respect to the frames of the subject video (expand the utility of front/rear projection used in video filming by correcting the projected image to adjust for the perspective shift of the physical on-set camera, para. 0027; For example, as the virtual camera pans left, the rendered image will pan left. As the virtual camera moves laterally, the rendered backplate will change to reflect the new visual field of the virtual camera, para. 0028); forming, by the processing device, three-dimensional data defining a three-dimensional representation of an environment (figure 3) depicted in frames of an environment video (paras. 0025, 0031, 0040); generating, by the processing device, a composited video by aligning the environment with the movement of the viewpoint of the subject based on the subject data and the three-dimensional data (creating a believable integration of the subject and backplate composite image, para. 0027; combining the "displayed" image of the subject onto the focal plane and dynamically moving the virtual camera and focal plane through the 3D environment according to the on-set camera's physical movements, the computer is able to render a composite image that looks like the subject is occupying the 3D environment. In this example, the image of the physical world (i.e., what the on-set camera is seeing and recording on-set of the subject) is pulled into the 3D digital environment and composited into a 2D image that maintains the same perspective of both cameras, para. 0029; see also figures 8 and 9); and rendering, by the processing device, the composited video (the rendered backplate will change to reflect the new visual field of the virtual camera. It is possible to perform real-time rendering from a virtual camera in a 3D environment, and this process may be used for real-time compositing of a filming subject on a green-screen stage, paras. 0028-0029).
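The camera-path alignment underlying both references (slaving a virtual camera to a tracked on-set camera in Jobe, or re-expressing a Blender camera path relative to a NeRF's origin in Vachha) reduces to a per-frame rigid-transform change of basis. The sketch below illustrates that change of basis with generic 4x4 homogeneous matrices; the function name and example values are illustrative assumptions, not code from either reference:

```python
import numpy as np

def to_local_frame(camera_pose_world, anchor_pose_world):
    """Re-express a camera pose relative to an anchor's coordinate frame.

    Both arguments are 4x4 homogeneous camera-to-world matrices. The
    result is the camera's pose in the anchor's local frame, applied
    once per frame of the camera path.
    """
    return np.linalg.inv(anchor_pose_world) @ camera_pose_world

# Illustrative example: the anchor (e.g., a NeRF's origin) is translated
# by (1, 0, 0); a camera at world position (3, 0, 0) therefore sits at
# (2, 0, 0) in the anchor's local frame.
anchor = np.eye(4)
anchor[0, 3] = 1.0
cam = np.eye(4)
cam[0, 3] = 3.0
local = to_local_frame(cam, anchor)
```

Applying this transform to every frame of the path keeps the rendered backplate (or NeRF render) locked to the moving viewpoint, which is the synchronization both references describe.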
However, the closest prior art of record, namely, Jobe et al., does not disclose “producing, by a processing device, subject data defining a subject depicted in frames of a subject video and viewpoint data describing movement of a viewpoint with respect to the frames of the subject video” (in combination with the other claimed limitations and/or features), as claimed in independent claim 1.
Dependent claims 2-10 are allowable as they depend from an allowable base independent claim 1.
Independent claim 11 recites the same or similar subject matter and is also allowed.
Dependent claims 12-17 are allowable as they depend from an allowable base independent claim 11.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J LETT whose telephone number is (571)272-7464. The examiner can normally be reached Mon-Fri 9-6 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THOMAS J LETT/ Primary Examiner, Art Unit 2611