DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Applicant’s preliminary amendment/response filed 12/15/2025 has been entered and made of record. Claim 1 was amended. Claims 2-20 were added. Claims 1-20 are pending in the application.
Claim Objections
Claim 8 is objected to because of the following informalities: the recitation “the lens profile” in claim 8 lacks clear antecedent basis. (For examination purposes, this limitation is interpreted to refer to the “lens data” recited in claim 1.) Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-3, 5, 10, 15-16, and 19-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over respective claims (see table below) of U.S. Patent No. 12,106,427. Although the claims at issue are not identical, they are not patentably distinct from each other.
Instant Claims      12106427 Claims
1-2                 1
3                   4
5                   7
10-11               5-6
15-16               8
19                  9
20                  12
For example:
Instant Claim 2:
A method, comprising:
tracking spatial coordinates of at least one camera during a video sequence forming a shot having multiple frames and outputting the tracked spatial coordinates as sensor data so that movement of the at least one camera is retraceable, wherein each of the at least one camera has a lens;
determining lens data corresponding to the lens of the at least one camera during the shot; and
sending the sensor data and the lens data to a render engine to replicate the shot frame by frame in a virtual environment using the sensor data and the lens data;
[Claim 2] generating a background scene for virtual production using the replicated shot.

12106427 Claim 1:
A method for video rendering a background scene without using a background plate, the method comprising:
tracking spatial coordinates of at least one camera during a video sequence forming a shot having multiple frames and outputting the tracked spatial coordinates as sensor data so that movement of the at least one camera is retraceable, wherein each of the at least one camera has a lens,
wherein the at least one camera captures images of the background scene in the shot without a subject;
creating a lens profile storing lens data corresponding to the lens of the at least one camera during the shot;
encoding the lens data;
sending the lens data to a render engine;
retracing the movement of the at least one camera during the shot at the render engine using the sensor data including the tracked spatial coordinates to replicate the shot virtually so that the background scene is rendered without using the background plate;
recreating the lens and one or more characteristics of the lens during the shot using the lens data; and
replicating the shot in a virtual environment using the retraced camera movement and the recreated one or more lens characteristics, wherein replicating the shot includes mimicking the lens and the lens characteristics, frame by frame, to replicate the shot virtually and to enable defining of a resolution for the background scene.
Regarding instant claim 20, claim 12 of 12106427 does not recite that the tracked spatial coordinates include six-degrees-of-freedom (6DoF) information. However, the concept and advantages of 6DoF information are well known and expected in the art (Official Notice). It would have been obvious for the tracked spatial coordinates to include 6DoF information in order to track both the translation and the rotation of the camera.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 8-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wick et al. (US 2021/0150804) in view of Jobe et al. (US 2016/0037148).
Regarding claim 1, Wick teaches/suggests: A method, comprising:
spatial coordinates of at least one camera during a video sequence forming a shot having multiple frames as sensor data so that movement of the at least one camera is retraceable, wherein each of the at least one camera has a lens (Wick [0045] “detecting the camera settings and the camera positions of the first image sequence … the capturing lens” [0066] “use sensors that directly detect a movement of the camera/lens”);
determining lens data corresponding to the lens of the at least one camera during the shot (Wick [0045] “The camera settings and camera positions can involve at least the position of the entrance pupil and the field of view of the capturing lens”); and
sending the sensor data and the lens data to a render engine to replicate the shot frame by frame in a virtual environment using the sensor data and the lens data (Wick [0046]-[0047] “transmitting the camera settings and camera positions as data 11 to a virtual camera 12 ... The virtual camera 12 can be, for example, a parameter set for settings of an image synthesis program 18 that can generate a virtual image sequence 14”).
Wick is silent regarding:
tracking spatial coordinates of at least one camera during a video sequence forming a shot having multiple frames and outputting the tracked spatial coordinates;
Jobe, however, teaches/suggests:
tracking spatial coordinates of at least one camera during a video sequence forming a shot having multiple frames and outputting the tracked spatial coordinates (Jobe [0031] “physically tracking the on-set camera 30 in real-time by using its positional data to reposition the virtual camera to the appropriate corresponding location within the 3D environment”);
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the real camera of Wick to be tracked, as taught/suggested by Jobe, in order to detect its positions.
Regarding claim 2, Wick as modified by Jobe teaches/suggests: The method of claim 1, further comprising:
generating a background scene for virtual production using the replicated shot (Wick [0051] “the first, real image sequence 22 and the second, virtual image sequence 24 can be combined to form a composite image sequence 26” [0004] “an image sequence having image contents generated as described, said image sequence showing for example virtual, computer-animated living organisms or backgrounds, is intended to be embedded for example into an image sequence filmed in reality”).
Regarding claim 3, Wick and Jobe are silent regarding: The method of claim 1, wherein the tracked spatial coordinates include six-degrees-of-freedom (6DoF) information. However, the concept and advantages of 6DoF information are well known and expected in the art (Official Notice). It would have been obvious for the camera positions of Wick as modified by Jobe to include 6DoF information to track the translation and rotation of the real camera.
Regarding claim 4, Wick as modified by Jobe teaches/suggests: The method of claim 1, wherein the sensor data and the lens data are output by wireless communication (Wick [0075] “a wireless or wired transmission of the lens/camera data to a rendering computer”).
Regarding claim 8, Wick as modified by Jobe teaches/suggests: The method of claim 1, wherein the lens profile is determined in real time (Wick [0075] “the lens/camera data can be used for rendering in real time during the capturing of the lens/camera data”).
Regarding claim 9, Wick as modified by Jobe teaches/suggests: The method of claim 1, wherein the spatial coordinates are tracked in real time (Jobe [0031] “physically tracking the on-set camera 30 in real-time by using its positional data to reposition the virtual camera to the appropriate corresponding location within the 3D environment”). The same rationale to combine as set forth in the rejection of claim 1 above is incorporated herein.
Regarding claim 10, Wick as modified by Jobe teaches/suggests: The method of claim 1, wherein the lens data includes a lens distortion profile, a focus distance (Wick [0028] “these pupil locations are functions of the focusing distance and zoom setting that are individually dependent on the lens … there are a number of lens parameters such as distortion”).
Wick and Jobe are silent regarding a nodal point. However, the concept and advantages of a nodal point are well known and expected in the art (Official Notice). It would have been obvious for the camera settings of Wick as modified by Jobe to include the nodal point so that the generated virtual image sequence corresponds to the real camera.
Regarding claim 11, Wick as modified by Jobe teaches/suggests: The method of claim 1, wherein the lens data includes an image plane distance to a nodal point (Wick [0028] “these pupil locations are functions of the focusing distance and zoom setting that are individually dependent on the lens”). The focusing distance meets the limitation “an image plane distance to a nodal point.”
Regarding claim 12, Wick as modified by Jobe teaches/suggests: The method of claim 1, wherein the lens data includes a focal length of the at least one camera (Wick [0061] “The lens/camera data can include … the focal length of the lens”).
Regarding claim 13, Wick as modified by Jobe teaches/suggests: The method of claim 12, wherein the focal length included in the lens data accounts for zoom and lens breathing (Wick [0028] “Just the perspective reproduction of a scene depends on the location of the entrance and exit pupils and the change in focal length of the lens. In this case, these pupil locations are functions of the focusing distance and zoom setting that are individually dependent on the lens”). The change in focal length meets the claimed lens breathing.
Regarding claim 14, Wick as modified by Jobe teaches/suggests: The method of claim 1, wherein the lens data includes lens shading (Wick [0028] “there are a number of lens parameters such as … vignetting”).
Regarding claim 15, Wick as modified by Jobe teaches/suggests: A system, comprising:
at least one camera to capture images of a background scene and output the captured images, wherein each of the at least one camera has a lens (Wick [0044]-[0045] “generating a first individual image sequence with a real camera 10 … the capturing lens”);
at least one sensor to track, in real time, spatial coordinates of the at least one camera during a video sequence forming a shot having multiple frames and output the tracked spatial coordinates as sensor data so that movement of the at least one camera is retraceable (Wick [0045] “detecting the camera settings and the camera positions of the first image sequence” [0066] “use sensors that directly detect a movement of the camera/lens” Jobe [0031] “physically tracking the on-set camera 30 in real-time by using its positional data to reposition the virtual camera to the appropriate corresponding location within the 3D environment”); and
processing circuitry (Wick [0025] “a data processing system”) configured to determine, in real time, lens data corresponding to the lens of the at least one camera during the shot (Wick [0045] “The camera settings and camera positions can involve at least the position of the entrance pupil and the field of view of the capturing lens” [0075] “the lens/camera data can be used for rendering in real time during the capturing of the lens/camera data”), and
send the sensor data and the lens data to a render engine to replicate the shot frame by frame in a virtual environment using the sensor data and the lens data (Wick [0046]-[0047] “transmitting the camera settings and camera positions as data 11 to a virtual camera 12 ... The virtual camera 12 can be, for example, a parameter set for settings of an image synthesis program 18 that can generate a virtual image sequence 14”).
The same rationale to combine as set forth in the rejection of claim 1 above is incorporated herein.
Claims 16-18 recite limitation(s) similar in scope to those of claims 2-4, respectively, and are rejected for the same reason(s).
Regarding claim 20, Wick as modified by Jobe teaches/suggests: A non-transitory computer-readable storage medium storing computer-readable instructions thereon which, when executed by processing circuitry, cause the processing circuitry to perform a method (Wick [0025] “a computer program”), the method comprising:
receiving tracked spatial coordinates of at least one camera captured during a video sequence forming a shot having multiple frames so that movement of the at least one camera is retraceable, wherein each of the at least one camera has a lens (Wick [0045] “detecting the camera settings and the camera positions of the first image sequence … the capturing lens” [0066] “use sensors that directly detect a movement of the camera/lens” Jobe [0031] “physically tracking the on-set camera 30 in real-time by using its positional data to reposition the virtual camera to the appropriate corresponding location within the 3D environment”);
receiving lens data including the lens and one or more characteristics of the lens of the at least one camera during the shot (Wick [0045] “The camera settings and camera positions can involve at least the position of the entrance pupil and the field of view of the capturing lens”); and
sending the tracked spatial coordinates and the lens data to a render engine to replicate the shot frame by frame in a virtual environment using the tracked spatial coordinates and the lens data (Wick [0046]-[0047] “transmitting the camera settings and camera positions as data 11 to a virtual camera 12 ... The virtual camera 12 can be, for example, a parameter set for settings of an image synthesis program 18 that can generate a virtual image sequence 14” Jobe [0031] “physically tracking the on-set camera 30 in real-time by using its positional data to reposition the virtual camera to the appropriate corresponding location within the 3D environment”).
The same rationale to combine as set forth in the rejection of claim 1 above is incorporated herein.
Wick and Jobe are silent regarding the tracked spatial coordinates including six-degrees-of-freedom (6DoF) information. However, the concept and advantages of 6DoF information are well known and expected in the art (Official Notice). It would have been obvious for the camera positions of Wick as modified by Jobe to include 6DoF information in order to track both the translation and the rotation of the real camera.
Claims 5 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wick et al. (US 2021/0150804) in view of Jobe et al. (US 2016/0037148) as applied to claims 1 and 15 above, and further in view of Houskeeper (US 6014163).
Regarding claim 5, Wick further discloses in [0051]: “the first, real image sequence 22 and the second, virtual image sequence 24 can be combined to form a composite image sequence 26.” However, Wick and Jobe are silent regarding: The method of claim 1, further comprising:
synchronizing the lens data to respective frames of the shot.
Houskeeper, however, teaches/suggests:
synchronizing the lens data to respective frames of the shot (Houskeeper col. 6 ll. 42-48 “the video delay 34 synchronizes the camera field of view data with the virtual image data and thus, properly sets the data streams for compositing the data of the two images”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the camera settings of Wick as modified by Jobe to be synchronized, as taught/suggested by Houskeeper, for compositing.
Claim 19 recites limitation(s) similar in scope to those of claim 5, and is rejected for the same reason(s).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Wick et al. (US 2021/0150804) in view of Jobe et al. (US 2016/0037148) as applied to claim 1 above, and further in view of Yu (US 2019/0122121).
Regarding claim 6, Wick as modified by Jobe teaches/suggests: The method of claim 1, wherein the instructing is generated by the render engine (Wick [0047] “The virtual camera 12 can be, for example, a parameter set for settings of an image synthesis program 18 that can generate a virtual image sequence 14”). The instructing is an inherent feature of the image synthesis program (the render engine). Wick as modified by Jobe does not teach/suggest a plugin for the render engine. Yu, however, teaches/suggests a plugin for the render engine (Yu [0048] “a plug-in support for the game development engine with instant rendering capabilities”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the image synthesis program (the render engine) of Wick as modified by Jobe to include the plugin of Yu for its instant rendering capabilities.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Wick et al. (US 2021/0150804) in view of Jobe et al. (US 2016/0037148) as applied to claim 1 above, and further in view of Sarratori et al. (US 2016/0328421).
Regarding claim 7, Wick as modified by Jobe does not teach/suggest: The method of claim 1, further comprising:
rendering, by the render engine, the shot using an asset having a resolution of greater than or equal to 8K.
Sarratori, however, teaches/suggests an asset having a resolution of greater than or equal to 8K (Sarratori [0042] “a visual asset may have a vectorized representation 120 that is designed for use on large, high resolution displays such as 4k or 8k televisions”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the image synthesis (the rendering) of Wick as modified by Jobe to include the visual asset of Sarratori for high resolution displays. As such, Wick as modified by Jobe and Sarratori teaches/suggests:
rendering, by the render engine, the shot using an asset having a resolution of greater than or equal to 8K (Wick [0047] “The virtual camera 12 can be, for example, a parameter set for settings of an image synthesis program 18 that can generate a virtual image sequence 14” Sarratori [0042] “a visual asset may have a vectorized representation 120 that is designed for use on large, high resolution displays such as 4k or 8k televisions”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 2020/0204758 – 3D tracking data
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANH-TUAN V NGUYEN whose telephone number is 571-270-7513. The examiner can normally be reached on M-F 9AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JASON CHAN can be reached on 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANH-TUAN V NGUYEN/
Primary Examiner, Art Unit 2619