DETAILED ACTION
The present Office action is in response to the Request for Continued Examination (RCE) filed on 16 December 2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1, 7, and 11 have been amended. No claims have been cancelled or added. Claims 1-14 are pending and herein examined.
Response to Arguments
Applicant’s arguments, see Remarks, filed 16 December 2025, with respect to the rejection(s) of claim(s) 1, 7, and 11 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of the previously presented prior art and U.S. Publication No. 2006/0034602 A1 (hereinafter “Fukui”).
With regard to claim 1, previously rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Publication No. 2019/0222824 A1 (hereinafter “Sheridan”) in view of U.S. Publication No. 2014/0146084 A1 (hereinafter “Polo”), and further in view of U.S. Publication No. 2014/0369661 A1 (hereinafter “Partouche”), Applicant alleges:
“Partouche discloses only the user’s satisfaction or dissatisfaction as the basis for determining whether to re-shoot. For example, if the field of view or the position of objects in the video does not meet expectations, a re-shooting may be performed. Partouche does not disclose the use of the image processing control signal derived from the previous video (corresponding to the original) to re-shoot a new video (corresponding to the updated original video). The ‘re-shoot’ of Claim 1 involves altering the shooting parameters (i.e., adjusting the hardware function of the camera module), rather than re-shooting a video with the same shooting parameters that merely results in differences in object position or field of view.” (Remarks, p. 2.)
The Examiner acknowledges that Partouche discloses re-shooting to achieve a user’s satisfaction. See Partouche, ¶ [0016]. However, the user has more creative liberty than merely changing the field of view or the position of objects. For instance, a focal length can be changed while filming, and depending on the dynamics of the scene, the focal length could differ between takes. See Partouche, ¶ [0110]. Partouche’s disclosure of re-shooting can allow for changing camera hardware functions, because nothing restricts such actions; however, there is no express disclosure of changing one of the camera hardware functions. In the current rejection, Fukui is relied upon to show that, when a user decides to re-shoot, parameters of the camera are updated. See Fukui, FIG. 2, parameter changing S111.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5 and 7-14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2019/0222824 A1 (hereinafter “Sheridan”) in view of U.S. Publication No. 2014/0146084 A1 (hereinafter “Polo”), further in view of U.S. Publication No. 2014/0369661 A1 (hereinafter “Partouche”), and even further in view of U.S. Publication No. 2006/0034602 A1 (hereinafter “Fukui”).
Regarding claim 1, Sheridan discloses a virtual reality (VR) real-time filming and monitoring system ([0002], “Systems for capturing, streaming, and/or playing back immersive content, such as to simulate an immersive virtual reality environment”), configured to allow a user to shoot an object in the real world to generate an original video (FIG. 1, image capture system 103 with a plurality of cameras taking individual non-stitched video, [0062], “camera 116 can write video image data”) and play a first VR screening video (FIG. 1, playback device(s) 106 with display rendered content 142), and allow the user to input, in real-time, an image processing control signal ([0190], “The operator of the playback device 1000 may control one or more parameters via input device 1004 and/or select operations to be performed, e.g., optionally select to display 3D scene or 2D scene;” FIG. 4, step 414 includes the user’s desired orientation for the viewing space selected for stitching in step 416; [0156], “user being able to switch between the different positions”) (FIG. 2, playback device(s) 106 with display rendered content 142), characterized in that the system comprises:
a camera module (FIG. 1, image capture system 102), comprising a plurality of cameras (FIG. 1, camera pairs 114a-114c each with cameras 116a and 116b referenced collectively as camera 116), configured to shoot the object in the real world so as to generate an original video ([0062], “Each camera 116 can include image processing electronics and at least one image sensor residing within a camera housing 122. The camera 116 can write video image data captured by the image sensor to one or more storage devices”), wherein the original video comprises the plurality of non-stitched videos shot by the plurality of cameras ([0062], “Each camera 116 can include image processing electronics and at least one image sensor residing within a camera housing 122. The camera 116 can write video image data captured by the image sensor to one or more storage devices;” Note, the individual camera videos are unstitched at the step of video image data capture), respectively;
a first image processing module (FIG. 1, system 100 with image processing and content server 104 and playback device(s) 106), configured to process the plurality of non-stitched videos to generate a real-time video temporary data ([0138], “the playback device stitches together rendered frames in order to provide a contiguous viewing area”) according to the image processing control signal ([0190], “The operator of the playback device 1000 may control one or more parameters via input device 1004 and/or select operations to be performed, e.g., optionally select to display 3D scene or 2D scene;” FIG. 4, step 414 includes the user’s desired orientation for the viewing space selected for stitching in step 416; [0156], “user being able to switch between the different positions”), a relative position information of the plurality of cameras ([0156], “multiple camera rigs […] located at different physical locations […] the user being able to switch between the different positions and with the play back device 822 communicating the selected position from the playback device 822 to the content server 814.” Note, the images stitched for display correspond to the position of the selected rig of cameras. FIG. 2 is the calibration profile, and step 206 provides a physical relative position for an object and the cameras to generate a grid and map for determining intersections between images for stitching appropriately);
an output module (FIG. 1, playback device(s) 106), configured to generate the first VR screening video according to the real-time video temporary data ([0141], “at block 416, the display device 106 stitches or seams together the mapped left eye frame from the first camera pair 114a with the mapped left eye frame from the second camera pair 114b to create a composite left eye frame, and stitches or seams together the mapped right eye frame from the first camera pair 114a with the mapped right eye frame from the second camera pair 114b to create a composite right eye frame. […] The display device 106 then drives the display with the composite left and right eye images”);
a real-time play module (FIG. 1, playback device(s) 106), configured to play the first VR screening video ([0141], “at block 416, the display device 106 stitches or seams together the mapped left eye frame from the first camera pair 114a with the mapped left eye frame from the second camera pair 114b to create a composite left eye frame, and stitches or seams together the mapped right eye frame from the first camera pair 114a with the mapped right eye frame from the second camera pair 114b to create a composite right eye frame. […] The display device 106 then drives the display with the composite left and right eye images”),
wherein the camera module has a hardware function ([0063] discloses configuring various resolutions of the camera. [0064], “record and/or output video data at frame rates.” [0065], “The lens 124 can be in the form of a lens system including a number of optical, electronic, and/or mechanical components operating together to provide variable focus, aperture, and/or zoom”),
Sheridan fails to expressly disclose allow the user to input, in real time, […] an editing command into the VR real-time filming; and
an editing mode, configured to generate an edited data according to the real video temporary data and the editing command,
wherein the hardware function is adjusted based on the image processing control signal, and the camera module is further configured to re-shoot the object in the real world so as to generate an updated original video according to the hardware function been adjusted, wherein the image processing control signal for controlling the camera module to re-shoot the object in the real world is generated based on the first VR screening video,
wherein the first image processing module is further configured to process the updated original video according to the image processing control signal so as to generate an updated real-time video temporary data, wherein the updated real-time video temporary data is different from the real-time video temporary data.
However, Polo teaches allow the user to input, in real time, […] an editing command into the VR real-time filming (FIG. 4A illustrates a real-time image with augmented reality controls for editing and FIG. 4B illustrates a user having edited the real-time image with a virtual character 410 and changed weather. Note, these are also image processing control signals that are considered manipulated by a user in real time); and
an editing mode, configured to generate an edited data according to the real video temporary data and the editing command (FIG. 4A illustrates a real-time image with augmented reality controls for editing and FIG. 4B illustrates a user having edited the real-time image with a virtual character 410 and changed weather).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have edited the VR data in real time, as taught by Polo (FIGS. 4A-4B), in Sheridan’s invention. One would have been motivated to modify Sheridan’s invention, by incorporating Polo’s invention, to enable the user to enhance, replace, or otherwise augment real-world objects and elements ([0040]), which provides a customizable experience tailored to the user and improves their entertainment.
Sheridan and Polo fail to expressly disclose wherein the hardware function is adjusted based on the image processing control signal, and the camera module is further configured to re-shoot the object in the real world so as to generate an updated original video according to the hardware function been adjusted, wherein the image processing control signal for controlling the camera module to re-shoot the object in the real world is generated based on the first VR screening video,
wherein the first image processing module is further configured to process the updated original video according to the image processing control signal so as to generate an updated real-time video temporary data, wherein the updated real-time video temporary data is different from the real-time video temporary data.
However, Partouche teaches the camera module is further configured to re-shoot the object in the real world so as to generate an updated original video, wherein the image processing control signal for controlling the camera module to re-shoot the object in the real world is generated based on the first VR screening video (FIG. 9 depicts a flowchart for shooting, in which the director, iterating through the process and controlling filming and animation, provides the image processing control signal. [0119], “In a determination step 106, if the director considers the shot to be satisfactory (arrow Y) based on the composite images generated, he stops shooting the video footage (step 107);” [0120], “If the determination step 106 shows that the shot is not satisfactory (arrow N), he can take advantage of the fact that all the actors and camera operators are on hand and can shoot the scene again (return to step 105). If necessary, the animation can be changed in this step, as described above in relation to FIG. 8.” The composite image of FIG. 8 depicts the use of the virtual object),
wherein the first image processing module is further configured to process the updated original video according to the image processing control signal so as to generate an updated real-time video temporary data, wherein the updated real-time video temporary data is different from the real-time video temporary data (FIG. 9 depicts a flowchart for shooting, in which the director, iterating through the process and controlling filming and animation, provides the image processing control signal. [0119], “In a determination step 106, if the director considers the shot to be satisfactory (arrow Y) based on the composite images generated, he stops shooting the video footage (step 107);” [0120], “If the determination step 106 shows that the shot is not satisfactory (arrow N), he can take advantage of the fact that all the actors and camera operators are on hand and can shoot the scene again (return to step 105). If necessary, the animation can be changed in this step, as described above in relation to FIG. 8.” The composite image of FIG. 8 depicts the use of the virtual object. Note, changing the animation will change the metadata as well as any other aspect the director decides to change).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have re-shot the video and produced a second video with the desired effects, as taught by Partouche ([0119-0129]), in Sheridan and Polo’s invention. One would have been motivated to modify Sheridan and Polo’s invention, by incorporating Partouche’s invention, to allow re-shooting a scene until a user is satisfied, allowing more freedom when filming (Partouche: [0016-0017]).
Sheridan, Polo, and Partouche fail to expressly disclose wherein the hardware function is adjusted based on the image processing control signal, and the camera module is further configured to re-shoot the object in the real world so as to generate an updated original video according to the hardware function been adjusted.
However, Fukui teaches wherein the hardware function is adjusted based on the image processing control signal, and the camera module is further configured to re-shoot the object in the real world so as to generate an updated original video according to the hardware function been adjusted (FIG. 2, steps S101 to S107 capture and generate images, which are displayed in S108; then, in S109, the user can optionally choose to capture again (e.g., re-shoot) with new parameters set for the camera module in S111. [0076], “If the user determines a necessity of re-capturing when a warning is issued, the user can decide whether to immediately execute capturing again in consideration of a composition of an image displayed on the EVF.” [0117], “control parameters of the image capture unit 2, such as an amount of flash light, a capturing sensitivity, a shutter speed, and the like, are forcibly changed”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have re-captured the video with modified camera parameters, as taught by Fukui (FIG. 2), in Sheridan, Polo, and Partouche’s invention. One would have been motivated to modify Sheridan, Polo, and Partouche’s invention, by incorporating Fukui’s invention, to improve image quality, such as by preventing blurry outputs and unwanted red-eye on captured subjects (Fukui: [0109] and [0128]).
Regarding claim 2, Sheridan, Polo, Partouche, and Fukui disclose every limitation of claim 1, as outlined above. Additionally, Sheridan discloses the first image processing module includes:
a camera calibration unit (FIG. 1, system 100 with image processing and content server 104 and playback device(s) 106), configured to output an alignment information according to the original video (FIG. 1, calibration profile(s), see generation of calibration profile(s) in FIG. 2 and its use in FIG. 4 steps 412 to 414 with corresponding paragraphs);
a video stitching unit (FIG. 1, system 100 with image processing and content server 104 and playback device(s) 106), configured to output a stitched video according to the original video and the alignment information (FIG. 4, step 416 describes stitching the rendered frames identified by the calibration profile in step 414);
a color calibration unit (FIG. 1, system 100 with image processing and content server 104 and playback device(s) 106), configured to output a calibrated video according to the stitched video (FIG. 1, render calibrated content to 3D viewing environment 140 is output to display rendered content 142. Note, element 140 performs stitching and other calibrations, such as distortion corrections); and
a dual-document recordation unit (FIG. 1, system 100 with image processing and content server 104 and playback device(s) 106), configured to generate the real-time video temporary data according to the calibrated video and generate an original video temporary data ([0120] describes memory onboard the playback device 106 for storing data and [0294] describes the functionalities generated in combination with a memory. Note, the persistence of the generated images, both recorded and the real-time virtual elements are considered stored on the playback device during processing and display).
Regarding claim 3, Sheridan, Polo, Partouche, and Fukui disclose all of the limitations of claim 2, as outlined above. Additionally, Sheridan discloses characterized in that the first processing module further includes: a video playback and alignment unit (FIG. 1, system 100 with image processing and content server 104 and playback device(s) 106), configured to generate an aligned video according to the real-time video temporary data (FIG. 1, display rendered content 142; FIG. 4, display content with display device 418 that is stitched and aligned as per steps 414-416. Note, the combination of references relies on original data with supplemented real-time edited data from Polo’s invention, providing the generated display of multiple temporary data).
Regarding claim 4, Sheridan, Polo, Partouche, and Fukui disclose all of the limitations of claim 3, as outlined above. Additionally, Sheridan discloses characterized in that the video stitching unit is configured to generate the stitched video according to the original video, the alignment information, and the aligned video (FIG. 4 shows use of the calibration profile for calibrating (e.g., aligning) the video and then using the aligned video for stitching. Additionally, Polo discloses, in FIG. 4B, the output “aligned” video and controls for continually editing the output video, such as changing the virtual object in the output video, which will cause new stitching of a new virtual object in the aligned position).
Regarding claim 5, Sheridan, Polo, Partouche, and Fukui disclose all of the limitations of claim 3, as outlined above. Additionally, Polo discloses characterized in that the first image processing module further includes:
a green screen video unit, configured to generate a green screen video to the video stitching unit according to the aligned video, wherein, the video stitching unit is configured to generate the stitched video according to the original video, the alignment information, and the green screen video (FIGS. 2-4B disclose changing the augmented reality from the original information with filtered information providing location information of objects, and accompanying paragraphs [0055-0059] include specifics of adjusting the augmented content, wherein the “green screen video” is considered an output video that is further being edited with additional superimposed (i.e., stitched) content). The same motivation of claim 1 applies to claim 5.
Regarding claim 7, the limitations are the same as those in claim 1; however, they are written as a process instead of a machine. Therefore, the same rationale of claim 1 applies to claim 7.
Regarding claim 8, Sheridan, Polo, Partouche, and Fukui disclose all of the limitations of claim 7, as outlined above. Additionally, Partouche discloses characterized in that the method further comprises: determining whether to stop shooting the plurality of non-stitched videos according to the first VR screening video ([0119] describes that if the director considers the composite images satisfactory, he stops shooting the video footage). The same motivation of claim 1 applies to claim 8.
Regarding claim 9, Sheridan, Polo, Partouche, and Fukui disclose all of the limitations of claim 8, as outlined above. Additionally, Polo discloses characterized in that the method further comprises: editing the real-time video temporary data ([0012] describes further editing in post-capture as a post-production process. FIG. 4B illustrates a user having edited the real-time image with a virtual character 410 and changed weather). The same motivation of claim 1 applies to claim 9.
Regarding claim 10, Sheridan, Polo, Partouche, and Fukui disclose all of the limitations of claim 9, as outlined above. Additionally, Polo discloses characterized in that the method further comprises: generating a second VR screening video according to the original video and the edited real-time video temporary data (FIG. 4B depicts controls for continually changing the augmented content, wherein the further output of additional edited video is a “second VR screening video”). The same motivation of claim 1 applies to claim 10.
Regarding claim 11, the limitations are the same as those in claim 1; however, they are written as a process instead of a machine. Therefore, the same rationale of claim 1 applies to claim 11.
Regarding claim 12, the limitations are the same as those in claim 8. Therefore, the same motivation of claim 8 applies to claim 12.
Regarding claim 13, the limitations are the same as those in claims 9 and 10. Therefore, the same motivation of claims 9 and 10 applies to claim 13.
Regarding claim 14, Sheridan, Polo, Partouche, and Fukui disclose all of the limitations of claim 1, as outlined above. Additionally, Partouche discloses wherein the output module is further configured to generate an updated first VR screening video according to the updated real-time temporary data, the editing module is further configured to generate an updated edited data according to the updated real-time video temporary data and the editing command, and the real-time play module is configured to play the updated first VR screening video (FIG. 9 depicts a flowchart for shooting, in which the director, iterating through the process and controlling filming and animation, provides the image processing control signal. [0119], “In a determination step 106, if the director considers the shot to be satisfactory (arrow Y) based on the composite images generated, he stops shooting the video footage (step 107);” [0120], “If the determination step 106 shows that the shot is not satisfactory (arrow N), he can take advantage of the fact that all the actors and camera operators are on hand and can shoot the scene again (return to step 105). If necessary, the animation can be changed in this step, as described above in relation to FIG. 8.” The composite image of FIG. 8 depicts the use of the virtual object. Note, the updated data represents the same information addressed in claim 1 with Sheridan and Polo’s disclosures, only applied to the re-shot video of Partouche). The same motivation of claim 1 applies to claim 14.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2019/0222824 A1 (hereinafter “Sheridan”) in view of U.S. Publication No. 2014/0146084 A1 (hereinafter “Polo”), further in view of U.S. Publication No. 2014/0369661 A1 (hereinafter “Partouche”), even further in view of U.S. Publication No. 2006/0034602 A1 (hereinafter “Fukui”), and even further in view of U.S. Publication No. 2017/0287200 A1 (hereinafter “Forutanpour”).
Regarding claim 6, Sheridan, Polo, Partouche, and Fukui disclose all of the limitations of claim 1, as outlined above. Sheridan, Polo, Partouche, and Fukui fail to expressly disclose characterized in that the first image processing module comprises a graphics processing unit (GPU).
However, Forutanpour teaches characterized in that the first image processing module comprises a graphics processing unit (GPU) ([0005], “graphics processing unit (GPU)”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have used a GPU, as taught by Forutanpour ([0005]), in Sheridan, Polo, Partouche, and Fukui’s invention. One would have been motivated to modify Sheridan, Polo, Partouche, and Fukui’s invention, by incorporating Forutanpour’s invention, to use a GPU because it is a processor optimized for processing images.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STUART D BENNETT whose telephone number is (571)272-0677. The examiner can normally be reached Monday - Friday from 9:00 AM to 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STUART D BENNETT/Examiner, Art Unit 2481