DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
2. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
3. Claim 20 is rejected under 35 U.S.C. 101 because claim 20 recites: “A computer-readable storage device”. However, the ordinary meaning of a computer-readable medium, as known in the art, covers both non-transitory forms of storage (CD-ROMs, hard drives, etc.) and transitory forms of storage (propagating signals, etc.). Claim 20 is therefore not statutory, because the recited computer-readable storage device covers both statutory and non-statutory subject matter. Claim 20 may, however, be amended to cover only statutory embodiments by reciting “A non-transitory computer-readable storage device that stores…”. Claims that recite nothing but the physical characteristics of a form of energy, such as a frequency, voltage, or the strength of a magnetic field, define energy or magnetism, per se, and as such are non-statutory natural phenomena. O’Reilly, 56 U.S. (15 How.) at 112-14. Moreover, it does not appear that a claim reciting a signal encoded with functional descriptive material falls within any of the categories of patentable subject matter set forth in § 101. First, a claimed signal is clearly not a “process” under § 101 because it is not a series of steps. The other three § 101 classes of machines, compositions of matter, and manufactures “relate to structural entities and can be grouped as ‘product’ claims in order to contrast them with process claims.” 1 D. Chisum, Patents § 1.02 (1994). The three product classes have traditionally required physical structure or material.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1, 3-9, 11, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Heinen (US 20200312029 A1), hereinafter Heinen, in view of Park (US 10818057 B2), hereinafter Park.
Regarding claim 1, Heinen teaches a method, comprising: obtaining, using at least one image capture device, at least one panoramic image of a scene (Fig. 29a, paragraph 344, a panoramic camera for shooting and obtaining panoramic images); receiving an indication to add a real-time graphic to the at least one panoramic image before transmission of the at least one panoramic image to an end user (paragraph 171, wherein scenes can be annotated with objects in real time, which is interpreted as adding real-time graphics to the image; Fig. 8e, paragraph 235, wherein a user can be notified of an annotation and view it, which is interpreted as the annotated graphics being added before being transmitted to an end user); placing the real-time graphic onto the at least one panoramic image (Fig. 8e, paragraph 235, wherein an annotation containing text and visuals can be added onto the scene, which is defined as a panoramic object); and verifying placement of the real-time graphic onto the at least one panoramic image by comparing a location of the real-time graphic within the at least one panoramic image with a location of a physical marker within the scene corresponding to the at least one panoramic image (paragraph 146, wherein real world objects can act as markers for virtual objects to be placed in a 360-degree scene, which is interpreted as a real-time graphic having a location corresponding to a location of a physical marker within the scene, and wherein having virtual objects be placed and displayed relative to the markers is interpreted as verifying placement of the real-time graphic by comparing its location to the marker in the panoramic image).
Heinen does not teach generating a broadcast frame from the at least one panoramic image having the real-time graphic, wherein the generating comprises identifying a region of interest within the at least one panoramic image having the real-time graphic and converting the region of interest to the broadcast frame.
Park teaches generating a broadcast frame from the at least one panoramic image having the real-time graphic (Col. 21, lines 3-18, wherein the terminal electronic device outputting VR content based off of a received image, defined as a panoramic image, is interpreted as generating a broadcast frame; Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein the panoramic image can have patch images added to it, which are interpreted as real-time graphics), wherein the generating comprises identifying a region of interest within the at least one panoramic image having the real-time graphic and converting the region of interest to the broadcast frame (Col. 21, lines 3-29, wherein the terminal electronic device may output content based on an image provided to divide and output at least a partial region of the image, where the provided image is defined as a panoramic image, which is interpreted as identifying a region of interest of a panoramic image and converting the region to the broadcast frame).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Heinen to incorporate the teachings of Park for this method of obtaining and modifying a panoramic image. Heinen discusses capturing and displaying a panoramic image and scene to multiple users simultaneously, and allowing a user to edit or annotate the scene and have the changes reflected to other users. Heinen additionally discusses using markers to align a panoramic scene with real-world objects, in order to more accurately match the virtual objects within the scene with the real world. Similarly, Park discusses a method for capturing and editing spherical, 360-degree images by adding image patches, provides a method for editing spherical content more easily, and discusses inserting patch images based on a detected central point of the scene, which is interpreted as being similar to inserting a graphic based on a detected marker. As both references discuss ways for users to edit the content of 360-degree, panoramic images, and Park additionally discusses methods for effectively transmitting that content to devices such as VR headsets, it would have been obvious to combine them.
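For illustration only (this sketch is not part of the record and is not drawn from either reference; the function name and pixel representation are hypothetical), the claimed identification of a region of interest within a panoramic image and its conversion to a broadcast frame can be pictured as a windowed crop of an equirectangular panorama, with wrap-around at the 360-degree seam:

```python
def extract_roi(panorama, top, left, height, width):
    """Excerpt a region of interest from an equirectangular panorama.

    `panorama` is a row-major 2D grid of pixels. Columns wrap around,
    since a 360-degree image is continuous at its vertical seam; the
    returned grid could then serve as a conventional broadcast frame.
    """
    pano_w = len(panorama[0])
    return [
        [panorama[top + r][(left + c) % pano_w] for c in range(width)]
        for r in range(height)
    ]
```

A region whose left edge sits near the seam simply wraps to the first columns of the panorama, which is the behavior that distinguishes a 360-degree crop from an ordinary rectangular crop.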
Regarding claim 3, Heinen in view of Park discloses the method of claim 1. Additionally, Heinen teaches the method of claim 1, comprising adjusting the placement of the real-time graphic responsive to the verifying (paragraph 146, wherein placing objects within the scene relative to physical markers via an interaction method is interpreted as adjusting the placement of the real-time objects in response to verifying their placement compared to a physical marker).
Regarding claim 4, Heinen in view of Park discloses the method of claim 1. Additionally, Park teaches the method of claim 1, wherein the physical marker is not included in the at least one panoramic image (Col. 12, lines 25-35, wherein a structure image corresponding to a central point of the spherical image, where a structure image is defined as a physical structure in the image such as a tripod or hand, is interpreted as a physical marker, and the structure being covered up by a patch image suggests it is not included in the panoramic image).
The motivation to combine would be the same as that set forth for claim 1.
Regarding claim 5, Heinen in view of Park discloses the method of claim 1. Additionally, Heinen teaches the method of claim 1, wherein the obtaining comprises obtaining metadata with the at least one panoramic image (paragraph 357, wherein capturing content like 360-degree images may include obtaining additional meta information).
Regarding claim 6, Heinen in view of Park discloses the method of claim 1. Additionally, Heinen teaches the method of claim 1, wherein the obtaining comprises obtaining a plurality of panoramic images and generating, from the plurality of panoramic images, a full three-dimensional image (Fig. 29a, paragraph 344, wherein the camera obtains multiple images which are stitched together to form a 360-degree panoramic image, which is interpreted as a full three-dimensional image).
Regarding claim 7, Heinen in view of Park discloses the method of claim 1. Additionally, Heinen teaches the method of claim 1, wherein the placing the real-time graphic comprises placing the real-time graphic between dynamic entities within the at least one panoramic image (paragraphs 311-312, wherein moving objects within a scene can be tracked, which is interpreted as dynamic entities, and which suggests that real-time graphics placed within a panoramic scene can be placed between dynamic entities).
Regarding claim 8, Heinen in view of Park discloses the method of claim 1. Additionally, Park teaches the method of claim 1, wherein the adding the real-time graphic comprises adjusting at least one characteristic of the real-time graphic before placement within the at least one panoramic image (Col. 13, lines 8-18, wherein a patch image being applied to spherical content is interpreted as a real-time graphic being added to a panoramic image, and wherein the patch image may be adjusted before being applied to the image).
The motivation to combine would be the same as that set forth for claim 1.
Regarding claim 9, Heinen in view of Park disclose the method of claim 1. Additionally, Heinen teaches the method of claim 1, wherein the real-time graphic is derived from information captured by one or more sensors (Fig. 29c, paragraph 346, wherein a 3D scanner containing sensors is used to scan and generate object models that can be inputted into a panoramic scene; paragraph 146, wherein virtual objects from the 3D scanner can be inserted into the scene).
Regarding claim 11, Heinen teaches a system, comprising: at least one image capture device; a processor operatively coupled to the at least one image capture device; a memory device that stores instructions (Fig. 1A, computing systems including a processor and memory) that, when executed by the processor, cause the information handling device to: obtain, using the at least one image capture device, at least one panoramic image of a scene (Fig. 29a, paragraph 344, a panoramic camera for shooting and obtaining panoramic images); receive an indication to add a real-time graphic to the at least one panoramic image before transmission of the at least one panoramic image to an end user (paragraph 171, wherein scenes can be annotated with objects in real time, which is interpreted as adding real-time graphics to the image; Fig. 8e, paragraph 235, wherein a user can be notified of an annotation and view it, which is interpreted as the annotated graphics being added before being transmitted to an end user); place the real-time graphic onto the at least one panoramic image (Fig. 8e, paragraph 235, wherein an annotation containing text and visuals can be added onto the scene, which is defined as a panoramic object); and verify placement of the real-time graphic onto the at least one panoramic image by comparing a location of the real-time graphic within the at least one panoramic image with a location of a physical marker within the scene corresponding to the at least one panoramic image (paragraph 146, wherein real world objects can act as markers for virtual objects to be placed in a 360-degree scene, which is interpreted as a real-time graphic having a location corresponding to a location of a physical marker within the scene, and wherein having virtual objects be placed and displayed relative to the markers is interpreted as verifying placement of the real-time graphic by comparing its location to the marker in the panoramic image).
Heinen does not teach generating a broadcast frame from the at least one panoramic image having the real-time graphic, wherein the generating comprises identifying a region of interest within the at least one panoramic image having the real-time graphic and converting the region of interest to the broadcast frame.
Park teaches generating a broadcast frame from the at least one panoramic image having the real-time graphic (Col. 21, lines 3-18, wherein the terminal electronic device outputting VR content based off of a received image, defined as a panoramic image, is interpreted as generating a broadcast frame; Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein the panoramic image can have patch images added to it, which are interpreted as real-time graphics), wherein the generating comprises identifying a region of interest within the at least one panoramic image having the real-time graphic and converting the region of interest to the broadcast frame (Col. 21, lines 3-29, wherein the terminal electronic device may output content based on an image provided to divide and output at least a partial region of the image, where the provided image is defined as a panoramic image, which is interpreted as identifying a region of interest of a panoramic image and converting the region to the broadcast frame).
The motivation to combine would be the same as that set forth for claim 1.
Regarding claim 13, Heinen in view of Park discloses the system of claim 11. Additionally, Heinen teaches the system of claim 11, comprising adjusting the placement of the real-time graphic responsive to the verifying (paragraph 146, wherein placing objects within the scene relative to physical markers via an interaction method is interpreted as adjusting the placement of the real-time objects in response to verifying their placement compared to a physical marker).
Regarding claim 14, Heinen in view of Park discloses the system of claim 11. Additionally, Park teaches the system of claim 11, wherein the physical marker is not included in the at least one panoramic image (Col. 12, lines 25-35, wherein a structure image corresponding to a central point of the spherical image, where a structure image is defined as a physical structure in the image such as a tripod or hand, is interpreted as a physical marker, and the structure being covered up by a patch image suggests it is not included in the panoramic image).
The motivation to combine would be the same as that set forth for claim 1.
Regarding claim 15, Heinen in view of Park discloses the system of claim 11. Additionally, Heinen teaches the system of claim 11, wherein the obtaining comprises obtaining metadata with the at least one panoramic image (paragraph 357, wherein capturing content like 360-degree images may include obtaining additional meta information).
Regarding claim 16, Heinen in view of Park discloses the system of claim 11. Additionally, Heinen teaches the system of claim 11, wherein the obtaining comprises obtaining a plurality of panoramic images and generating, from the plurality of panoramic images, a full three-dimensional image (Fig. 29a, paragraph 344, wherein the camera obtains multiple images which are stitched together to form a 360-degree panoramic image, which is interpreted as a full three-dimensional image).
Regarding claim 17, Heinen in view of Park discloses the system of claim 11. Additionally, Heinen teaches the system of claim 11, wherein the placing the real-time graphic comprises placing the real-time graphic between dynamic entities within the at least one panoramic image (paragraphs 311-312, wherein moving objects within a scene can be tracked, which is interpreted as dynamic entities, and which suggests that real-time graphics placed within a panoramic scene can be placed between dynamic entities).
Regarding claim 18, Heinen in view of Park discloses the system of claim 11. Additionally, Park teaches the system of claim 11, wherein the adding the real-time graphic comprises adjusting at least one characteristic of the real-time graphic before placement within the at least one panoramic image (Col. 13, lines 8-18, wherein a patch image being applied to spherical content is interpreted as a real-time graphic being added to a panoramic image, and wherein the patch image may be adjusted before being applied to the image).
The motivation to combine would be the same as that set forth for claim 1.
Regarding claim 19, Heinen in view of Park disclose the system of claim 11. Additionally, Heinen teaches the system of claim 11, wherein the real-time graphic is derived from information captured by one or more sensors (Fig. 29c, paragraph 346, wherein a 3D scanner containing sensors is used to scan and generate object models that can be inputted into a panoramic scene; paragraph 146, wherein virtual objects from the 3D scanner can be inserted into the scene).
Regarding claim 20, Heinen teaches a product, comprising: a computer-readable storage device that stores executable code that, when executed by a processor (Fig. 1A, computing systems including a processor and memory that run software programs), causes the product to: obtain, using the at least one image capture device, at least one panoramic image of a scene (Fig. 29a, paragraph 344, a panoramic camera for shooting and obtaining panoramic images); receive an indication to add a real-time graphic to the at least one panoramic image before transmission of the at least one panoramic image to an end user (paragraph 171, wherein scenes can be annotated with objects in real time, which is interpreted as adding real-time graphics to the image; Fig. 8e, paragraph 235, wherein a user can be notified of an annotation and view it, which is interpreted as the annotated graphics being added before being transmitted to an end user); place the real-time graphic onto the at least one panoramic image (Fig. 8e, paragraph 235, wherein an annotation containing text and visuals can be added onto the scene, which is defined as a panoramic object); and verify placement of the real-time graphic onto the at least one panoramic image by comparing a location of the real-time graphic within the at least one panoramic image with a location of a physical marker within the scene corresponding to the at least one panoramic image (paragraph 146, wherein real world objects can act as markers for virtual objects to be placed in a 360-degree scene, which is interpreted as a real-time graphic having a location corresponding to a location of a physical marker within the scene, and wherein having virtual objects be placed and displayed relative to the markers is interpreted as verifying placement of the real-time graphic by comparing its location to the marker in the panoramic image).
Heinen does not teach generating a broadcast frame from the at least one panoramic image having the real-time graphic, wherein the generating comprises identifying a region of interest within the at least one panoramic image having the real-time graphic and converting the region of interest to the broadcast frame.
Park teaches generating a broadcast frame from the at least one panoramic image having the real-time graphic (Col. 21, lines 3-18, wherein the terminal electronic device outputting VR content based off of a received image, defined as a panoramic image, is interpreted as generating a broadcast frame; Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein the panoramic image can have patch images added to it, which are interpreted as real-time graphics), wherein the generating comprises identifying a region of interest within the at least one panoramic image having the real-time graphic and converting the region of interest to the broadcast frame (Col. 21, lines 3-29, wherein the terminal electronic device may output content based on an image provided to divide and output at least a partial region of the image, where the provided image is defined as a panoramic image, which is interpreted as identifying a region of interest of a panoramic image and converting the region to the broadcast frame).
The motivation to combine would be the same as that set forth for claim 1.
7. Claims 2, 12 are rejected under 35 U.S.C. 103 as being unpatentable over Heinen in view of Park as applied to claims 1, 11 above, and further in view of Peterson (US 8692849 B2), hereinafter Peterson.
Regarding claim 2, Heinen in view of Park discloses the method of claim 1. Additionally, Park teaches generating a first sphere from the at least one panoramic image (Fig. 6, wherein spherical content viewing image 610 is interpreted as a first sphere; Col. 15 lines 12-20); and generating a second sphere, wherein the generating a second sphere comprises adding the real-time graphic into the second sphere (Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein the top view image 630, onto which patches are added, is interpreted as a second sphere).
Neither Heinen nor Park teaches generating a single model by fusing the first sphere with the second sphere having the real-time graphic.
Peterson teaches generating a single model by fusing a panoramic image with a second image having the real-time graphic (Fig. 4, Col. 9 lines 34-49, wherein adjustment layers are interpreted as an image having real-time graphic edits, and saving the layers as a single panoramic image file is interpreted as fusing the images together).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Heinen in view of Park to incorporate the teachings of Peterson for this method of generating and editing a spherical, panoramic image. Heinen discusses generating a 3D panoramic scene by using a camera to capture and stitch together images, as well as a method for users to add text and visuals to the scene. Park discusses a method for capturing and editing spherical, 360-degree images by adding image patches to the spherical image, and provides a method of editing spherical content more easily. Peterson, similarly, discusses adding adjustment layers consisting of graphical edits on top of panoramic images, in order to adjust factors such as exposure, color balance, and other inconsistencies introduced by the stitching process used to create a panoramic image, resulting in a more seamless image. Seeing as Heinen and Park both discuss generating spherical images by stitching together images captured by various cameras and editing their content, it would have been obvious to incorporate the teachings of Peterson, adding a layer of graphical edits on top of an existing spherical panoramic image and fusing the two into a single image in order to create a more seamless and better-looking panoramic image.
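For illustration only (this sketch is not part of the record and is not drawn from any cited reference; the function name and grayscale-float pixel model are hypothetical), the fusing of a base panoramic layer with a second layer carrying graphic edits into a single result can be pictured as a per-pixel alpha blend:

```python
def fuse_layers(base, overlay, alpha):
    """Blend an overlay layer (e.g., graphic or adjustment edits) into a
    base image, producing a single fused image.

    Both inputs are same-sized 2D grids of grayscale floats; `alpha` is
    the overlay's opacity in [0, 1] (0 keeps the base, 1 keeps the overlay).
    """
    return [
        [(1 - alpha) * b + alpha * o for b, o in zip(base_row, over_row)]
        for base_row, over_row in zip(base, overlay)
    ]
```

Saving the blended result as one file, rather than keeping the layers separate, corresponds to the "single model" notion discussed above.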
Regarding claim 12, Heinen in view of Park discloses the system of claim 11. Additionally, Park teaches generating a first sphere from the at least one panoramic image (Fig. 6, wherein spherical content viewing image 610 is interpreted as a first sphere; Col. 15 lines 12-20); and generating a second sphere, wherein the generating a second sphere comprises adding the real-time graphic into the second sphere (Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein the top view image 630, onto which patches are added, is interpreted as a second sphere).
Neither Heinen nor Park teaches generating a single model by fusing the first sphere with the second sphere having the real-time graphic.
Peterson teaches generating a single model by fusing a panoramic image with a second image having the real-time graphic (Fig. 4, Col. 9 lines 34-49, wherein adjustment layers are interpreted as an image having real-time graphic edits, and saving the layers as a single panoramic image file is interpreted as fusing the images together).
The motivation to combine would be the same as that set forth for claim 2.
8. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Heinen in view of Park as applied to claim 1 above, and further in view of Irie (US 9760974 B2), hereinafter Irie.
Regarding claim 10, Heinen in view of Park discloses the method of claim 1. Additionally, Irie teaches the method of claim 1, wherein the excerpted region of interest comprises a two-dimensional rectified region of interest (Fig. 10A-D; Col. 9 line 53 - Col. 10 line 27, wherein a predetermined area is interpreted as an excerpted region of interest, and a projection view of the predetermined area T, represented by (x, y) coordinates, is interpreted as a two-dimensional rectified region).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Heinen in view of Park to incorporate the teachings of Irie for this method of finding a two-dimensional rectified region of interest within a three-dimensional spherical image. Heinen and Park both disclose methods of capturing and stitching images to create panoramic images, as well as methods for editing that image. Similarly, Irie also discloses a method of capturing a panoramic image, and discusses displaying a rectified, two-dimensional area of a three-dimensional panoramic image, allowing users to easily view and edit a portion of the panoramic image on a user device. As all three references discuss ways for a user to edit a panoramic image, and Irie discloses a more in-depth way of viewing specific regions of a panoramic image for easier editing, it would have been obvious to combine these references.
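For illustration only (this sketch is not part of the record and is not drawn from Irie; the function name and parameters are hypothetical), the rectification of a spherical region into flat (x, y) coordinates can be pictured as a gnomonic (perspective) projection of directions on the viewing sphere onto an image plane:

```python
import math

def rectify_point(yaw, pitch, focal=1.0):
    """Project a direction on the viewing sphere (yaw/pitch in radians,
    measured relative to the view center) onto a flat, rectified image
    plane at distance `focal` -- a gnomonic (perspective) projection.
    """
    # Unit direction vector of the point on the sphere.
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    if z <= 0:
        raise ValueError("point lies behind the image plane")
    # Intersect the viewing ray with the plane z = focal.
    return (focal * x / z, focal * y / z)
```

The view center maps to the origin of the rectified image, and straight lines in the scene remain straight, which is what makes such a projection convenient for viewing and editing a portion of a spherical image on an ordinary display.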
Conclusion
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN W YICK whose telephone number is (571)272-4063. The examiner can normally be reached M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JORDAN WAN YICK/Examiner, Art Unit 2612
/Said Broome/Supervisory Patent Examiner, Art Unit 2612