DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
2. Claims 1, 11, and 20 have been amended.
3. Claims 2-10 and 12-19 are as previously presented.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1, 3-8, 10-11, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Park (US 10818057 B2), hereinafter Park, in view of Peterson (US 8692849 B2), hereinafter Peterson, and further in view of Foutzitzis (US 20180150994 A1), hereinafter Foutzitzis.
Regarding claim 1, Park teaches a method, comprising: obtaining, using at least one image capture device, at least one panoramic image (Fig. 1, cameras 110 and 120, Col. 5 lines 7-28, wherein spherical content captured by the cameras is interpreted as a panoramic image); receiving an indication to add a real-time graphic to the at least one panoramic image before transmission to an end user (Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein patch images are defined as real-time graphics added to a panoramic image, and selecting and adding the patch to spherical content based on user input is interpreted as receiving an indication to add a real-time graphic); performing an autonomous insertion of the real-time graphic within the at least one panoramic image (Col. 18, lines 43-59, wherein patch images being automatically recommended and outputted for the spherical content is interpreted as autonomously inserting graphics within the panoramic image) by: generating a first sphere from the at least one panoramic image (Fig. 6, wherein spherical content viewing image 610 is interpreted as a first sphere; Col. 15 lines 12-20); generating a second sphere, wherein the generating a second sphere comprises adding the real-time graphic into the second sphere (Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein the top view image 630 onto which patches are added is interpreted as a second sphere); and generating a broadcast frame from the single model (Col. 21, lines 3-18, wherein the terminal electronic device outputting VR content based on a received image, defined as a panoramic image, is interpreted as generating a broadcast frame) by excerpting a region of interest from the single model and converting the region of interest to the broadcast frame (Col. 21, lines 3-29, wherein dividing and outputting at least a partial region of the provided image into a left-eye region and a right-eye region is interpreted as excerpting a region of interest from the single model, where the provided image is defined as a panoramic image).
Park does not teach obtaining at least one panoramic image of a live event production, or generating a single model by fusing the first sphere with the second sphere having the real-time graphic.
Peterson teaches generating a single model by fusing a panoramic image with a second image having the real-time graphic (Fig. 4, Col. 9 lines 34-49, wherein adjustment layers are interpreted as an image having real-time graphic edits, and saving the layers as a single panoramic image file is interpreted as fusing the images together).
Neither Park nor Peterson teaches obtaining at least one panoramic image of a live event production.
Foutzitzis teaches obtaining at least one panoramic image of a live event production (paragraphs 31-32, wherein the image capture device obtains a 360-degree field-of-view image, which is interpreted as a panoramic image, and live-streams the content for live monitoring of events).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Park to incorporate the teachings of Peterson for this method of generating and editing a spherical, panoramic image. Park discusses a method for capturing and editing spherical, 360-degree images by adding image patches to the spherical image, and provides a way to edit spherical content more easily. Peterson similarly discusses adding adjustment layers consisting of graphical edits on top of panoramic images to correct exposure, color balance, and other inconsistencies introduced by the stitching process, in order to create a more seamless image. Likewise, Foutzitzis discusses capturing spherical video images, rendering and displaying the spherical video, and livestreaming the content more efficiently. Because the spherical content described in Park, a spherical image created by stitching together images captured by various cameras, is also a type of panoramic image, it would have been obvious to incorporate the teachings of Peterson, which add a layer of graphical edits on top of an existing spherical panoramic image, with Park in order to create a more seamless and better-looking panoramic image. Similarly, the methods described in Foutzitzis for more efficiently livestreaming three-dimensional spherical images would have been obvious to incorporate with Park and Peterson in order to more effectively render the spherical, panoramic content described by both references. Therefore, it would have been obvious to combine these references.
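By way of illustration only, and not as a characterization of the cited references or of applicant's claimed method, the following minimal sketch (all function names and the equirectangular/gnomonic conventions are hypothetical choices for illustration) shows one way a first sphere carrying a captured panorama could be fused with a second sphere carrying an inserted graphic into a single model, from which a region of interest is excerpted and converted into a two-dimensional broadcast frame:

```python
# Illustrative sketch only; names and conventions are hypothetical, not taken from the cited references.
import numpy as np

def fuse_spheres(first_sphere_rgb, second_sphere_rgba):
    """Alpha-composite a graphic layer ('second sphere') over the captured
    panorama ('first sphere'); both are equirectangular H x W maps of a sphere."""
    alpha = second_sphere_rgba[..., 3:4] / 255.0
    fused = (1.0 - alpha) * first_sphere_rgb + alpha * second_sphere_rgba[..., :3]
    return fused.astype(np.uint8)  # the "single model"

def excerpt_roi(single_model, yaw, pitch, fov_deg, out_w=1280, out_h=720):
    """Excerpt a region of interest as a rectilinear (gnomonic) view and
    return it as a two-dimensional broadcast frame."""
    H, W, _ = single_model.shape
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)      # pinhole focal length
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    # Ray directions in camera space, rotated by yaw (about y) and pitch (about x).
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    dirs = dirs @ (Ry @ Rx).T
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])           # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))          # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)      # equirectangular lookup
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return single_model[v, u]                              # the broadcast frame
```

In this sketch the two spheres are stored as equirectangular maps, the graphic layer is alpha-composited over the panorama to form the single model, and the region of interest is excerpted by rectilinear reprojection of that model.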
Regarding claim 3, Park in view of Peterson and Foutzitzis discloses the method of claim 1. Additionally, Park teaches the method of claim 1, wherein the obtaining comprises obtaining a plurality of panoramic images and generating, from the plurality of panoramic images, a full three-dimensional image (Col. 6 line 47 – Col. 7 line 38, wherein the cameras having fisheye lenses, each capturing an image with an angle greater than or equal to 180 degrees, are interpreted as capturing a plurality of panoramic images, and stitching the images together into a globular shape is interpreted as generating a three-dimensional image from the plurality of panoramic images).
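By way of illustration only (a generic equidistant-fisheye model chosen for illustration, not Park's disclosed algorithm; all names are hypothetical), the sketch below shows one way two back-to-back 180-degree fisheye captures could be remapped into a single equirectangular panorama representing the globular image:

```python
# Illustrative sketch only; a generic equidistant fisheye model, not Park's disclosed method.
import numpy as np

def fisheye_pair_to_equirect(front, back, out_h=1024):
    """Map two back-to-back 180-degree fisheye images (square, equidistant projection)
    onto one equirectangular panorama of size out_h x 2*out_h."""
    out_w = 2 * out_h
    lon = (np.arange(out_w) / out_w) * 2 * np.pi - np.pi        # [-pi, pi)
    lat = (np.arange(out_h) / out_h) * np.pi - np.pi / 2        # [-pi/2, pi/2)
    lon, lat = np.meshgrid(lon, lat)
    # Unit ray for each output pixel; +z is the front lens axis.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    pano = np.zeros((out_h, out_w, 3), dtype=front.dtype)
    for img, sign in ((front, 1.0), (back, -1.0)):
        h = img.shape[0]
        axis_z = sign * z                                       # angle measured from this lens axis
        r = (2 / np.pi) * np.arccos(np.clip(axis_z, -1, 1))     # equidistant: radius proportional to angle
        theta = np.arctan2(y, sign * x)
        u = ((r * np.cos(theta) + 1) / 2 * (h - 1)).astype(int)
        v = ((r * np.sin(theta) + 1) / 2 * (h - 1)).astype(int)
        mask = axis_z >= 0                                      # this lens covers this hemisphere
        pano[mask] = img[np.clip(v, 0, h - 1)[mask], np.clip(u, 0, h - 1)[mask]]
    return pano
```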
Regarding claim 4, Park in view of Peterson and Foutzitzis discloses the method of claim 3. Additionally, Park teaches the method of claim 3, wherein the placing the real-time graphic comprises placing the real-time graphic between dynamic entities within the full three-dimensional image (Col. 12, lines 25-35; Col. 15 line 63 – Col. 16 line 1, wherein the structure image is defined as an undesired object captured in frame such as a tripod, camera, or hand, which is interpreted as a dynamic entity, and placing image patches on top of structure images is interpreted as placing real-time graphics on top of dynamic entities, which implies being able to place the graphics in between dynamic entities).
Regarding claim 5, Park in view of Peterson and Foutzitzis discloses the method of claim 1. Additionally, Park teaches the method of claim 1, wherein the adding the real-time graphic comprises a user adjusting at least one characteristic of the real-time graphic before placement within the second sphere (Fig. 6, Col. 15 line 63 – Col. 16 line 12, wherein an adjustment guide capable of adjusting a size and direction of the patch image, and adjusting and displaying the sizes and directions of the adjustment guide and patch image based on touch events, are interpreted as a user input adjusting characteristics of real-time graphics before placing them).
Regarding claim 6, Park in view of Peterson and Foutzitzis discloses the method of claim 1. Additionally, Park teaches the method of claim 1, wherein the adding the real-time graphic comprises automatically, using software, placing the real-time graphic within the second sphere and wherein the placing comprises automatically, using the software, adjusting at least one characteristic of the real-time graphic (Col. 14, lines 54-65, wherein automatically adjusting and outputting a patch image’s size and color is interpreted as automatically placing and adjusting characteristics of the real-time graphic).
Regarding claim 7, Park in view of Peterson and Foutzitzis discloses the method of claim 1. Additionally, Peterson teaches the method of claim 1, wherein the fusing comprises drawing graphics of the second sphere over the first sphere (Fig. 4, Col. 9 lines 34-49, wherein the adjustment layers 112 being layered on top of the image 102 on the layer stack is interpreted as drawing graphics of the second panoramic image layer on top of the first panoramic image layer).
The motivation to combine would be the same as that set forth for claim 1.
Regarding claim 8, Park in view of Peterson and Foutzitzis discloses the method of claim 1. Additionally, Park teaches the method of claim 1, wherein the real-time graphic is derived from information captured by one or more sensors (Fig. 6, Col. 15 line 63 – Col. 16 line 12, wherein the patch image being adjusted in response to touch input from a user is interpreted as real-time graphics being derived from information captured by sensors, wherein a touch input device is interpreted as a type of sensor).
Regarding claim 10, Park in view of Peterson and Foutzitzis discloses the method of claim 1. Additionally, Park teaches the method of claim 1, further comprising transmitting the broadcast frame to a user (Col. 21, lines 3-29, wherein the terminal electronic device outputting virtual VR content consisting of a partial region to a VR device is interpreted as transmitting a broadcast frame to a user, wherein the virtual VR content is content viewable by a user).
Regarding claim 11, Park teaches a system, comprising: at least one image capture device; a processor operatively coupled to the at least one image capture device; a memory device that stores instructions (Fig. 2, Col. 6 lines 42-59) that, when executed by the processor, cause the information handling device to: obtain, using at least one image capture device, at least one panoramic image (Fig. 1, cameras 110 and 120, Col. 5 lines 7-28, wherein spherical content captured by the cameras is interpreted as a panoramic image); receive an indication to add a real-time graphic to the at least one panoramic image before transmission to an end user (Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein patch images are defined as real-time graphics added to a panoramic image, and selecting and adding the patch to spherical content based on user input is interpreted as receiving an indication to add a real-time graphic); perform an autonomous insertion of the real-time graphic within the at least one panoramic image (Col. 18, lines 43-59, wherein patch images being automatically recommended and outputted for the spherical content is interpreted as autonomously inserting graphics within the panoramic image) by: generate a first sphere from the at least one panoramic image (Fig. 6, wherein spherical content viewing image 610 is interpreted as a first sphere; Col. 15 lines 12-20); generate a second sphere, wherein the generating a second sphere comprises adding the real-time graphic into the second sphere (Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein the top view image 630 onto which patches are added is interpreted as a second sphere); and generate a broadcast frame from the single model (Col. 21, lines 3-18, wherein the terminal electronic device outputting VR content based on a received image, defined as a panoramic image, is interpreted as generating a broadcast frame) by excerpting a region of interest from the single model and converting the region of interest to the broadcast frame (Col. 21, lines 3-29, wherein dividing and outputting at least a partial region of the provided image into a left-eye region and a right-eye region is interpreted as excerpting a region of interest from the single model, where the provided image is defined as a panoramic image).
Park does not teach obtaining at least one panoramic image of a live event production, or generating a single model by fusing the first sphere with the second sphere having the real-time graphic.
Peterson teaches generating a single model by fusing a panoramic image with a second image having the real-time graphic (Fig. 4, Col. 9 lines 34-49, wherein adjustment layers are interpreted as an image having real-time graphic edits, and saving the layers as a single panoramic image file is interpreted as fusing the images together).
Neither Park nor Peterson teaches obtaining at least one panoramic image of a live event production.
Foutzitzis teaches obtaining at least one panoramic image of a live event production (paragraphs 31-32, wherein the image capture device obtains a 360-degree field-of-view image, which is interpreted as a panoramic image, and live-streams the content for live monitoring of events).
The motivation to combine would be the same as that set forth for claim 1.
Regarding claim 13, Park in view of Peterson and Foutzitzis discloses the system of claim 11. Additionally, Park teaches the system of claim 11, wherein the obtaining comprises obtaining a plurality of panoramic images and generating, from the plurality of panoramic images, a full three-dimensional image (Col. 6 line 47 – Col. 7 line 38, wherein the cameras having fisheye lenses, each capturing an image with an angle greater than or equal to 180 degrees, are interpreted as capturing a plurality of panoramic images, and stitching the images together into a globular shape is interpreted as generating a three-dimensional image from the plurality of panoramic images).
Regarding claim 14, Park in view of Peterson and Foutzitzis discloses the system of claim 13. Additionally, Park teaches the system of claim 13, wherein the placing the real-time graphic comprises placing the real-time graphic between dynamic entities within the full three-dimensional image (Col. 12, lines 25-35; Col. 15 line 63 – Col. 16 line 1, wherein the structure image is defined as an undesired object captured in frame such as a tripod, camera, or hand, which is interpreted as a dynamic entity, and placing image patches on top of structure images is interpreted as placing real-time graphics on top of dynamic entities, which implies being able to place the graphics in between dynamic entities).
Regarding claim 15, Park in view of Peterson and Foutzitzis discloses the system of claim 11. Additionally, Park teaches the system of claim 11, wherein the adding the real-time graphic comprises a user adjusting at least one characteristic of the real-time graphic before placement within the second sphere (Fig. 6, Col. 15 line 63 – Col. 16 line 12, wherein an adjustment guide capable of adjusting a size and direction of the patch image, and adjusting and displaying the sizes and directions of the adjustment guide and patch image based on touch events, are interpreted as a user input adjusting characteristics of real-time graphics before placing them).
Regarding claim 16, Park in view of Peterson and Foutzitzis discloses the system of claim 11. Additionally, Park teaches the system of claim 11, wherein the adding the real-time graphic comprises automatically, using software, placing the real-time graphic within the second sphere and wherein the placing comprises automatically, using the software, adjusting at least one characteristic of the real-time graphic (Col. 14, lines 54-65, wherein automatically adjusting and outputting a patch image’s size and color is interpreted as automatically placing and adjusting characteristics of the real-time graphic).
Regarding claim 17, Park in view of Peterson and Foutzitzis discloses the system of claim 11. Additionally, Peterson teaches the system of claim 11, wherein the fusing comprises drawing graphics of the second sphere over the first sphere (Fig. 4, Col. 9 lines 34-49, wherein the adjustment layers 112 being layered on top of the image 102 on the layer stack is interpreted as drawing graphics of the second panoramic image layer on top of the first panoramic image layer).
The motivation to combine would be the same as that set forth for claim 1.
Regarding claim 18, Park in view of Peterson and Foutzitzis discloses the system of claim 11. Additionally, Park teaches the system of claim 11, wherein the real-time graphic is derived from information captured by one or more sensors (Fig. 6, Col. 15 line 63 – Col. 16 line 12, wherein the patch image being adjusted in response to touch input from a user is interpreted as real-time graphics being derived from information captured by sensors, wherein a touch input device is interpreted as a type of sensor).
Regarding claim 19, Park in view of Peterson and Foutzitzis discloses the system of claim 11. Additionally, Park teaches the system of claim 11, further comprising transmitting the broadcast frame to a user (Col. 21, lines 3-29, wherein the terminal electronic device outputting virtual VR content consisting of a partial region to a VR device is interpreted as transmitting a broadcast frame to a user, wherein the virtual VR content is content viewable by a user).
Regarding claim 20, Park teaches a product, comprising: a non-transitory computer-readable storage device that stores executable code (Fig. 2, Col. 6 lines 42-59) that, when executed by the processor, cause the product to: obtain, using at least one image capture device, at least one panoramic image (Fig. 1, cameras 110 and 120, Col. 5 lines 7-28, wherein spherical content captured by the cameras is interpreted as a panoramic image); receive an indication to add a real-time graphic to the at least one panoramic image before transmission to an end user (Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein patch images are defined as real-time graphics added to a panoramic image, and selecting and adding the patch to spherical content based on user input is interpreted as receiving an indication to add a real-time graphic); perform an autonomous insertion of the real-time graphic within the at least one panoramic image (Col. 18, lines 43-59, wherein patch images being automatically recommended and outputted for the spherical content is interpreted as autonomously inserting graphics within the panoramic image) by: generate a first sphere from the at least one panoramic image (Fig. 6, wherein spherical content viewing image 610 is interpreted as a first sphere; Col. 15 lines 12-20); generate a second sphere, wherein the generating a second sphere comprises adding the real-time graphic into the second sphere (Fig. 6, Col. 15 line 52 – Col. 16 line 12, wherein the top view image 630 onto which patches are added is interpreted as a second sphere); and generate a broadcast frame from the single model (Col. 21, lines 3-18, wherein the terminal electronic device outputting VR content based on a received image, defined as a panoramic image, is interpreted as generating a broadcast frame) by excerpting a region of interest from the single model and converting the region of interest to the broadcast frame (Col. 21, lines 3-29, wherein dividing and outputting at least a partial region of the provided image into a left-eye region and a right-eye region is interpreted as excerpting a region of interest from the single model, where the provided image is defined as a panoramic image).
Park does not teach obtaining at least one panoramic image of a live event production, or generating a single model by fusing the first sphere with the second sphere having the real-time graphic.
Peterson teaches generating a single model by fusing a panoramic image with a second image having the real-time graphic (Fig. 4, Col. 9 lines 34-49, wherein adjustment layers are interpreted as an image having real-time graphic edits, and saving the layers as a single panoramic image file is interpreted as fusing the images together).
Neither Park nor Peterson teaches obtaining at least one panoramic image of a live event production.
Foutzitzis teaches obtaining at least one panoramic image of a live event production (paragraphs 31-32, wherein the image capture device obtains a 360-degree field-of-view image, which is interpreted as a panoramic image, and live-streams the content for live monitoring of events).
The motivation to combine would be the same as that set forth for claim 1.
7. Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Peterson and Foutzitzis as applied to claims 1 and 11 above, and further in view of Gorstan (US 20130293671 A1), hereinafter Gorstan.
Regarding claim 2, Park in view of Peterson and Foutzitzis discloses the method of claim 1. Additionally, Gorstan teaches the method of claim 1, wherein the obtaining comprises obtaining metadata with the at least one panoramic image (Fig. 2, paragraphs 23-25, wherein extracting metadata from images, stitching the images into a panoramic image, and attaching the metadata to the panorama is interpreted as obtaining metadata with the panoramic image).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Park in view of Peterson and Foutzitzis to incorporate the teachings of Gorstan for this method of obtaining metadata with a panoramic image. Park discusses stitching and editing a panoramic, spherical image from multiple obtained images. Peterson also discusses stitching and editing a panoramic image, teaching methods to avoid stitching issues or visual artifacts in the final image. Similarly, Foutzitzis discusses rectifying a panoramic spherical image and generating a 3D rendering of images to display a full 360-degree view in a manner similar to stitching, and also mentions attaching metadata to livestreamed content. Gorstan likewise teaches a method for stitching together multiple photos to create a panoramic image, and discusses attaching metadata to the final panorama so that users have helpful data about the image, such as geographical information about where the image was taken or timestamps of when the photos were taken. Because all four references discuss stitching panoramas, and Gorstan discloses helpful use cases for attaching metadata to a panoramic image, it would have been obvious to combine these references.
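Solely by way of illustration (a generic sketch; the JSON-sidecar scheme and all names are hypothetical and not drawn from Gorstan's disclosed format), metadata extracted from the source images could be carried alongside a stitched panorama as follows:

```python
# Illustrative sketch only; the sidecar scheme and field names are hypothetical, not Gorstan's format.
import json
from datetime import datetime, timezone

def attach_panorama_metadata(source_image_metadata, panorama_path):
    """Aggregate per-image capture metadata (e.g., GPS, timestamps) extracted from the
    source frames and attach it to the stitched panorama as a JSON sidecar file."""
    sidecar = {
        "stitched_at": datetime.now(timezone.utc).isoformat(),
        "source_timestamps": [m.get("timestamp") for m in source_image_metadata],
        "gps": next((m["gps"] for m in source_image_metadata if "gps" in m), None),
    }
    sidecar_path = panorama_path + ".json"
    with open(sidecar_path, "w") as f:
        json.dump(sidecar, f, indent=2)
    return sidecar_path

# Example usage with hypothetical per-image metadata:
# attach_panorama_metadata(
#     [{"timestamp": "2025-10-31T12:00:00Z", "gps": {"lat": 38.80, "lon": -77.05}},
#      {"timestamp": "2025-10-31T12:00:01Z"}],
#     "stitched_panorama.jpg")
```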
Regarding claim 12, Park in view of Peterson and Foutzitzis discloses the system of claim 11. Additionally, Gorstan teaches the system of claim 11, wherein the obtaining comprises obtaining metadata with the at least one panoramic image (Fig. 2, paragraphs 23-25, wherein extracting metadata from images, stitching the images into a panoramic image, and attaching the metadata to the panorama is interpreted as obtaining metadata with the panoramic image).
The motivation to combine would be the same as that set forth for claim 2.
8. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Peterson and Foutzitzis as applied to claim 1 above, and further in view of Irie (US 9760974 B2), hereinafter Irie.
Regarding claim 9, Park in view of Peterson and Foutzitzis discloses the method of claim 1. Additionally, Irie teaches the method of claim 1, wherein the excerpted region of interest comprises a two-dimensional rectified region of interest (Fig. 10A-D; Col. 9 line 53 – Col. 10 line 27, wherein a predetermined area is interpreted as an excerpted region of interest, and a projection view of the predetermined area T represented by (x, y) coordinates is interpreted as a two-dimensional rectified region).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Park in view of Peterson and Foutzitzis to incorporate the teachings of Irie for this method of finding a two-dimensional rectified region of interest of a three-dimensional spherical image. Park and Peterson both disclose methods of stitching images to create panoramic images. Additionally, Foutzitzis discloses generating a 3D spherical, panoramic image from 2D circular images. Similarly, Irie also discloses a method of capturing a panoramic image. Irie further discusses displaying a rectified, two-dimensional area of a three-dimensional panoramic image, allowing a user to easily view and edit a portion of the panoramic image on a user device such as a smartphone. While Park also discusses a user viewing and editing a panoramic image on a smartphone or other user device, it only discloses displaying the entire three-dimensional panoramic image. Irie, on the other hand, allows a user to view both the full panoramic view and a zoomed-in, rectified view of the panoramic image, and provides a more intuitive way to change which portion of the panoramic image is being viewed. Because Irie discloses this intuitive and in-depth way of viewing a panoramic image, and Park, Peterson, and Foutzitzis already discuss ways of creating and viewing panoramic images, it would have been obvious to combine these references.
Response to Arguments
9. Applicant's arguments filed October 31, 2025, have been fully considered but they are not persuasive.
Applicant argues that neither Park, Peterson, nor any combination thereof teaches “receiving an indication to add a real-time graphic to the at least one panoramic image before transmission of the image to an end user,” as recited in claim 1.
Examiner respectfully disagrees. Examiner replies that, during patent examination, the pending claims must be given their broadest reasonable interpretation consistent with the specification. See MPEP § 2111. Also, it is improper to import claim limitations from the specification. See MPEP § 2111.01(II).
Additionally, Examiner states that Park teaches adding real-time graphics to a panoramic image, as seen in Fig. 6, where the patches that can be added onto a panoramic image are interpreted as real-time graphics, and as further taught in Col. 15 line 52 – Col. 16 line 12, wherein the patch images are added in response to user input, which suggests that the additions occur in real time. Further, Col. 18, lines 43-59, clearly state that a patch image may be automatically recommended and outputted based on information within the spherical content, which is interpreted as being able to autonomously insert graphics into the spherical panoramic image.
In conclusion, the rejections set forth in the previous Office action are shown to have been proper, and the claims are rejected above. To the extent that new citations and parenthetical remarks may be considered new grounds of rejection, such new grounds are necessitated by applicant’s amendments to the claims. Therefore, the present Office action is made final.
Conclusion
10. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN W YICK whose telephone number is (571)272-4063. The examiner can normally be reached M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JORDAN WAN YICK/Examiner, Art Unit 2612
/Said Broome/Supervisory Patent Examiner, Art Unit 2612