DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 2/1/2025 is in compliance with the provisions of 37 CFR 1.97 and is being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-9 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hutten; Curt, US 20140013228 A1 (hereafter Hutten), in view of RICHARDS; Mathew et al., US 20210182560 A1 (hereafter Richards), and further in view of Prosserman; Jeff et al., US 20160073013 A1 (hereafter Prosserman).
Regarding claim 1, “a display method, applied to a display side device, and the method comprising: receiving first data from a capturing side device; and displaying a first interface based on the first data, wherein the first interface comprises a preset layer and a radar map which is superimposed on a specified area of the preset layer, the preset layer is obtained based on a preset viewpoint image of a target scene, and the radar map indicates location information of a plurality of target objects in the target scene”: Hutten (para 26-32) teaches an experience feed interface 120 on a display side device coupled to a plurality of feed acquiring devices for receiving video content and related data, wherein the experience feed interface comprises a feed radar 170, which corresponds to the claimed radar map superimposed on a specified area of the preset layer in Hutten’s graphical user interface. Paragraphs 26-32 further disclose displaying the user interface based on data received from the feed devices, comprising experience dimensions, aggregated feeds, or experience policies, which can then be used to generate or calculate one or more metrics associated with an experience dimension and thereby provide a value associated with that dimension.
Although Hutten does not use the term “target scene,” based on the teachings of Richards in an analogous art, a person of ordinary skill in the art would have understood Hutten to read on a target scene in which the different cameras provide different viewpoints of the event associated with a map identifying object positions (i.e., analogous to a radar map) (see Richards para 26-27: images in the subset are determined as the images having the best mapping for objects in the scene associated with a particular event to be reviewed). Prosserman likewise teaches a multi-vantage-point invention comprising a device for viewing streaming video captured from multiple vantage points (Abstract; para 82, 108: utilizing device location data for radar navigation of which camera footage is being displayed on the screens).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the teachings of Hutten (a display side device coupled to a plurality of feed acquiring devices for receiving video content and related data, wherein the experience feed interface comprises a feed radar corresponding to a radar map superimposed on a specified area of a preset layer) by incorporating the known elements of Richards (displaying a graphical user interface that receives content and data from a plurality of cameras providing different viewpoints of an event, and displaying a map identifying object positions related to that event). Motivation is supplied by Prosserman, which teaches a multi-vantage-point invention comprising a device for viewing streaming video captured from multiple vantage points and utilizing device location data for radar navigation of the camera footage being displayed on the screens, in order to enable the end user to navigate different camera views of a particular event.
Regarding claim 2, “wherein the first data comprises encoded data used to display the preset layer and encoded data used to display the radar map; and wherein the displaying the first interface based on the first data comprises: displaying, on the specified area of the preset layer obtained through decoding, the radar map obtained through decoding” is further rejected on obviousness grounds as discussed in the rejection of claim 1. Hutten (para 73-76) discloses a feed radar providing a consolidated or simplified representation of feed data; the experience feed can include multiple experience levels (e.g., different angles, different focus, different time frames, different sound quality, etc.), and a user may wish to modify one or more of the experience levels presented to him. As such, user interface 400 can further comprise various controls that allow a user to interact with a remote experience system of the inventive subject matter to further customize the experience feed presented via the interface. The display can include feeds presented as overlays in a 3D model of at least a portion of the seating area, and if the user wishes to view feeds captured by devices located in different parts of the venue, the user can simply utilize the space shift control.
Regarding claim 3, “wherein the first data comprises encoded data used to display the preset layer and the location information of the plurality of target objects; and wherein the displaying the first interface based on the first data comprises: generating the radar map based on the location information of the plurality of target objects; and displaying the generated radar map on the specified area of the preset layer obtained through decoding” is further rejected on obviousness grounds as discussed in the rejection of claims 1-2. Hutten (para 85) discloses that an event can be represented as a data structure describing a characteristic of an event (e.g., an event object), as described in parent patent application No. 13/912,567, filed on Jun. 7, 2013. See also Richards (para 20, 26-27: images in the subset are determined as the images having the best mapping for objects in the scene associated with a particular event to be reviewed). See also Prosserman, disclosing a multi-vantage-point invention comprising a device for viewing streaming video captured from multiple vantage points (Abstract; para 82, 108: utilizing device location data for radar navigation of which camera footage is being displayed on the screens, and tagging human and inanimate objects).
Regarding claim 4, “wherein the first data further comprises personalized information corresponding to at least one specified target object; and the method further comprises: detecting a first user operation on the first interface, wherein the first user operation indicates to obtain personalized information of a first target object, and the first target object is any one of the at least one specified target object; and displaying a second interface in response to the first user operation, wherein the second interface comprises the personalized information of the first target object” is further rejected on obviousness grounds as discussed in the rejection of claims 1-3. Richards (para 22, 35-40, 52-54) teaches that objects involved in the event are mapped to images of the event captured by one or more of the rotatable cameras, and that those images are indicated to the end user for selection of objects of interest.
Regarding claim 5, “wherein the method further comprises: obtaining, from a preset database, the personalized information corresponding to the at least one specified target object; and associating the personalized information of the at least one specified target object with a corresponding specified target object in the radar map” is further rejected on obviousness grounds as discussed in the rejection of claims 1-4. Richards (para 48-57) discloses that stored predetermined camera parameters are accessed to personalize the video content displayed to the end user in relation to particular objects.
Regarding claim 6, “wherein based on the target scene being a game scene, and the specified target object is an athlete, the personalized information corresponding to the specified target object comprises one or a combination of the following information: information about the athlete, game information about the athlete, and game information about a team to which the athlete belongs” is further rejected on obviousness grounds as discussed in the rejection of claims 1-5. Richards (para 48-57) discloses that stored predetermined camera parameters are accessed to personalize the video content displayed to the end user in relation to particular objects, comprising particular player positions.
Regarding claim 7, “wherein that the preset layer is obtained based on the preset viewpoint image of the target scene comprises: the preset layer is obtained based on a first single viewpoint image, wherein the first single viewpoint image is any one of a plurality of free viewpoint images captured by the capturing side device; the preset layer is a multi-viewpoint stitched picture obtained by stitching based on a plurality of free viewpoint images; or the preset layer is a second single viewpoint image generated based on a plurality of free viewpoint images and a target virtual viewpoint” is further rejected on obviousness grounds as discussed in the rejection of claims 1-6. Hutten (para 73-76) discloses a feed radar providing a consolidated or simplified representation of feed data; the experience feed can include multiple experience levels (e.g., different angles, different focus, different time frames, different sound quality, etc.), and a user may wish to modify one or more of the experience levels presented to him. As such, user interface 400 can further comprise various controls that allow a user to interact with a remote experience system of the inventive subject matter to further customize the experience feed presented via the interface. The display can include feeds presented as overlays in a 3D model of at least a portion of the seating area, and if the user wishes to view feeds captured by devices located in different parts of the venue, the user can simply utilize the space shift control.
See also Richards (para 26-27: images in the subset are determined as the images having the best mapping for objects in the scene associated with a particular event to be reviewed). Prosserman likewise teaches a multi-vantage-point invention comprising a device for viewing streaming video captured from multiple vantage points (Abstract; para 82, 100-108: utilizing device location data for radar navigation of which camera footage is being displayed on the screens, and displaying panoramic content).
Regarding claim 8, “wherein the method further comprises: detecting a second user operation on the first interface, wherein the second user operation indicates to disable displaying of the radar map or hide displaying of the radar map; and displaying a third interface in response to the second user operation, wherein the third interface comprises the preset layer” is further rejected on obviousness grounds as discussed in the rejection of claims 1-7. Prosserman (para 87-90) teaches enabling an end user to toggle between graphical user interfaces, comprising removal of the radar map, as understood in the combined teachings of Hutten, Prosserman, and Richards discussed in the rejection of claim 1.
Regarding method claim 9 and display side device claims 15-20, these claims are grouped with and rejected together with method claims 1-8 because the steps of the method claims are met by the disclosure of the apparatus and methods of the references, as discussed in the rejection of claims 1-8, and because the steps of the method are readily converted into elements of a display side device and display method by one of ordinary skill in the art.
Claim(s) 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hutten; Curt, US 20140013228 A1 (hereafter Hutten), in view of RICHARDS; Mathew et al., US 20210182560 A1 (hereafter Richards), further in view of Prosserman; Jeff et al., US 20160073013 A1 (hereafter Prosserman), and further in view of Mate; Sujeet Shyamsundar et al., US 20160100110 A1 (hereafter Mate).
Regarding claim 10, “wherein the obtaining the first data based on the at least one free viewpoint image comprises: obtaining depth information of the target scene based on the at least one free viewpoint image; identifying a target object in the target scene based on the at least one free viewpoint image, and determining a feature identifier of the target object; determining the location information of the plurality of target objects in the target scene based on the depth information of the target scene; generating the radar map based on the feature identifier of the target object and the location information of the plurality of target objects; and obtaining the first data based on: encoded data that is obtained through encoding based on the at least one free viewpoint image and that is used to display the preset layer, and encoded data that is obtained through encoding based on the radar map and that is used to display the radar map”: the limitations of claim 10 parallel the limitations recited in claims 1-9; however, claims 1-9 do not recite “depth information” in relation to “obtaining depth information of the target scene based on the at least one free viewpoint image.”
In an analogous art, Mate teaches an invention for scene synthesis comprising obtaining a media presentation as seed media and obtaining one or more criteria for content extraction in order to return a matching high quality media presentation, comprising depth of field user preferences for inclusion of a hyperfocal video or a depth of field media stream that is closest to an object of interest (OOI). See Mate Abstract and para 113-116.
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify the teachings of Hutten, Richards, and Prosserman (a display side device coupled to a plurality of feed acquiring devices for receiving video content and related data, wherein the experience feed interface comprises a feed radar corresponding to a radar map superimposed on a specified area of a preset layer, together with displaying content and data from a plurality of cameras providing different viewpoints of an event and a map identifying object positions related to that event) by further incorporating the known depth of field elements of Mate’s scene synthesis invention (obtaining a media presentation as seed media and one or more criteria for content extraction in order to return a matching high quality media presentation, comprising depth of field user preferences for inclusion of a hyperfocal video or a depth of field media stream closest to an object of interest (OOI)), in order to enable the end user to more efficiently navigate different camera views of particular objects identified in a particular event.
Regarding claim 11, “wherein the obtaining the first data based on the at least one free viewpoint image comprises: obtaining depth information of the target scene based on the at least one free viewpoint image; determining the location information of the plurality of target objects in the target scene based on the depth information of the target scene; and obtaining the first data based on: the location information of the plurality of target objects, and encoded data that is obtained through encoding based on the at least one free viewpoint image and that is used to display the preset layer” is further rejected on obviousness grounds as discussed in the rejection of claims 1-10. Mate teaches identifying an object location as an object of interest in its scene synthesis invention, which comprises obtaining a media presentation as seed media and obtaining one or more criteria for content extraction in order to return a matching high quality media presentation, comprising depth of field user preferences for inclusion of a hyperfocal video or a depth of field media stream that is closest to an object of interest (OOI). See Mate Abstract and para 113-116.
Regarding claim 12, “wherein before the obtaining the first data, the method further comprises: obtaining, from a preset database, the personalized information corresponding to the at least one specified target object; and wherein the obtaining the first data further comprises: obtaining the first data based on the personalized information corresponding to the at least one specified target object” is further rejected on obviousness grounds as discussed in the rejection of claims 1-11. Richards (para 48-57) discloses that stored predetermined camera parameters are accessed to personalize the video content displayed to the end user in relation to particular objects. See also Mate, which teaches an invention for scene synthesis comprising obtaining a media presentation as seed media and obtaining one or more criteria for content extraction in order to return a matching high quality media presentation, comprising depth of field user preferences for inclusion of a hyperfocal video or a depth of field media stream that is closest to an object of interest (OOI). See Mate Abstract and para 78-79, 89, 97, 106, 109, 113-116, relating to a database associated with user preferences.
Regarding claim 13, “wherein based on the target scene being a game scene, and the specified target object is an athlete, the personalized information corresponding to the specified target object comprises one or a combination of the following information: information about the athlete, game information about the athlete, and game information about a team to which the athlete belongs” is further rejected on obviousness grounds as discussed in the rejection of claims 1-12. Richards (para 48-57) discloses that stored predetermined camera parameters are accessed to personalize the video content displayed to the end user in relation to particular objects, comprising particular player positions.
Regarding claim 14, “wherein that the preset layer is obtained based on the preset viewpoint image of the target scene comprises: the preset layer is obtained based on a first single viewpoint image, wherein the first single viewpoint image is any one of a plurality of free viewpoint images captured by the capturing side device; the preset layer is a multi-viewpoint stitched picture obtained by stitching based on a plurality of free viewpoint images; or the preset layer is a second single viewpoint image generated based on a plurality of free viewpoint images and a target virtual viewpoint” is further rejected on obviousness grounds as discussed in the rejection of claims 1-13. Hutten (para 73-76) discloses a feed radar providing a consolidated or simplified representation of feed data; the experience feed can include multiple experience levels (e.g., different angles, different focus, different time frames, different sound quality, etc.), and a user may wish to modify one or more of the experience levels presented to him. As such, user interface 400 can further comprise various controls that allow a user to interact with a remote experience system of the inventive subject matter to further customize the experience feed presented via the interface. The display can include feeds presented as overlays in a 3D model of at least a portion of the seating area, and if the user wishes to view feeds captured by devices located in different parts of the venue, the user can simply utilize the space shift control.
See also Richards (para 26-27: images in the subset are determined as the images having the best mapping for objects in the scene associated with a particular event to be reviewed). Prosserman likewise teaches a multi-vantage-point invention comprising a device for viewing streaming video captured from multiple vantage points (Abstract; para 82, 100-108: utilizing device location data for radar navigation of which camera footage is being displayed on the screens, and displaying panoramic content). See also Mate (para 56-58, 112), disclosing panoramic content for display.
CONCLUSION
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALFONSO CASTRO, whose telephone number is (571) 270-3950. The examiner can normally be reached Monday through Friday from 10am to 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Flynn, can be reached. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALFONSO CASTRO/Primary Examiner, Art Unit 2421