DETAILED ACTION
This Office action is in response to the amendment/argument filed 12/10/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The application claims priority under 35 U.S.C. 119(a)-(d) to foreign application JP2022-073894, filed 04/27/2022. Certified copies of the papers required by 37 CFR 1.55 have been received. Priority is acknowledged.
Response to Arguments
Applicant’s arguments, see pg. 6, filed 12/10/2025, with respect to the objection to claims 1 and 16 have been fully considered and are persuasive. The objection to claims 1 and 16 has been withdrawn. However, one of the minor grammatical issues of claim 17 has not been addressed; therefore, the objection to claim 17 is maintained.
Applicant's arguments, filed 12/10/2025, with respect to the rejection of independent claims 1, 16, and 17 under 35 U.S.C. 103 have been fully considered but they are not persuasive.
In particular, the applicant argues that the combination of Okutani in view of Maruyama does not teach the limitation added to the amended claims 1, 16, and 17: “wherein the first image and the second image are displayed concurrently in the first display area such that a user can selectively enlarge either image to the second display area.”
The broadest reasonable interpretation of this limitation includes: (1) a “first display area” displaying at least two images, possibly more; (2) a “second display area”; (3) the ability for a user to select one of the at least two images displayed in the first display area, whereupon the selected image will be displayed at a larger size in the second display area.
Importantly, the wording of the limitation does not require the selected image to also remain displayed in the first display area after being enlarged to the second display area.
Okutani teaches all of the elements of this limitation. For clarity, the following claim mappings to Okutani are applied consistently throughout claims 1, 16, and 17, and remain unchanged for the added limitation:
Fig. 4A: the row of small displays 403-406 corresponds to the “first display area”
Fig. 4A: the large display 402 corresponds to the “second display area”
Any two of the virtual viewpoint images displayed in displays 403-406 may correspond to the “first image” and “second image”
As described in Okutani [0084], multiple virtual viewpoint images are displayed concurrently in displays 403-406, including the “first image” and “second image”. A user may select any of these images, including the “first image” or “second image”, whereupon the selected image will be “selectively enlarged” by being moved to the large display 402.
Furthermore, Maruyama teaches switching between the real camera feed and a virtual viewpoint image. By applying the teachings of Maruyama to the invention of Okutani, one of ordinary skill in the art would have found it obvious to substitute one of the virtual viewpoint images among displays 403-406 (the “second image”) with the real camera feed of Maruyama, resulting in the claimed invention.
Therefore, the rejection of claims 1, 16, and 17 under 35 U.S.C. 103 is maintained.
Claim Objections
Claim 17 is objected to because of the following informalities: "a control method of a image processing system" should be changed to "a control method of an image processing system". Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Okutani (US 20200105174 A1) in view of Maruyama (US 20200402281 A1).
Regarding claim 1, Okutani teaches an image processing system ([0136] “Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus…”) comprising: one or more memories storing instructions ([0136] “…that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s)”), and one or more processors that execute the instructions ([0136] “The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.”) to:
control display of a first image corresponding to a virtual viewpoint image associated with a first side of digital content of three-dimensional shape, and a second image associated with a second side of the digital content (fig. 4B shows an example of the arrangement of multiple virtual viewpoints, where each virtual viewpoint observes the same space from a different angle, [0086] explains further; [0090] describes how a virtual viewpoint image of a sports field may be generated using simplified three-dimensional models of the field, stand, players, and ball to save computational power), in a first display area (fig. 4A elements 403-406, lower display area displays multiple virtual viewpoint images corresponding to the multiple virtual viewpoints, detailed in [0062] and [0067]), wherein the virtual viewpoint image is generated based on a virtual camera (fig. 4B shows the placement of virtual cameras, [0086] “As shown in FIG. 4B, virtual viewpoints 408 to 413 are set so as to observe a soccer field 407 at various positions in the field 407 from various directions, and the positions, directions, and angles of view of the virtual viewpoints are managed, as shown in FIG. 5. Assume, for example, that the position of each virtual viewpoint is represented by a three-dimensional position in a coordinate system defined for the field 407, the direction of each virtual viewpoint is represented by pan, tilt, and roll values, and the angle of view of each virtual viewpoint is represented by the value of the horizontal angle of view of the virtual viewpoint.”) and a plurality of captured images obtained from a plurality of real cameras ([0008] “wherein the virtual viewpoint image corresponding to the virtual viewpoint selected as a position and direction operation target among the plurality of virtual viewpoints is generated based on a plurality of captured images obtained by capturing an image capturing target region by a plurality of cameras”);
obtain an input to select either the first image or the second image displayed in the first display area (fig. 3 step S309 “change selected switching virtual viewpoint to selected virtual viewpoint”, [0062]-[0063] explains how one virtual viewpoint is the “selected virtual viewpoint” and the rest are “switching virtual viewpoints”, and a user may use the operation unit 107 to select a switching virtual viewpoint to become the selected virtual viewpoint; [0067] explains that the images associated with the switching virtual viewpoints are displayed in the row of smaller displays 403-406 identified as the “first display area”, where any two of the switching virtual viewpoints may correspond to the claimed “first image” and “second image”); and
control, in a case where the input to select the first image is obtained, display of the virtual viewpoint image in a second display area, and control, in a case where the input to select the second image is obtained, display of the corresponding image in the second display area (fig. 4A large display 402 “second display area” displays the virtual viewpoint image associated with the selected virtual viewpoint, and when a new virtual viewpoint is selected from the switching virtual viewpoints 403-406, it is displayed in the display 402: [0067] “For example, as shown in FIG. 4A, the virtual viewpoint image attached with the viewpoint ID corresponding to the selected virtual viewpoint among the virtual viewpoint images received by the reception unit 203 is displayed in a display region 402 in a display region 401 of the display unit 105”; [0084] “the user can look for a desired viewpoint to obtain a desired virtual viewpoint image with reference to the display regions 403 to 406. If the virtual viewpoint image of the desired viewpoint is found, the virtual viewpoint image is displayed in the display region 402 by designating selection of the virtual viewpoint image, thereby making it possible to operate the virtual viewpoint.”; fig. 3 step S307 “display virtual viewpoint image of selected virtual viewpoint” will display a newly selected virtual viewpoint image if previously updated in step S309), and
wherein the first image and the second image are displayed concurrently in the first display area such that a user can selectively enlarge either image to the second display area (fig. 4A, row of smaller displays 403-406 corresponds to the claimed “first display area”, any two of the switching virtual viewpoints displayed concurrently in displays 403-406 may correspond to the claimed “first image” and “second image”; [0084] describes how a user may select any of the switching virtual viewpoint images in 403-406, including the “first image” and “second image”, to be displayed in the large display 402 (“second display area”)).
Okutani does not teach that the second image corresponds to a captured image obtained from a real camera, that the virtual viewpoint image is generated based on a virtual camera and a plurality of captured images obtained from a plurality of real cameras different from the real camera, or that the second image corresponds to a captured image obtained from a real camera that is not among the plurality of real cameras used to generate the virtual viewpoint image.
Maruyama teaches to control display of a first image corresponding to a virtual viewpoint image and a second image corresponding to a captured image obtained from a real camera (Abstract “An image processing apparatus includes: a first identifying unit configured to identify image-capturing conditions concerning a position and an orientation of an image-capturing apparatus which obtains a captured image of an image-capturing target region; a second identifying unit configured to identify viewpoint conditions concerning a position and an orientation of a virtual viewpoint for a virtual viewpoint image generated based on a plurality of images of the image-capturing target region obtained by a plurality of the image-capturing apparatuses at different positions; and a display control unit configured to allow a display apparatus to display information indicating a degree of match between the identified image-capturing conditions and the identified viewpoint conditions before an image presented to a viewer is switched between the captured image and the virtual viewpoint image.”),
where the virtual viewpoint image is generated based on a virtual camera and a plurality of captured images obtained from a plurality of real cameras different from the real camera ([0023] “Furthermore, the virtual camera operation unit 102 notifies the virtual camera switching section control unit 106 of a request for switching between the actual camera and the virtual camera and the execution of the switching. Here, the actual camera is a broadcast camera, a drone camera, a multi-viewpoint image-capturing camera, or the like, and the actual camera image is an image captured by the actual camera”; “broadcast camera” and “drone camera” are listed separately from “multi-viewpoint image-capturing camera”, providing two embodiments in which the “real camera” used to capture the live feed is distinct from the “multi-viewpoint image-capturing” virtual viewpoint generating cameras), and
wherein the second image corresponds to a captured image obtained from a real camera that is not among the plurality of real cameras used to generate the virtual viewpoint image ([0023] “Here, the actual camera is a broadcast camera, a drone camera, a multi-viewpoint image-capturing camera, or the like, and the actual camera image is an image captured by the actual camera”; “broadcast camera” and “drone camera” are listed separately from “multi-viewpoint image-capturing camera”, providing two embodiments in which the “real camera” used to capture the live feed is distinct from the “multi-viewpoint image-capturing” virtual viewpoint generating cameras).
Okutani and Maruyama are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the virtual viewpoint switching interface of Okutani with the teachings of Maruyama to allow a user to switch to an actual camera feed as well as various virtual viewpoints. The motivation would have been to give a sports broadcaster additional camera options; there may be situations where a live camera is more appropriate, such as a more personal close-up on a player (Maruyama [0003]).
Regarding claim 16, its limitations (or trivial variations thereof) are recited by claim 1; therefore, it is rejected using the same references, rationale, and motivation to combine described in the rejection of claim 1.
Regarding claim 17, its limitations (or trivial variations thereof) are recited by claim 1; therefore, it is rejected using the same references, rationale, and motivation to combine described in the rejection of claim 1, with the addition of the limitation “a non-transitory computer readable storage medium storing a computer program for causing a computer to execute a control method of a image processing system,” which is taught by Okutani ([0136] “Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s)”).
Claims 18, 7, 11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Okutani (US 20200105174 A1) in view of Maruyama (US 20200402281 A1) as applied to claim 1 above, and further in view of Matsubayashi et al. (US 20210014425 A1, hereinafter "Matsubayashi").
Regarding claim 18, the combination of Okutani in view of Maruyama teaches the image processing system according to claim 1, but does not explicitly teach wherein the captured image associated with the second side of the digital content includes an object, and wherein the virtual viewpoint image associated with the first side of the digital content includes a three-dimensional model of the object.
Matsubayashi teaches wherein the captured image associated with the second side of the digital content includes an object, and wherein the virtual viewpoint image associated with the first side of the digital content includes a three-dimensional model of the object (fig. 12 shows multiple virtual viewpoints 102 focused on a target point 103 from different sides, fig. 13A-C shows the virtual viewpoint images showing multiple sides of the target point, and that the target point is positioned on a person (“the object”), [0042]-[0045] describes the process of selecting a target point for the virtual camera based on an object’s position, [0026] describes how virtual viewpoint images are generated using three-dimensional models of the foreground objects located within the imaging region).
Matsubayashi and the combination of Okutani and Maruyama are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the image processing system of Okutani in view of Maruyama to incorporate the teachings of Matsubayashi to focus both the virtual viewpoint position and the real camera position on a person. The motivation would have been to improve a sports broadcast by following a specific player on the field.
Importantly, when the real camera of Okutani in view of Maruyama is focused on a player according to the teachings of Matsubayashi, the resulting captured image necessarily contains “an object” as claimed, rather than the three-dimensional model of the object described by Matsubayashi. On the other hand, when the virtual viewpoint of Okutani in view of Maruyama is focused on a player according to the teachings of Matsubayashi, the resulting virtual viewpoint image may contain the claimed three-dimensional model of the player, as taught by Matsubayashi.
Regarding claim 7, the combination of Okutani in view of Maruyama and further in view of Matsubayashi teaches the image processing system according to claim 18, wherein a position of the virtual camera is determined based on a position of the three-dimensional model (Matsubayashi, fig. 4A-4D, [0041] “A target point according to the present exemplary embodiment refers to a position to be included in the visual field of the virtual camera 102, i.e., a target position to be displayed in a virtual viewpoint image. For example, by controlling the virtual camera 102 so that a target point is adjusted to a specific subject (a player, the ball, or a goal) and is positioned in the line-of-sight direction of the virtual camera 102, a virtual viewpoint image with the specific subject captured at the center can be generated.”, [0026] describes how virtual viewpoint images are generated using three-dimensional models of the foreground objects located within the imaging region, such as the subject of the target point).
Matsubayashi and the combination of Okutani and Maruyama are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the image processing system of Okutani in view of Maruyama to incorporate the teachings of Matsubayashi to focus both the virtual viewpoint position and the real camera position on a person. The motivation would have been to improve a sports broadcast by following a specific player on the field.
Regarding claim 11, the combination of Okutani in view of Maruyama and further in view of Matsubayashi teaches the image processing system according to claim 18, wherein the object is a human (Matsubayashi, [0041] “For example, by controlling the virtual camera 102 so that a target point is adjusted to a specific subject (a player, the ball, or a goal) and is positioned in the line-of-sight direction of the virtual camera 102, a virtual viewpoint image with the specific subject captured at the center can be generated.”).
Matsubayashi and the combination of Okutani and Maruyama are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the image processing system of Okutani in view of Maruyama to incorporate the teachings of Matsubayashi to focus both the virtual viewpoint position and the real camera position on a person. The motivation would have been to improve a sports broadcast by following a specific player on the field.
Regarding claim 15, the combination of Okutani in view of Maruyama teaches the image processing system according to claim 1, but does not explicitly teach wherein the processors are configured to superimpose an icon representing the virtual viewpoint image on the first image.
Matsubayashi teaches wherein the processors are configured to superimpose an icon representing the virtual viewpoint image on the first image (Matsubayashi fig. 5A, [0044] “The display control unit 33 superimposes the icon representing the three-dimensional position of the target point 103 corresponding to the operation on the operation unit 1 onto the generated overhead view image 201, front view image 202, side view image 203, and plan view image 204. Further, the icon representing the three-dimensional position of the virtual camera 102 obtained in processing (described below) may be superimposed.”).
Matsubayashi and the combination of Okutani and Maruyama are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the image processing system of Okutani in view of Maruyama to incorporate the teachings of Matsubayashi to display an icon on each of the selectable virtual viewpoint images representing the position of the viewpoint. The motivation would have been to improve the operator’s user experience for use in the intended fast-paced sports broadcast environment, allowing them to quickly determine the status and position of each selectable viewpoint before switching to that viewpoint.
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Okutani (US 20200105174 A1) in view of Maruyama (US 20200402281 A1) and further in view of Matsubayashi (US 20210014425 A1) as applied to claim 18 above, and further in view of Kato (US 20200236346 A1).
Regarding claim 8, the combination of Okutani in view of Maruyama and further in view of Matsubayashi teaches the image processing system according to claim 18, but does not explicitly teach wherein an orientation of the virtual camera is determined based on an orientation of the three-dimensional model.
Kato teaches wherein an orientation of the virtual camera is determined based on an orientation of the three-dimensional model (fig. 4B, triangular indicators; [0022] “The orientation of the virtual viewpoint indicated by the camera path may coincide with the orientation of the player's face or correspond to the orientation of the body, feet, or eyes of the player”).
Kato does not explicitly teach that the player is represented by a 3D model in the virtual viewpoint image, but the teachings of Kato may be applied to the combination of Okutani in view of Maruyama and further in view of Matsubayashi, which does teach this limitation (see claim 18). Kato and the combination of Okutani in view of Maruyama and further in view of Matsubayashi are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the image processing system of Okutani in view of Maruyama and further in view of Matsubayashi with the teachings of Kato to enable the alignment of a virtual viewpoint image on both the location and direction of a particular object. The motivation would have been to improve the image processing system’s usefulness for its intended use of sports broadcasting, allowing it to focus on individual players on the field.
Regarding claim 9, the combination of Okutani in view of Maruyama and further in view of Matsubayashi teaches the image processing system according to claim 18, but does not explicitly teach wherein a position of the virtual camera is determined based on a position a predetermined distance behind the three-dimensional model, and an orientation of the virtual camera is determined based on an orientation of the three-dimensional model.
Kato teaches wherein a position of the virtual camera is determined based on a position a predetermined distance behind the three-dimensional model (fig. 4B, item 404 camera positions are shifted rearward compared to true positions in item 403; [0022] “The position of the virtual viewpoint indicated by the camera path may coincide with the player's position or is a position at a predetermined distance from the player (e.g., a position behind the player).”), and an orientation of the virtual camera is determined based on an orientation of the three-dimensional model (fig. 4B, triangular indicators; [0022] “The orientation of the virtual viewpoint indicated by the camera path may coincide with the orientation of the player's face or correspond to the orientation of the body, feet, or eyes of the player”).
Kato does not explicitly teach that the player is represented by a 3D model in the virtual viewpoint image, but the teachings of Kato may be applied to the combination of Okutani in view of Maruyama and further in view of Matsubayashi, which does teach this limitation (see claim 18). Kato and the combination of Okutani in view of Maruyama and further in view of Matsubayashi are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the image processing system of Okutani in view of Maruyama and further in view of Matsubayashi with the teachings of Kato to enable the alignment of a virtual viewpoint image behind the location and based on the direction of a particular object. The motivation would have been to improve the image processing system’s usefulness for its intended use of sports broadcasting, allowing it to follow a particular player around the field.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Okutani (US 20200105174 A1) in view of Maruyama (US 20200402281 A1) and further in view of Matsubayashi (US 20210014425 A1) as applied to claim 18 above, and further in view of Nakao et al. (WO 2019012817 A1, hereinafter Nakao).
Regarding claim 10, the combination of Okutani in view of Maruyama and further in view of Matsubayashi teaches the image processing system according to claim 18, but does not explicitly teach wherein a position of the virtual camera is a position on a spherical surface about the three-dimensional model, and an orientation of the virtual camera is determined based on a direction from the position of the virtual camera to the three-dimensional model.
Nakao teaches wherein a position of the virtual camera is a position on a spherical surface about the three-dimensional model (fig. 10, [0108] “…when the viewpoint mode is the "inward viewpoint" mode, the viewpoint Pv is changed along a spherical surface Ss centered on the reference position Pr as shown in the figure in response to user operation.”, [0109] explains that the reference position Pr and sphere size are selected to enable observation of the target subject St), and an orientation of the virtual camera is determined based on a direction from the position of the virtual camera to the three-dimensional model (fig. 10, [0108] “Since this is the "inward viewpoint" mode, the line of sight direction Dv in this case is set to the direction from the viewpoint Pv toward the reference position Pr”, [0109] explains that the reference position Pr and sphere size are selected to enable observation of the target subject St).
Nakao does not explicitly teach that the subject is represented by a 3D model in the virtual viewpoint image, but the teachings of Nakao may be applied to the combination of Okutani in view of Maruyama and further in view of Matsubayashi, which does teach this limitation (see claim 18). Nakao and the combination of Okutani in view of Maruyama and further in view of Matsubayashi are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the image processing system of Okutani in view of Maruyama and further in view of Matsubayashi with the teachings of Nakao to add the ability to reposition a virtual viewpoint along a spherical surface around the target object. The motivation would have been to allow the operator to easily capture interesting shots focusing on the same object from a constantly changing angle.
Claims 12, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Okutani (US 20200105174 A1) in view of Maruyama (US 20200402281 A1) as applied to claim 1 above, and further in view of de Paz et al. (US 20200042271 A1, hereinafter "de Paz") and He et al. (US 20210152808 A1, hereinafter "He").
Regarding claim 12, the combination of Okutani in view of Maruyama teaches the image processing system according to claim 1, but does not explicitly or completely teach wherein the first image corresponds to a plurality of virtual viewpoint images including the virtual viewpoint image associated with the first side,
wherein, in a case where the input to select the first image is obtained, control to display one of the plurality of virtual viewpoint images in the second display area is performed, and
wherein, in a case where a specific operation is input to the second display area, the virtual viewpoint image displayed in the second display area, and another virtual viewpoint image among the plurality of the virtual viewpoint images different from the virtual viewpoint image displayed in the second display area, are switched.
De Paz teaches wherein the first image corresponds to a plurality of virtual viewpoint images including the virtual viewpoint image associated with the first side ([0267] describes an image gallery 3000 with 2 display areas 3012 and 3016; first display area 3012 displays digital albums which may each contain multiple images (the “plurality of… images”), each album 3020 is represented by a single thumbnail image (the “first image”); the images of de Paz are not necessarily virtual viewpoint images, but the method by which the images were generated is not relevant to this particular limitation),
wherein, in a case where the input to select the first image is obtained, control to display one of the plurality of virtual viewpoint images in the second display area is performed (fig. 33 and 34, [0276] “Upon selection of the album 3332, a first image 3336 or an image contained within the album may be shown in the second section 3324 of the display”), and
wherein, in a case where a specific operation is input, the virtual viewpoint image displayed in the second display area, and another virtual viewpoint image among the plurality of the virtual viewpoint images different from the virtual viewpoint image displayed in the second display area, are switched ([0278] describes how selecting a different thumbnail from the album will replace the image in the second display area with the image shown in the thumbnail: “[0278] The method 3200 can continue with the reception of a second selection of a second thumbnail in the first section 3308 of the albums displayed in the view 3324. The device 100 may receive the selection of the thumbnail, in step 3224. In response to the received selection of a thumbnail in section 3308, the device 100 may display a second image associated with the second thumbnail in the second section 3224, in step 3228”).
Claim 12 recites limitations which are common to digital image gallery or photo album applications: selecting an image which is representative of a larger collection of images, e.g., a thumbnail image representing an album, to open the image collection in a different display, and scrolling or swiping through the display to view the collection of images sequentially. The images of de Paz are not necessarily virtual viewpoint images, but the concept is applicable; the method by which the images are generated is irrelevant to the functionality of the invention, which is chiefly concerned with the user interface by which the images are displayed. The invention of de Paz and the combination of Okutani in view of Maruyama are both analogous to the claimed invention because they are concerned with an interface for displaying images to a user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the image processing system of Okutani in view of Maruyama to incorporate the image gallery systems of de Paz. The motivation would have been to provide a more streamlined interface to selectively display groups of virtual viewpoint images that belong to particular categories, and/or larger numbers of virtual viewpoint images that would overwhelm the viewer and clutter the screen if displayed simultaneously, further improving the operator’s user experience for use in the intended fast-paced sports broadcast environment.
De Paz does not explicitly teach switching the virtual viewpoint images displayed in the second area in a case where a specific operation is input to the second display area.
He teaches this limitation; more specifically, the concept of switching the image displayed in a display area via an input made to the same display area ([0092] “FIG. 3 shows a user interface that may be presented to the user in some embodiments to indicate available viewpoints. In this example, the user interface displays an overhead view of a venue and provides indications of locations of available viewpoints. In this case, viewpoint 302 is the active viewpoint (the viewpoint from which the user is currently experiencing the presentation) and is displayed in a highlighted fashion. Other viewpoints, such as viewpoints 304, 306, 308, 310, 312, 314, 316, may be displayed to indicate their availability, but they are not currently selected by the user.
[0093] During playback, a user interface such as that illustrated in FIG. 3 may be overlaid over the rendered frame at one of the four corners for example, and the user can select a different viewpoint using a user input device such as a touch screen or an HMD controller. A viewpoint switch is then triggered and the user's view is transitioned so that frames from the target viewpoint are rendered on the display.
[0094] FIG. 4 illustrates another user design example in which the location of available viewpoints is indicated using icons displayed as overlays on content 400 displayed on a head-mounted display. The position of each viewpoint icon in the users view corresponds to the spatial position of an available viewpoint. In the example of FIG. 4, icons 406, 414 may be displayed to correspond to the viewpoints 306, 314, respectively, of FIG. 3… The user can select a viewpoint icon in order to switch the user's view of the rendered scene to the associated viewpoint.”)
He does not explicitly teach that the virtual viewpoint is displayed in, and the operation is input to, a second display area, but in both provided examples He does teach that the selection interface is located in the same display area as the virtual viewpoint image display; therefore, if the teachings of He were applied to the invention of Okutani in view of Maruyama and further in view of de Paz, a virtual viewpoint image displayed in the second display area (element 402 of Okutani) could be switched via an operation input to the second display area (selection of a new viewpoint via touchscreen or HMD controller, as taught by He). The claim limitation may be more conventionally expressed in the form of an image gallery which can be scrolled or swiped via a touch screen, but He teaches it in a manner that is more explicit to the display of virtual viewpoint images.
He and the combination of Okutani in view of Maruyama and further in view of de Paz are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the invention of Okutani in view of Maruyama and further in view of de Paz to incorporate the teachings of He to enable switching between the collection of virtual viewpoints using an interface located on the large display area 402. The motivation would have been to improve the operator’s user experience for use in the intended fast-paced sports broadcast environment.
Regarding claim 13, the combination of Okutani in view of Maruyama and further in view of de Paz and He teaches the image processing system according to claim 12, wherein the specific operation is a keyboard typing operation, a mouse click operation, a mouse scroll operation, a touch operation, a slide operation, a flick gesture, a pinch-in operation, or a pinch-out operation on a display device displaying the virtual viewpoint image (He [0093] “During playback, a user interface such as that illustrated in FIG. 3 may be overlaid over the rendered frame at one of the four corners for example, and the user can select a different viewpoint using a user input device such as a touch screen or an HMD controller.”).
He and the combination of Okutani in view of Maruyama and further in view of de Paz are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the image processing system of Okutani in view of Maruyama and further in view of de Paz and He to incorporate the touchscreen functionality of He. The motivation would have been to provide a more streamlined user interface to improve the operator’s experience for use in the intended fast-paced sports broadcast environment.
Regarding claim 14, the combination of Okutani in view of Maruyama and further in view of de Paz and He teaches the image processing system according to claim 12, wherein the processors are configured to superimpose icons corresponding to the plurality of respective virtual viewpoint images on the second display area, and
in a case where an input to select any one of the icons is accepted, switch the virtual viewpoint image displayed in the second display area and the virtual viewpoint image corresponding to the selected icon.
He teaches wherein the processors are configured to superimpose icons corresponding to the plurality of respective virtual viewpoint images on the second display area (Fig. 4, [0094] "FIG. 4 illustrates another user design example in which the location of available viewpoints is indicated using icons displayed as overlays on content 400 displayed on a head-mounted display. The position of each viewpoint icon in the users view corresponds to the spatial position of an available viewpoint. In the example of FIG. 4, icons 406, 414 may be displayed to correspond to the viewpoints 306, 314, respectively, of FIG. 3"), and
in a case where an input to select any one of the icons is accepted, switch the virtual viewpoint image displayed in the second display area and the virtual viewpoint image corresponding to the selected icon ([0094] "The user can select a viewpoint icon in order to switch the user's view of the rendered scene to the associated viewpoint").
He and the combination of Okutani in view of Maruyama and further in view of de Paz are both analogous to the claimed invention because they are in the same field and pertain to the same issue of displaying a virtual viewpoint image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the image processing system of Okutani in view of Maruyama and further in view of de Paz and He to incorporate the teachings of He to enable switching between the collection of virtual viewpoints by selecting icons corresponding to the other virtual viewpoints. The motivation would have been to improve the operator’s user experience for use in the intended fast-paced sports broadcast environment.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN STATZ whose telephone number is (571)272-6654. The examiner can normally be reached Mon-Fri 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at (571)272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BENJAMIN TOM STATZ/Examiner, Art Unit 2611
/TAMMY PAIGE GODDARD/Supervisory Patent Examiner, Art Unit 2611