DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the interaction unit" in line 7. There is insufficient antecedent basis for this limitation in the claim. The phrase should be changed to "an interaction unit".
The same issue is present in claim 10 at line 6.
Claims 2-9 and 11-18 are rejected as being dependent on rejected base claims 1 and 10. Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 7-14, and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kimura (PGPUB 2024/0163414 A1).
Independent Claims
As to claim 1, Kimura (Figs. 1, 2, 4) teaches a system (information processing system) for generating an interactive virtual environment (i.e. performer interaction environment as shown in Fig. 10), the system comprising:
a processor (i.e. CPU 901 controlling audience information output system, performer information input/output system 2, performer video display system 3, ¶ 204; see also display processing unit 23, ¶ 75) in communication with a first interaction unit (arrangement of displays and image-capturing sections as shown in Fig. 2)(¶ 53), the first interaction unit comprising:
a first display device (display area 233A-1)(¶ 53);
a first video capture device (image-capturing section 251-1), wherein the first display device and the first video capture device are disposed along a first side (top side) of the interaction unit (¶ 53);
a second display device (i.e. display 233A, image-capturing section 251 as shown on the bottom side of Fig. 2), wherein the first video capture device is positioned facing at least a portion of the second display device (i.e. faces the displays on the opposite side of the image-capturing device; ¶ 54 specifically discusses the need to turn off the display because the displays fall within the field of view of the image-capturing sections, which teaches that a portion of the display is captured by the image-capturing device) and the first video capture device is operable to capture video data of an interior portion (i.e. image-capturing sections 251 face inward) of the first interaction unit and the at least a portion of the second display device from a first perspective (i.e. perspective from top to bottom); and
a second video capture device (i.e. image-capturing section 251 on bottom as shown in Fig. 2), wherein the second video capture device is positioned facing at least a portion of the first display device (i.e. faces inward and toward image-capturing device 251-1) and the second video capture device is operable to capture video data of the interior portion of the first interaction unit (i.e. faces inward) and the at least a portion of the first display device from a second perspective (i.e. perspective from bottom to top); and
a data storage unit (ROM 903, RAM 905) storing instructions (programs) executable by the processor (¶ 204, 205);
wherein the processor is configured to: provide a first display signal (i.e. visual and audio information for display 233A-1) to the first display device, the first display signal usable to generate a first display (i.e. visual image on 233A-1) of the interactive virtual environment on the first display device (¶ 209),
wherein the first display comprises a first view (i.e. view of the audience as shown in Figs. 10 and 11) into the interactive virtual environment from a perspective of the first display device (Figs. 2, 10, 11),
wherein the perspective of the first display device is defined based on a virtual location (i.e. locations of audience crowds B1-B3 from the perspective of performer A) and a virtual orientation (i.e. orientations of B1-B3 facing toward performer A) of the first interaction unit within the interactive virtual environment (¶ 153);
receive a first video signal (i.e. signal to display area 233A-1 such as large screen 431 in Fig. 17) from the second video capture device, the first video signal comprising captured video data (i.e. image of performer), from the second perspective, of the interior portion of the first interaction unit and at least a portion of the first display of the interactive virtual environment on the first display device (¶ 181); and
provide a second display signal (i.e. image of performer to the audience) to a display device (stereographic hologram 312) of a second interaction unit (Fig. 10: i.e. audience side at concert venue C), the second display signal based on the first video signal (i.e. based on performer) and usable to generate a second display of the interactive virtual environment on the display device of the second interaction unit (Fig. 10)(¶ 153), wherein the second display comprises a second view (i.e. full view around performer) into the interactive virtual environment, the second view including a portion of the interior portion of the first interaction unit and the at least a portion of the first display from a perspective of the display device of the second interaction unit (Fig. 9: i.e. images 311a-c including background of performer, and Fig. 10, ¶ 171: i.e. including background).
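Examiner's note: for illustration of the signal flow mapped above only (first display signal out to the first display, first video signal in from the second capture device, second display signal out to the second unit), the following non-limiting Python sketch summarizes the routing. All class and function names are hypothetical and appear in neither Kimura nor the claims.

    # Illustrative sketch of the claim 1 signal flow; all names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Display:
        shown: str = ""                 # content of the current display signal
        def show(self, signal: str) -> None:
            self.shown = signal

    @dataclass
    class Camera:
        faces: Display                  # the display this camera is positioned toward
        def capture(self, interior: str) -> str:
            # captured video data: the unit interior plus a portion of the faced display
            return f"{interior} | {self.faces.shown}"

    @dataclass
    class InteractionUnit:
        interior: str
        displays: list = field(default_factory=list)
        cameras: list = field(default_factory=list)

    def render_view(location: tuple, orientation_deg: float) -> str:
        # a view into the virtual environment from the unit's virtual pose
        return f"view from {location} at {orientation_deg} deg"

    def route(unit1: InteractionUnit, unit2: InteractionUnit) -> None:
        # (1) first display signal -> first display device of the first unit
        unit1.displays[0].show(render_view((0.0, 0.0), 90.0))
        # (2) first video signal <- second video capture device, which captures
        #     the interior and at least a portion of the first display
        first_video = unit1.cameras[1].capture(unit1.interior)
        # (3) second display signal -> display device of the second unit,
        #     based on the first video signal
        unit2.displays[0].show(first_video)

    # Example wiring: two opposed display/camera pairs in unit 1; one display in unit 2.
    d1, d2 = Display(), Display()
    unit1 = InteractionUnit("interior of unit 1", [d1, d2],
                            [Camera(faces=d2), Camera(faces=d1)])
    unit2 = InteractionUnit("interior of unit 2", [Display()])
    route(unit1, unit2)
    print(unit2.displays[0].shown)   # interior of unit 1 | view from (0.0, 0.0) at 90.0 deg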
As to claim 10, Kimura (Figs. 1, 2, 4) teaches a method for generating an interactive virtual environment (i.e. performer interaction environment as shown in Fig. 10), the method implemented by a processor (i.e. CPU 901 controlling audience information output system, performer information input/output system 2, performer video display system 3, ¶ 204) executing instructions stored on a data storage unit, the processor in communication with a first interaction unit (arrangement of displays and image-capturing sections as shown in Fig. 2)(¶ 53) comprising:
a first display device (display area 233A-1)(¶ 53);
a first video capture device (image-capturing section 251-1), wherein the first display device and the first video capture device are disposed along a first side (top side) of the interaction unit (¶ 53);
a second display device (i.e. display 233A, image-capturing section 251 as shown on the bottom side of Fig. 2), wherein the first video capture device is positioned facing at least a portion of the second display device (i.e. faces the displays on the opposite side of the image-capturing device; ¶ 54 specifically discusses the need to turn off the display because the displays fall within the field of view of the image-capturing sections, which teaches that a portion of the display is captured by the image-capturing device) and the first video capture device is operable to capture video data of an interior portion (i.e. image-capturing sections 251 face inward) of the first interaction unit and the at least a portion of the second display device from a first perspective (i.e. perspective from top to bottom); and
a second video capture device (i.e. image-capturing section 251 on bottom as shown in Fig. 2), wherein the second video capture device is positioned facing at least a portion of the first display device (i.e. faces inward and toward image-capturing device 251-1) and the second video capture device is operable to capture video data of the interior portion of the first interaction unit (i.e. faces inward) and the at least a portion of the first display device from a second perspective (i.e. perspective from bottom to top); and
wherein the method comprises:
providing, by the processor, a first display signal (i.e. visual and audio information for display 233A-1) to the first display device, the first display signal usable to generate a first display (i.e. visual image on 233A-1) of the interactive virtual environment on the first display device (¶ 209),
wherein the first display comprises a first view (i.e. view of the audience as shown in Figs. 10 and 11) into the interactive virtual environment from a perspective of the first display device (Figs. 2, 10, 11),
wherein the perspective of the first display device is defined based on a virtual location (i.e. locations of audience crowds B1-B3 from the perspective of performer A) and a virtual orientation (i.e. orientations of B1-B3 facing toward performer A) of the first interaction unit within the interactive virtual environment (¶ 153);
receiving, by the processor, a first video signal (i.e. signal to display area 233A-1 such as large screen 431 in Fig. 17) from the second video capture device, the first video signal comprising captured video data (i.e. image of performer), from the second perspective, of the interior portion of the first interaction unit and at least a portion of the first display of the interactive virtual environment on the first display device (¶ 181); and
providing, by the processor, a second display signal (i.e. image of performer to the audience) to a display device (stereographic hologram 312) of a second interaction unit (Fig. 10: i.e. audience side at concert venue C), the second display signal based on the first video signal (i.e. based on performer) and usable to generate a second display of the interactive virtual environment on the display device of the second interaction unit (Fig. 10)(¶ 153), wherein the second display comprises a second view (i.e. full view around performer) into the interactive virtual environment, the second view including a portion of the interior portion of the first interaction unit and the at least a portion of the first display from a perspective of the display device of the second interaction unit (Fig. 9: i.e. images 311a-c including background of performer, and Fig. 10, ¶ 171: i.e. including background).
Dependent Claims
As to claim 2, Kimura (Figs. 10, 11) teaches wherein the processor is further configured to:
receive a second video signal (i.e. audience video feed into performer side) from a video capture device (image-capturing section on bottom as shown in Fig. 2) of the second interaction unit, the second video signal comprising video data (i.e. color corrected video) of an interior of the second interaction unit (¶ 163, 164: i.e. generate display for audience video); and
provide a third display signal (i.e. opposite audience view, bottom left side of the audience in Fig. 17) to the second display device, the third display signal based on the second video signal and usable to generate a third display (i.e. image of the eight audience members on the bottom left as shown in Fig. 17) of the interactive virtual environment on the second display device, wherein the third display comprises a third view into the interactive virtual environment, the third view including a portion of the environment (i.e. concert venue) of the second interaction unit from a perspective of the second display device (¶ 153, Fig. 10), wherein the perspective of the second display device is defined based on the virtual location and the virtual orientation (three audience videos at specific orientations as shown in Figs. 10, 11 and 17) of the first interaction unit within the interactive virtual environment (¶ 134: i.e. line of sight direction of the performer; ¶ 154: i.e. based on the performer's gaze expression, the virtual audiences change, such as by selecting the venue).
As to claims 3 and 12, Kimura (Fig. 1) teaches wherein the first display signal is based on a third video signal (i.e. video from concert venue C may be considered the second video signal, and video from concert venue D the third video signal) received from a video capture device (camera) of a third interaction unit (concert venue D), the third video signal comprising captured video data of the interior portion of the third interaction unit, and the first view includes a portion of the interior portion of the third interaction unit (Figs. 11 and 17: i.e. the performer is able to view concert venue C or D by choice on displays 233 and 431 as shown in Fig. 17).
As to claims 4 and 13, Kimura (Fig. 2) teaches wherein the first interaction unit comprises three or more display devices (display areas 233A-1 to 233A-n) and three or more video capture devices (image-capturing sections 251-1 to 251-m)(Fig. 2), wherein:
each given video capture device of the three or more video capture devices is positioned facing at least a portion of a given display device of the three or more display devices and the given video capture device is operable to capture video data of the interior portion of the first interaction unit and the at least a portion of the given display device from a given perspective of the multiple perspectives (Fig. 2: i.e. each display and image capturing section are facing each other. Each image capturing section captures the images on the display area along with the performer within the field of view)(¶ 53); and
the processor is further configured to provide a given display signal (output of video processing section 232-1 to 232-n) to each given display device of the three or more display devices, the given display signal usable to generate a given display of the interactive virtual environment on the given display device (Fig. 4), wherein the given display comprises a given view (i.e. portion of full surround view as shown in Fig. 2) into the interactive virtual environment from a respective perspective of the given display device, wherein each given view is determined based on the virtual location and virtual orientation of the first interaction unit within the interactive virtual environment (¶ 90: i.e. each piece of image data is separated by video signal separating section 231).
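Examiner's note: the per-display views recited in claims 4 and 13 follow the same pattern for any number of sides. The following non-limiting sketch assumes evenly spaced sides; the function and its parameters are invented for illustration and are not drawn from Kimura.

    # Hypothetical sketch: each display device of an n-sided unit receives a view
    # determined by the unit's virtual location/orientation plus that side's offset.
    def per_display_views(location, orientation_deg, n_sides):
        views = []
        for i in range(n_sides):
            side_angle = (orientation_deg + i * 360.0 / n_sides) % 360.0
            views.append((location, side_angle))   # one view per display device
        return views

    # A four-sided unit at the origin facing 0 degrees yields views at 0/90/180/270:
    print(per_display_views((0.0, 0.0), 0.0, 4))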
As to claim 5, Kimura (Fig. 1) teaches wherein:
the first interaction unit further comprises a first audio playback device (sound output device such as speaker or headphone)(¶ 209), and a first audio capture device (microphone held by the performer)(¶ 75) positioned to capture audio data within the interior portion of the first interaction unit (¶ 56); and the processor is further configured to:
provide a first audio signal (i.e. sound from performer) to the first audio playback device, the first audio signal usable to generate a first audio output associated with the interactive virtual environment by the first audio playback device (¶ 56: i.e. sound of the performer encoded together with performer video and sent to performer video display system 3 to output sound), wherein the first audio signal includes at least a portion of audio data captured by an audio capture device of the second interaction unit (¶ 56: i.e. sound of the audience is transferred to the performer side; therefore, a microphone must be present on the audience side); and
provide a second audio signal (i.e. sound from the audience side) to an audio playback device (speaker 917) of the second interaction unit, the second audio signal usable to generate a second audio output associated with the interactive virtual environment by the audio playback device of the second interaction unit (¶ 56 and sound effects at the concert venue D, ¶ 156), wherein the second audio signal includes at least a portion of audio data captured by the first audio capture device (¶ 56: i.e. sound of the performer is transferred to the audience side).
As to claims 7 and 16, Kimura (Figs. 10, 11) teaches wherein the first interaction unit and the second interaction unit are virtually arranged within the interactive virtual environment (Figs. 10, 11) such that:
the second display device and the second video capture device are virtually disposed along a second side of the first interaction unit (Fig. 10: i.e. arranged along each side as if performer and audiences are next to each other); and
the display device of the second interaction unit is disposed along a side of the second interaction unit; and the processor is further configured to virtually position the first interaction unit and the second interaction unit adjacent to each other in the interactive virtual environment by positioning the second side of the first interaction unit and the side of the second interaction unit adjacent and substantially parallel (Fig. 10: i.e. parallel arrangement for B1, performer, and B3 sides) to each other in the interactive virtual environment (Fig. 10, 11, 13: i.e. virtual space created based on the gaze of the performer by the processing system as in Fig. 1).
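Examiner's note: the virtual adjacency mapped for claims 7 and 16 amounts to posing the second unit so that the two sides sit adjacent and substantially parallel. The following non-limiting sketch assumes a 2D virtual geometry; the function name, parameters, and gap value are illustrative only and do not appear in Kimura.

    import math

    # Hypothetical sketch: pose unit 2 so its display side sits adjacent and
    # substantially parallel to the chosen side of unit 1, across a small gap.
    def place_adjacent(unit1_pos, unit1_orient_deg, side_offset_deg, gap=1.0):
        side_angle = math.radians(unit1_orient_deg + side_offset_deg)
        pos2 = (unit1_pos[0] + gap * math.cos(side_angle),
                unit1_pos[1] + gap * math.sin(side_angle))
        # turn unit 2 by 180 degrees so the two sides are parallel, facing each other
        orient2 = (unit1_orient_deg + side_offset_deg + 180.0) % 360.0
        return pos2, orient2

    # A unit at the origin whose second side faces 90 degrees: unit 2 lands one
    # unit away along that side and is turned to face back toward unit 1.
    print(place_adjacent((0.0, 0.0), 0.0, 90.0))   # approx ((0.0, 1.0), 270.0)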
As to claims 8 and 17, Kimura (Fig. 11) teaches wherein the interactive virtual environment comprises a plurality of virtual interaction units (i.e. switching the venue from C to D) arranged within the interactive virtual environment (Fig. 11), the plurality of virtual interaction units including a first plurality of physical interaction units (button, microphone, and input device 915) having corresponding physical interaction units capable of receiving one or more users (performer) therewithin, and at least one phantom interaction unit (avatar T and image of microphone) that does not correspond to any in-use physical interaction unit (i.e. switching the venue from C to D changes the displayed audiences digitally without physically affecting a physical interaction unit).
As to claim 9, Kimura (Fig. 2) teaches wherein the first interaction unit is provided using a physical environment unit (PEU) (Fig. 2) comprising multiple walls (i.e. surround display) and a floor (i.e. floor performer A stands on) configured to fully surround a user while the user participates in the interactive virtual environment (Fig. 2), and the second interaction unit is provided as a physical display unit (PDU) (i.e. audience side in Fig. 10 with video 312) comprising a first side (B1) and a second side (B3) arranged adjacent and substantially parallel to each other along a common optical axis (Fig. 10: i.e. parallel in the diagonal direction from bottom left to top right), the display device and the video capture device of the second interaction unit being disposed along the first side of the PDU (Fig. 10: i.e. virtually positions the performer between the B1 and B2 sides of the audiences); and
wherein the processor is further configured to:
provide a fourth display signal (i.e. signal to audience side from the performer, corresponding to a different side such as B2) to a second display device (stereographic hologram 312 or 3D display or HMD) of the second interaction unit disposed along the second side of the PDU (¶ 153),
the fourth display signal usable to generate a fourth display (i.e. side B2 view of video 312) of the interactive virtual environment on the second display device of the second interaction unit, wherein the fourth display comprises a fourth view into the interactive virtual environment, the fourth view including a portion of the interior portion of the first interaction unit and the generated third display (i.e. B2 side of performer on the right side of Fig. 10 is displayed to B2 side of audience on the left side of Fig. 10); and
wherein the first display includes video data of the environment of the PDU captured by a second video capture device (i.e. different angle of camera capturing audiences B1-B3) of the second interaction unit disposed along the second side of the PDU (Fig. 10, ¶ 61: i.e. each audience information video before the stitching/joining process, which is obtained by a single monocular camera among multiple monocular cameras capturing different areas).
As to claim 11, Kimura (Figs. 10, 11) teaches receiving, by the processor, a second video signal (i.e. audience video feed into the performer side) from a video capture device (image-capturing section on bottom as shown in Fig. 2) of the second interaction unit, the second video signal comprising video data (i.e. color corrected video) of an interior of the second interaction unit (¶ 163, 164: i.e. generate display for audience video); and
providing, by the processor, a third display signal (i.e. opposite audience view, bottom left side of the audience in Fig. 17) to the second display device, the third display signal based on the second video signal and usable to generate a third display (i.e. image of the eight audience members on the bottom left as shown in Fig. 17) of the interactive virtual environment on the second display device, wherein the third display comprises a third view into the interactive virtual environment, the third view including a portion of the environment (i.e. concert venue) of the second interaction unit from a perspective of the second display device (¶ 153, Fig. 10), wherein the perspective of the second display device is defined based on the virtual location and the virtual orientation (three audience videos at specific orientations as shown in Figs. 10, 11 and 17) of the first interaction unit within the interactive virtual environment (¶ 134: i.e. line of sight direction of the performer; ¶ 154: i.e. based on the performer's gaze expression, the virtual audiences change, such as by selecting the venue).
As to claim 14, Kimura (Fig. 1) teaches wherein:
the first interaction unit further comprises a first audio playback device (sound output device such as speaker or headphone)(¶ 209), and a first audio capture device (microphone held by the performer)(¶ 75) positioned to capture audio data within the interior portion of the first interaction unit (¶ 56); and the method further comprises:
providing, by the processor, a first audio signal (i.e. sound from performer) to the first audio playback device, the first audio signal usable to generate a first audio output associated with the interactive virtual environment by the first audio playback device (¶ 56: i.e. sound of the performer encoded together with performer video and sent to performer video display system 3 to output sound), wherein the first audio signal includes at least a portion of audio data captured by an audio capture device of the second interaction unit (¶ 56: i.e. sound of the audience is transferred to the performer side; therefore, a microphone must be present on the audience side); and
providing, by the processor, a second audio signal (i.e. sound from the audience side) to an audio playback device (speaker 917) of the second interaction unit, the second audio signal usable to generate a second audio output associated with the interactive virtual environment by the audio playback device of the second interaction unit (¶ 56 and sound effects at the concert venue D, ¶ 156), wherein the second audio signal includes at least a portion of audio data captured by the first audio capture device (¶ 56: i.e. sound of the performer is transferred to the audience side).
As to claim 18, Kimura (Fig. 2) teaches wherein the first interaction unit is provided using a physical environment unit (PEU) (Fig. 2) comprising multiple walls (i.e. surround display) and a floor (i.e. floor performer A stands on) configured to fully surround a user while the user participates in the interactive virtual environment (Fig. 2), and the second interaction unit is provided as a physical display unit (PDU) (i.e. audience side in Fig. 10 with video 312) comprising a first side (B1) and a second side (B3) arranged adjacent and substantially parallel to each other along a common optical axis (Fig. 10: i.e. parallel in the diagonal direction from bottom left to top right), the display device and the video capture device of the second interaction unit being disposed along the first side of the PDU (Fig. 10: i.e. virtually positions the performer between the B1 and B2 sides of the audiences); and
the method further comprises:
providing, by the processor, a fourth display signal (i.e. signal to audience side from the performer, corresponding to a different side such as B2) to a second display device (stereographic hologram 312 or 3D display or HMD) of the second interaction unit disposed along the second side of the PDU (¶ 153),
the fourth display signal usable to generate a fourth display (i.e. side B2 view of video 312) of the interactive virtual environment on the second display device of the second interaction unit, wherein the fourth display comprises a fourth view into the interactive virtual environment, the fourth view including a portion of the interior portion of the first interaction unit and the generated third display (i.e. B2 side of performer on the right side of Fig. 10 is displayed to B2 side of audience on the left side of Fig. 10); and
wherein the first display includes video data of the environment of the PDU captured by a second video capture device (i.e. different angle of camera capturing audiences B1-B3) of the second interaction unit disposed along the second side of the PDU (Fig. 10, ¶ 61: i.e. each audience information video before the stitching/joining process, which is obtained by a single monocular camera among multiple monocular cameras capturing different areas).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kimura in view of Valli et al. (PGPUB 2020/0099891 A1).
As to claims 6 and 15, Kimura teaches the system of claim 5 but does not specifically teach that the volume level of the audio playback is based on a virtual distance.
Valli (Figs. 1, 4) teaches wherein the output volume level of the first audio playback device associated with the portion of audio data captured by the audio capture device of the second interaction unit is based on a virtual distance between the virtual location of the first interaction unit and a second virtual location of the second interaction unit in the interactive virtual environment (¶ 140: i.e. volume is controlled as a function of proximity/distance in a shared virtual geometry from a remote participant).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Valli's virtual meeting system into Kimura's system, so as to provide an unrestricted, natural experience of having a meeting in virtual space (¶ 55, 61).
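Examiner's note: Valli's distance-dependent volume (¶ 140) can be expressed as a simple attenuation function. The falloff form below is assumed for illustration only; Valli discloses that volume is controlled as a function of virtual proximity, not this particular formula.

    # Hypothetical attenuation sketch; the specific falloff curve is assumed.
    # Valli (paragraph 140) discloses only that volume is a function of proximity.
    def playback_volume(virtual_distance: float, base_volume: float = 1.0,
                        falloff: float = 0.5) -> float:
        # volume of the remote unit's audio decreases with the virtual distance
        # between the two interaction units in the shared virtual geometry
        return base_volume / (1.0 + falloff * max(virtual_distance, 0.0))

    print(playback_volume(0.0))   # 1.0   (co-located units, full volume)
    print(playback_volume(5.0))   # ~0.29 (distant unit, attenuated)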
Claims 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kimura in view of Goetzinger, Jr. (USPAT 10,182,210 B1).
As to claim 19, Kimura (Figs. 1, 2, 4) teaches a system (information processing system) for generating an interactive virtual environment (i.e. performer interaction environment as shown in Fig. 10), the system comprising a data storage unit (ROM, RAM, and storage device 919) storing processor-executable instructions (programs)(¶ 210),
a processor (i.e. CPU 901 controlling audience information output system, performer information input/output system 2, performer video display system 3, ¶ 204) in communication with the data storage unit, a first interaction unit (arrangement of displays and image capturing section as shown in Fig. 2 for the audience and utilizing the 3D display 312)(¶ 53) and a second interaction unit (i.e. arrangement of displays and image-capturing sections as shown in Fig. 2 for the performer)(¶ 61, 124), wherein:
the first interaction unit (i.e. for the audience side) comprises:
a first display device (i.e. B1 view of 3D display 312 from the B1 audience side),
a second display device (i.e. B3 view of 3D display 312 from the B3 audience side),
a first video capture device (i.e. monocular cameras)(¶ 61); and
the second interaction unit (i.e. performer side) comprises:
a third display device (display area 233A-1, such as large screen 431 for monitoring purposes in Fig. 17)(¶ 53);
a fourth display device (i.e. display 233A, image-capturing section 251 as shown on the bottom side of Fig. 2, which can be used to view the audience image as shown in Fig. 17),
a third video capture device (image-capturing section 251-1) positioned facing at least a portion of the fourth display device (i.e. top image capturing section 251-1 faces bottom in Fig. 2), the third video capture device operable to capture video data of an interior portion of the second interaction unit and the at least a portion of the fourth display device from a third perspective (Fig. 2: i.e. captures inward including the performer and bottom side), and
a fourth video capture device (i.e. image-capturing section 251 on bottom side facing top side) positioned facing at least a portion of the third display device (i.e. bottom image capturing section 251 faces top in Fig. 2), the fourth video capture device operable to capture video data of the interior portion of the second interaction unit and the at least a portion of the third display device from a fourth perspective (Fig. 2: i.e. captures inward including the performer and the top side); and
wherein the processor is configured to:
receive a first video signal (i.e. video capture of audience B1 side) from the first video capture device (Fig. 10, ¶ 61),
provide a first display signal (i.e. captured audience view on B1 side) to the third display device (i.e. displayed on B1 side of performer), the first display signal based on the first video signal and usable to generate a first display (i.e. image of audience on B1) of the interactive virtual environment on the third display device (Fig. 10), wherein the first display includes at least a portion of the surrounding environment of the first interaction unit from the first perspective (Fig. 10),
receive a second video signal (i.e. captured audience view on B3 side) from the second video capture device (i.e. displayed on B3 side of performer), provide a second display signal to the fourth display device, the second display signal based on the second video signal and usable to generate a second display of the interactive virtual environment on the fourth display device (i.e. image of audience on B3), wherein the second display includes at least a portion of the surrounding environment of the first interaction unit from the second perspective (Fig. 10),
receive a third video signal (i.e. captured image of the performer on B1 side) from the third video capture device, the third video signal comprising captured video data, from the third perspective, of the interior portion of the second interaction unit and at least a portion of the second display of the interactive virtual environment on the fourth display device (Fig. 10: i.e. B1 side of performer is displayed to the audience on B1 side), and
receive a fourth video signal (i.e. captured image of performer on B3 side) from the fourth video capture device, the fourth video signal comprising captured video data, from the fourth perspective, of the interior portion of the second interaction unit and at least a portion of the first display of the interactive virtual environment on the third display device (Fig. 10: i.e. B3 side of performer is displayed to the audience on B3 side).
Kimura does not specifically teach an outwardly facing capture device.
Goetzinger (Figs. 7, 8) teaches a first video capture device (camera 142n, i.e. on the right side of Fig. 7) positioned facing outwardly (360 degree camera) at the first interaction unit (i.e. conference table 130 with other accessories in Fig. 7), the first video capture device operable to capture video data of a surrounding environment of the first interaction unit from a first perspective (i.e. the 360 degree camera would capture images of the attendees sitting on the chairs 20)(col. 39 lines 30-34, Fig. 8), and
a second video capture device (camera 142n, i.e. on the left side of Fig. 7, within the table) positioned facing outwardly at the first interaction unit, the second video capture device operable to capture video data of the surrounding environment of the first interaction unit from a second perspective (i.e. a 360 degree view different from that of the right 142n)(col. 39 lines 30-34, Fig. 8).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Goetzinger's augmented reality environment as shown in Fig. 8 into Kimura's performer and audience environment system, so as to effectively present different real-time perspective views to different attendees, including those at remote locations (col. 7, line 63 - col. 8, line 14).
As to claim 20, Kimura (Figs. 10, 11) teaches providing a third display signal to the first display device, the third display signal based on the third video signal and usable to generate a third display of the interactive virtual environment on the first display device (i.e. generate video of the performer to the audiences based on perspective), wherein the third display includes at least a portion of the interior portion of the second interaction unit and the second display of the interactive virtual environment on the fourth display device from the third perspective (Fig. 17: i.e. on large screen 431, the interior portion including the performer and the captured audience view are displayed together on the display).
Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANGHYUK PARK whose telephone number is (571) 270-7359. The examiner can normally be reached Monday through Friday, 10:00 AM - 6:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chanh Nguyen, can be reached at (571) 272-7772. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/SANGHYUK PARK/Primary Examiner, Art Unit 2623