DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 20 is objected to because of the following informalities: the limitation “audio stream to an the device” appears to be a typographical error for “audio stream to the device”. Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 8, 9, and 16 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 9, 16, and 17 of U.S. Patent No. 11,733,826 in view of Valli (US 2017/0339372) (hereinafter referred to as Valli). U.S. Patent No. 11,733,826 teaches all of the limitations of claim 1 except the limitation “wherein the participants include augmented or virtual reality (AR/VR) participants”.
Valli teaches the virtual conference information can be presented on a plurality of different display types (A 3D view such as those illustrated in FIGS. 3A-3C may be displayed as a 2D projection on a conventional 2D monitor. In other embodiments, a 3D view such as those of FIGS. 3A-3C may be displayed using a 3D display device, such as a 3D virtual reality or augmented reality headset, an auto-stereoscopic 3D display, or a holographic display. In embodiments using 3D displays, the display can be processed to give the appearance of eye contact for one or more users at each of multiple sites. See paragraph [0035])
U.S. Patent No. 11,733,826 and Valli both teach presenting representations of users, and Valli teaches that a plurality of types of devices can display and interact with the representations; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of U.S. Patent No. 11,733,826 with the multimodal virtual conference presentation techniques of Valli such that a plurality of types of users could collaborate and visually work together.
Below is a claim mapping between claim 1 of the current application and claim 1 of U.S. Patent No. 11,733,826.

Current Application: 1. A method comprising:
U.S. Patent No. 11,733,826: 1. A method comprising:

Current Application: providing a video conference session in a virtual environment;
U.S. Patent No. 11,733,826: providing an interactive view of the virtual environment;

Current Application: displaying digital representations of participants in a virtual environment, wherein the participants include augmented or virtual reality (AR/VR) participants and traditional video conference participants;
U.S. Patent No. 11,733,826: providing a digital representation of a video conference participant in the virtual environment;

Current Application: receiving an input from a traditional video conference participant via an interactive interface presented on a device associated with the traditional video conference participant indicating a new virtual location within the virtual environment;
U.S. Patent No. 11,733,826: receiving user input based on one or more selections of virtual environment locations displayed with an instance of the interactive view of the virtual environment;

Current Application: updating the virtual environment to reflect the new virtual location of a digital representation of the traditional video conference participant based on the input;
U.S. Patent No. 11,733,826: moving the digital representation of the video conference participant to a new location in the virtual environment based on the received user input;

Current Application: generating a two-dimensional video stream from a perspective of the new virtual location; and transmitting the two-dimensional video stream to the device associated with the traditional video conference participant.
U.S. Patent No. 11,733,826: providing a video stream of the virtual environment from the viewpoint of the digital representation within the virtual environment; and updating the video stream to display the virtual environment from the new location.
Below is a claim mapping between the current application and U.S. Patent No. 11,733,826: claim 1 of the current application maps to claim 1 of U.S. Patent No. 11,733,826; claim 8 maps to claim 16; claim 9 maps to claim 17; and claim 16 maps to claim 9.
Claims 1, 8, 9, 10, and 16 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7, 8, 14, and 15 of U.S. Patent No. 12,086,378. Although the claims at issue are not identical, they are not patentably distinct from each other because they are directed to the same invention, and claim 1 of the current application corresponds with claim 1 of U.S. Patent No. 12,086,378. Claim 1 of U.S. Patent No. 12,086,378 anticipates claim 1 of the current application because it includes all of the limitations of claim 1 of the current application.
Below is a limitation mapping between claim 1 of the current application and claim 1 of U.S. Patent No. 12,086,378.
Current Application: 1. A method comprising:
U.S. Patent No. 12,086,378: 1. A method comprising:

Current Application: displaying digital representations of participants in a virtual environment, wherein the participants include augmented or virtual reality (AR/VR) participants and traditional video conference participants;
U.S. Patent No. 12,086,378: displaying digital representations of video conference participants in a virtual environment, wherein the virtual environment includes augmented or virtual reality (AR/VR) conference participants and the video conference participants, wherein the video conference participants are non-AR/VR conference participants;

Current Application: receiving an input from a traditional video conference participant via an interactive interface presented on a device associated with the traditional video conference participant indicating a new virtual location within the virtual environment;
U.S. Patent No. 12,086,378: receiving, via an interactive view presented to one of the video conference participants and with respect to one of the digital representations, an input indicative of a virtual location of the virtual environment, wherein the interactive view is separate from the virtual environment and is presented at a display of a device of the one of the video conference participants;

Current Application: updating the virtual environment to reflect the new virtual location of a digital representation of the traditional video conference participant based on the input;
U.S. Patent No. 12,086,378: moving the one of the digital representations to a new location in the virtual environment based on the input;

Current Application: generating a two-dimensional video stream from a perspective of the new virtual location;
U.S. Patent No. 12,086,378: capturing a two-dimensional video stream of the virtual environment from the new location;

Current Application: and transmitting the two-dimensional video stream to the device associated with the traditional video conference participant.
U.S. Patent No. 12,086,378: and displaying the two-dimensional video stream at the device of the one of the video conference participants.
Below is a claim mapping between the current application and U.S. Patent No. 12,086,378.
Claim 1 of the current application maps to claim 1 of U.S. Patent No. 12,086,378; claim 8 maps to claim 7; claim 9 maps to claim 14; claim 10 maps to claim 8; and claim 16 maps to claim 15.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6-9, 13-17, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Valli (US 2017/0339372) (hereinafter referred to as Valli).
Regarding claim 1, Valli teaches A method (Systems and methods are described that enable a 3D telepresence. See abstract) comprising:
displaying digital representations of participants in a virtual environment, wherein the participants include augmented or virtual reality (AR/VR) participants and traditional video conference participants (In some embodiments, the reconstructed views shown in FIGS. 3A-3C may be live video feeds of the user within the user's background. See paragraph [0034])(See Figure 6);
receiving an input from a traditional video conference participant via an interactive interface presented on a device associated with the traditional video conference participant indicating a new virtual location within the virtual environment (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])( In some embodiments, the reconstructed views shown in FIGS. 3A-3C may be live video feeds of the user within the user's background. In other embodiments, the reconstructed views may be rendered avatars within a virtual environment. In embodiments where the user is using virtual reality accessories (such as head mounted displays, or HMDs), the reconstructed view may segment out the virtual reality accessories, and insert representations of the user's face, hands, or any other part of the user obstructed by virtual reality accessories. Such embodiments allow more natural interaction between participants, a main example being more natural eye-contact. See paragraph [0034])(A 3D view such as the view of FIG. 6 can be displayed on, for example, a two-dimensional computer monitor of a participant in the virtual meeting (in this example the participant represented by avatar 505). See paragraph [0039])(See figure 9, feedback loop from user wants to keep his/her view? No, capture user input) ( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]);
updating the virtual environment to reflect the new virtual location of a digital representation of the traditional video conference participant based on the input (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])(See figure 9, Update configuration data if needed/allowed, modify the site model (Scale, position angle), render each site into a shared 3d representation) ( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]);
generating a two-dimensional video stream from a perspective of the new virtual location (See figure 9, Update configuration data if needed/allowed, modify the site model (Scale, position angle), render each site into a shared 3d representation) (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])( The avatars are free to be represented as sitting at the table, but are also able to move about the virtual lobby and into the reconstructed 3D view. In various embodiments, a user can choose to be displayed as an avatar, as a real-time reconstruction in their actual environment, or as a real-time reconstruction in a virtual environment. In some embodiments, the 3D view takes the form of any of the 3D reconstructions disclosed herein. See paragraph [0040])(See figure 6 and figure 8) ( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]) ( In some embodiments, the reconstructed views shown in FIGS. 3A-3C may be live video feeds of the user within the user's background. In other embodiments, the reconstructed views may be rendered avatars within a virtual environment. In embodiments where the user is using virtual reality accessories (such as head mounted displays, or HMDs), the reconstructed view may segment out the virtual reality accessories, and insert representations of the user's face, hands, or any other part of the user obstructed by virtual reality accessories. Such embodiments allow more natural interaction between participants, a main example being more natural eye-contact. See paragraph [0034]); and
transmitting the two-dimensional video stream to the device associated with the traditional video conference participant (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])( The avatars are free to be represented as sitting at the table, but are also able to move about the virtual lobby and into the reconstructed 3D view. In various embodiments, a user can choose to be displayed as an avatar, as a real-time reconstruction in their actual environment, or as a real-time reconstruction in a virtual environment. In some embodiments, the 3D view takes the form of any of the 3D reconstructions disclosed herein. See paragraph [0040])(See figure 9, provide compiled view for each user)( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]).
Regarding claim 2, Valli teaches The method of claim 1, further comprising:
providing user interface controls, via an AR/VR controller associated with an AR/VR participant, enabling the AR/VR participant to move a digital representation associated with the AR/VR participant (a user interface 1030-such as a touchscreen, keyboard, or mouse, and a display 1035, such as virtual glasses, projectors, or 3D displays. See paragraph [0057])( In embodiments where the user is using virtual reality accessories (such as head mounted displays, or HMDs), the reconstructed view may segment out the virtual reality accessories, and insert representations of the user's face, hands, or any other part of the user obstructed by virtual reality accessories. Such embodiments allow more natural interaction between participants, a main example being more natural eye-contact. See paragraph [0034])(Touchscreen, keyboard or mouse are considered the controller) (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043]).
Regarding claim 3, Valli teaches The method of claim 1, further comprising: displaying a top-down view of the virtual environment at the device associated with the traditional video conference participant (FIG. 5 depicts a top-down view of a virtual meeting space 500. See paragraph [0038]).
Regarding claim 4, Valli teaches The method of claim 1, wherein the interactive interface includes a graphical user interface (GUI) with selectable elements for choosing the new virtual location, further comprising: receiving the input as one of the selectable elements (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043]) (a user interface 1030-such as a touchscreen, keyboard, or mouse, and a display 1035, such as virtual glasses, projectors, or 3D displays. See paragraph [0057]).
Regarding claim 6, Valli teaches The method of claim 1, further comprising: dynamically adjusting respective positions of at least some of the other digital representations based on the new virtual location of the traditional video conference participant (Modify the site model (Scale, position, angle), render each site into a shared 3d representation, render synthetic lobby, provide compiled view for each user, see figure 9)( A meeting table is shown in the center of the virtual meeting space, however any virtual objects may be rendered within the virtual meeting space such as virtual chairs, plants, paintings, wallpaper, windows, and any other virtual objects known to one of skill in the art. See paragraph [0037])( In some embodiments, the 3D reconstruction is scaled respective to each of the other 3D reconstructions. The 3D reconstruction is scaled with respect to the common lobby connecting them, in particular, the avatar, figures and common natural objects like tables and chairs. In some embodiments, the scaling of the 3D reconstruction is based on actual measurements of real physical dimensions of the 3D captured sites. Additionally, scaling may be a combination of automatic scaling and manual adjustments initiated by users. See paragraph [0052]).
Regarding claim 7, Valli teaches The method of claim 1, further comprising: displaying indicators in the virtual environment showing available new locations including the new virtual location (Empty Virtual chair or other user with which to switch) ( A meeting table is shown in the center of the virtual meeting space, however any virtual objects may be rendered within the virtual meeting space such as virtual chairs, plants, paintings, wallpaper, windows, and any other virtual objects known to one of skill in the art. See paragraph [0037]) (See figure 6, users seated around a table at chairs).
Regarding claim 8, Valli teaches a system (Systems and methods are described that enable a 3D telepresence. See abstract)( FIG. 10 depicts an exemplary system, in accordance with an embodiment. The exemplary system 1000 is representative of a system capable of performing the disclosed methods. The components of the exemplary system 1000 include a control system 1005, a rendering system 1010, a processing system 1015, data storage 1020, a communication interface 1025, a user interface 1030-such as a touchscreen, keyboard, or mouse, and a display 1035, such as virtual glasses, projectors, or 3D displays. See paragraph [0057]), comprising:
one or more memories ( FIG. 10 depicts an exemplary system, in accordance with an embodiment. The exemplary system 1000 is representative of a system capable of performing the disclosed methods. The components of the exemplary system 1000 include a control system 1005, a rendering system 1010, a processing system 1015, data storage 1020, a communication interface 1025, a user interface 1030-such as a touchscreen, keyboard, or mouse, and a display 1035, such as virtual glasses, projectors, or 3D displays. See paragraph [0057]); and
one or more processors, the one or more processors configured to execute instructions stored in the one or more memories (The processing system 1015 may include a processor. The processor may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the components to operate in a wireless environment. The processor may be coupled to the communication interface, or a transceiver, which may be coupled to a transmit/receive element for communication with other networks. The various components, such as the processor and the transceiver, are depicted as separate components, but it will be appreciated that the processor and the transceiver may be integrated together in an electronic package or chip. See paragraph [0058])( FIG. 10 depicts an exemplary system, in accordance with an embodiment. The exemplary system 1000 is representative of a system capable of performing the disclosed methods. The components of the exemplary system 1000 include a control system 1005, a rendering system 1010, a processing system 1015, data storage 1020, a communication interface 1025, a user interface 1030-such as a touchscreen, keyboard, or mouse, and a display 1035, such as virtual glasses, projectors, or 3D displays. See paragraph [0057]) to:
display digital representations of participants in a virtual environment, wherein the participants include augmented or virtual reality (AR/VR) participants and traditional video conference participants (In some embodiments, the reconstructed views shown in FIGS. 3A-3C may be live video feeds of the user within the user's background. See paragraph [0034])(See Figure 6);
receive an input from a traditional video conference participant via an interactive interface presented on a device associated with the traditional video conference participant indicating a new virtual location within the virtual environment (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])( In some embodiments, the reconstructed views shown in FIGS. 3A-3C may be live video feeds of the user within the user's background. In other embodiments, the reconstructed views may be rendered avatars within a virtual environment. In embodiments where the user is using virtual reality accessories (such as head mounted displays, or HMDs), the reconstructed view may segment out the virtual reality accessories, and insert representations of the user's face, hands, or any other part of the user obstructed by virtual reality accessories. Such embodiments allow more natural interaction between participants, a main example being more natural eye-contact. See paragraph [0034])(A 3D view such as the view of FIG. 6 can be displayed on, for example, a two-dimensional computer monitor of a participant in the virtual meeting (in this example the participant represented by avatar 505). See paragraph [0039])(See figure 9, feedback loop from user wants to keep his/her view? No, capture user input) ( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]);
update the virtual environment to reflect the new virtual location of a digital representation of the traditional video conference participant based on the input (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])(See figure 9, Update configuration data if needed/allowed, modify the site model (Scale, position angle), render each site into a shared 3d representation) ( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]);
generate a two-dimensional video stream from a perspective of the new virtual location (See figure 9, Update configuration data if needed/allowed, modify the site model (Scale, position angle), render each site into a shared 3d representation) (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])( The avatars are free to be represented as sitting at the table, but are also able to move about the virtual lobby and into the reconstructed 3D view. In various embodiments, a user can choose to be displayed as an avatar, as a real-time reconstruction in their actual environment, or as a real-time reconstruction in a virtual environment. In some embodiments, the 3D view takes the form of any of the 3D reconstructions disclosed herein. See paragraph [0040])(See figure 6 and figure 8) ( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]) ( In some embodiments, the reconstructed views shown in FIGS. 3A-3C may be live video feeds of the user within the user's background. In other embodiments, the reconstructed views may be rendered avatars within a virtual environment. In embodiments where the user is using virtual reality accessories (such as head mounted displays, or HMDs), the reconstructed view may segment out the virtual reality accessories, and insert representations of the user's face, hands, or any other part of the user obstructed by virtual reality accessories. Such embodiments allow more natural interaction between participants, a main example being more natural eye-contact. See paragraph [0034]); and
transmit the two-dimensional video stream to the device associated with the traditional video conference participant (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])( The avatars are free to be represented as sitting at the table, but are also able to move about the virtual lobby and into the reconstructed 3D view. In various embodiments, a user can choose to be displayed as an avatar, as a real-time reconstruction in their actual environment, or as a real-time reconstruction in a virtual environment. In some embodiments, the 3D view takes the form of any of the 3D reconstructions disclosed herein. See paragraph [0040])(See figure 9, provide compiled view for each user)( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]).
Regarding claim 9, Valli teaches the system of claim 8, wherein the digital representation of the traditional video conference participant comprises a video of the traditional video conference participant ( The avatars are free to be represented as sitting at the table, but are also able to move about the virtual lobby and into the reconstructed 3D view. In various embodiments, a user can choose to be displayed as an avatar, as a real-time reconstruction in their actual environment, or as a real-time reconstruction in a virtual environment. In some embodiments, the 3D view takes the form of any of the 3D reconstructions disclosed herein. See paragraph [0040])( In some embodiments, the reconstructed views shown in FIGS. 3A-3C may be live video feeds of the user within the user's background. See paragraph [0034]).
Regarding claim 13, Valli teaches the system of claim 8, wherein the one or more processors are configured to execute instructions stored in the one or more memories to: move a digital representation associated with an AR/VR participant to another location based on an input received via an AR/VR controller associated with the AR/VR participant (a user interface 1030-such as a touchscreen, keyboard, or mouse, and a display 1035, such as virtual glasses, projectors, or 3D displays. See paragraph [0057])( In embodiments where the user is using virtual reality accessories (such as head mounted displays, or HMDs), the reconstructed view may segment out the virtual reality accessories, and insert representations of the user's face, hands, or any other part of the user obstructed by virtual reality accessories. Such embodiments allow more natural interaction between participants, a main example being more natural eye-contact. See paragraph [0034])(Touchscreen, keyboard or mouse are considered the controller) (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043]).
Regarding claim 14, Valli teaches The system of claim 8, wherein indicators are displayable in the virtual environment showing available new locations for the digital representation (Empty Virtual chair or other user with which to switch) ( A meeting table is shown in the center of the virtual meeting space, however any virtual objects may be rendered within the virtual meeting space such as virtual chairs, plants, paintings, wallpaper, windows, and any other virtual objects known to one of skill in the art. See paragraph [0037]) (See figure 6, users seated around a table at chairs).
Regarding claim 15, Valli teaches The system of claim 8, wherein the one or more processors are configured to execute instructions stored in the one or more memories to: display a top-down view of the virtual environment at the device associated with the traditional video conference participant (FIG. 5 depicts a top-down view of a virtual meeting space 500. See paragraph [0038]).
Regarding claim 16, Valli teaches One or more non-transitory computer readable media storing instructions operable to cause one or more processors to perform operations (Systems and methods are described that enable a 3D telepresence. See abstract)( FIG. 10 depicts an exemplary system, in accordance with an embodiment. The exemplary system 1000 is representative of a system capable of performing the disclosed methods. The components of the exemplary system 1000 include a control system 1005, a rendering system 1010, a processing system 1015, data storage 1020, a communication interface 1025, a user interface 1030-such as a touchscreen, keyboard, or mouse, and a display 1035, such as virtual glasses, projectors, or 3D displays. See paragraph [0057]) (The processing system 1015 may include a processor. The processor may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the components to operate in a wireless environment. The processor may be coupled to the communication interface, or a transceiver, which may be coupled to a transmit/receive element for communication with other networks. The various components, such as the processor and the transceiver, are depicted as separate components, but it will be appreciated that the processor and the transceiver may be integrated together in an electronic package or chip. See paragraph [0058]) comprising:
displaying digital representations of participants in a virtual environment, wherein the participants include augmented or virtual reality (AR/VR) participants and traditional video conference participants (In some embodiments, the reconstructed views shown in FIGS. 3A-3C may be live video feeds of the user within the user's background. See paragraph [0034])(See Figure 6);
receiving an input from a traditional video conference participant via an interactive interface presented on a device associated with the traditional video conference participant indicating a new virtual location within the virtual environment (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])( In some embodiments, the reconstructed views shown in FIGS. 3A-3C may be live video feeds of the user within the user's background. In other embodiments, the reconstructed views may be rendered avatars within a virtual environment. In embodiments where the user is using virtual reality accessories (such as head mounted displays, or HMDs), the reconstructed view may segment out the virtual reality accessories, and insert representations of the user's face, hands, or any other part of the user obstructed by virtual reality accessories. Such embodiments allow more natural interaction between participants, a main example being more natural eye-contact. See paragraph [0034])(A 3D view such as the view of FIG. 6 can be displayed on, for example, a two-dimensional computer monitor of a participant in the virtual meeting (in this example the participant represented by avatar 505). See paragraph [0039])(See figure 9, feedback loop from user wants to keep his/her view? No, capture user input) ( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]);
updating the virtual environment to reflect the new virtual location of a digital representation of the traditional video conference participant based on the input (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])(See figure 9, Update configuration data if needed/allowed, modify the site model (Scale, position angle), render each site into a shared 3d representation) ( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]);
generating a two-dimensional video stream from a perspective of the new virtual location (See figure 9, Update configuration data if needed/allowed, modify the site model (Scale, position angle), render each site into a shared 3d representation) (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])( The avatars are free to be represented as sitting at the table, but are also able to move about the virtual lobby and into the reconstructed 3D view. In various embodiments, a user can choose to be displayed as an avatar, as a real-time reconstruction in their actual environment, or as a real-time reconstruction in a virtual environment. In some embodiments, the 3D view takes the form of any of the 3D reconstructions disclosed herein. See paragraph [0040])(See figure 6 and figure 8) ( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]) ( In some embodiments, the reconstructed views shown in FIGS. 3A-3C may be live video feeds of the user within the user's background. In other embodiments, the reconstructed views may be rendered avatars within a virtual environment. In embodiments where the user is using virtual reality accessories (such as head mounted displays, or HMDs), the reconstructed view may segment out the virtual reality accessories, and insert representations of the user's face, hands, or any other part of the user obstructed by virtual reality accessories. Such embodiments allow more natural interaction between participants, a main example being more natural eye-contact. See paragraph [0034]); and
transmitting the two-dimensional video stream to the device associated with the traditional video conference participant (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043])( The avatars are free to be represented as sitting at the table, but are also able to move about the virtual lobby and into the reconstructed 3D view. In various embodiments, a user can choose to be displayed as an avatar, as a real-time reconstruction in their actual environment, or as a real-time reconstruction in a virtual environment. In some embodiments, the 3D view takes the form of any of the 3D reconstructions disclosed herein. See paragraph [0040])(See figure 9, provide compiled view for each user)( Using inputs from each site's 3D rendering and of the virtual model of the shared 3D space, or lobby, a synthetic lobby including avatars is rendered at step 910. At step 912, compiled view is provided to each user. Additional user inputs are captured and the configuration is updated if needed. The site model is modified to correct for appropriate scale, position, and angle. Configuration data is also shared for setup. See paragraph [0056]).
Regarding claim 17, Valli teaches the one or more non-transitory computer readable media of claim 16, wherein the operations further comprise: displaying a top-down view of the virtual environment at a device associated with one of the participants (FIG. 5 depicts a top-down view of a virtual meeting space 500. See paragraph [0038]).
Regarding claim 19, Valli teaches The one or more non-transitory computer readable media of claim 16, wherein the operations further comprise: receiving, via an AR/VR controller associated with an AR/VR participant, an input indicating to move a digital representation associated with the AR/VR participant to another location (a user interface 1030-such as a touchscreen, keyboard, or mouse, and a display 1035, such as virtual glasses, projectors, or 3D displays. See paragraph [0057])( In embodiments where the user is using virtual reality accessories (such as head mounted displays, or HMDs), the reconstructed view may segment out the virtual reality accessories, and insert representations of the user's face, hands, or any other part of the user obstructed by virtual reality accessories. Such embodiments allow more natural interaction between participants, a main example being more natural eye-contact. See paragraph [0034])(Touchscreen, keyboard or mouse are considered the controller) (The virtual viewpoint can be adjusted manually to give the appearance of eye-contact between two communicating participants, or positioning of virtual viewpoints can be assisted by a computer to provide the appearance of eye contact. When a user chooses to be represented by an avatar, the user can change his/her virtual viewpoint with interactions with a keyboard or mouse, or any other similar action. See paragraph [0043]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Valli (US 2017/0339372) (hereinafter referred to as Valli) in view of Valli (US 2019/0253667) (hereinafter referred to as Valli667).
Regarding claim 5, Valli teaches The method of claim 1, but is silent to further comprising: generating an audio stream synchronized with the two-dimensional video stream from the perspective of the new virtual location.
Valli667 teaches multi-channel audio capture for multi-view teleconference videos (Note that a number of possible embodiments are produced by different ways to implement audio transmission between participants, including those supporting multichannel audio for spatial perception. In general, the capturing and production of audio can apply the same spatial geometry principles as the capturing and production of video. See paragraph [0104]).
Valli and Valli667 both teach virtual teleconferences between multiple participants, and Valli667 teaches that the audio can provide spatial perception; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Valli with the spatial audio perception techniques of Valli667 such that each individual user could perceive where the audio from each other user was coming from in relation to their position.
Regarding claim 20, Valli teaches the one or more non-transitory computer readable media of claim 16, but is silent to wherein the operations further comprise: generating an audio stream from the perspective of the new virtual location; and transmitting the audio stream to an the device associated with the traditional video conference participant.
Valli667 teaches multi-channel audio capture for multi-view teleconference videos (Note that a number of possible embodiments are produced by different ways to implement audio transmission between participants, including those supporting multichannel audio for spatial perception. In general, the capturing and production of audio can apply the same spatial geometry principles as the capturing and production of video. See paragraph [0104]).
Valli and Valli667 both teach virtual teleconferences between multiple participants, and Valli667 teaches that the audio can provide spatial perception; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Valli with the spatial audio perception techniques of Valli667 such that each individual user could perceive where the audio from each other user was coming from in relation to their position.
Claims 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Valli (US 2017/0339372) (hereinafter referred to as Valli) in view of Ross et al. (US 2018/0331841) (hereinafter referred to as Ross).
Regarding claim 10, Valli teaches The system of claim 8, but is silent to wherein the input comprises a selection of the new virtual location from a predefined list of virtual locations.
Ross teaches a virtual meeting system which allows users to move to predefined locations (In one embodiment for only allowing certain users to move among predefined positions in the virtual environment, the one or more types of action includes moving to any position in the virtual environment, and the method comprises: determining a first position from among a first set of one or more predefined positions in the virtual environment at which the second user is to be located; setting the first position as the location of the second user in the virtual environment during a first period of time; providing a user device operated by the second user with images of the virtual environment that is in view from the first position; allowing the second user to move from the first position to a second position in the first set of one or more predefined positions in the virtual environment; and not allowing the second user to move to any other position in the virtual environment that is not a position in the first set of predefined positions. See paragraph [0018]).
Valli and Ross both teach virtual meeting systems, and Ross teaches that the use of predefined locations can help reduce bandwidth requirements; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Valli with the predefined locations technique of Ross such that the system could reduce the required bandwidth for virtual meetings in which users would like to switch locations.
Regarding claim 18, Valli teaches the one or more non-transitory computer readable media of claim 16, but is silent to wherein receiving the input from the traditional video conference participant via the interactive interface comprises: receiving the input as a selection from a predefined list of virtual locations.
Ross teaches a virtual meeting system which allows users to move to predefined locations (In one embodiment for only allowing certain users to move among predefined positions in the virtual environment, the one or more types of action includes moving to any position in the virtual environment, and the method comprises: determining a first position from among a first set of one or more predefined positions in the virtual environment at which the second user is to be located; setting the first position as the location of the second user in the virtual environment during a first period of time; providing a user device operated by the second user with images of the virtual environment that is in view from the first position; allowing the second user to move from the first position to a second position in the first set of one or more predefined positions in the virtual environment; and not allowing the second user to move to any other position in the virtual environment that is not a position in the first set of predefined positions. See paragraph [0018]).
Valli and Ross both teach virtual meeting systems, and Ross teaches that the use of predefined locations can help reduce bandwidth requirements; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Valli with the predefined locations technique of Ross such that the system could reduce the required bandwidth for virtual meetings in which users would like to switch locations.
Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Valli (US 2017/0339372) (hereinafter referred to as Valli) in view of Kies et al. (US 2021/0084259) (hereinafter referred to as Kies).
Regarding claim 11, Valli teaches The system of claim 8, but is silent to wherein the virtual environment includes an interactive whiteboard accessible by at least some of AR/VR participants and at least some of the traditional video conference participants.
Kies teaches the ability to virtually present remote participants to local users to participate together in a game or utilize whiteboard equipment (Various embodiments are equally applicable to other applications in which virtual reality is used to enable local and remote participants, as well as virtual content, to interact as if physically located in the same environment, such as games involving a common game environment (e.g., gameboard, game scene, etc.), a conference table for a meeting, an equipment control panel for training sessions involving equipment, which may be physically present or rendered virtually, training sessions among teams of decision makers, meetings in control rooms or war rooms, etc. See paragraph [0056])( With reference to FIG. 3A, the co-located participants 304a-304d have selected their seats and/or have arranged themselves around three sides of the game table 206 so that they can communicate with one another and view a gameboard, whiteboard, TV, monitor, or presentation 302. Each of the co-located participants 304a-304d is in a slightly different physical location relative to the environment and game table 206, and there is a person-sized gap 306 between two co-located participant 304b and co-located participant 304c. See paragraph [0061]).
Valli and Kies both teach virtual meetings, and Kies teaches that the remote participants can be visualized with local participants and interact as if they are in the local environment; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Valli with the interaction techniques of Kies such that the user would be able to interact in various ways and with various devices regardless of their location.
Regarding claim 12, Valli teaches The system of claim 8, but is silent to wherein the two-dimensional video stream includes annotations made by the traditional video conference participant.
Kies teaches the ability to virtually present remote participants to local users to participate together in a game or utilize whiteboard equipment or other training equipment and displays (Various embodiments are equally applicable to other applications in which virtual reality is used to enable local and remote participants, as well as virtual content, to interact as if physically located in the same environment, such as games involving a common game environment (e.g., gameboard, game scene, etc.), a conference table for a meeting, an equipment control panel for training sessions involving equipment, which may be physically present or rendered virtually, training sessions among teams of decision makers, meetings in control rooms or war rooms, etc. See paragraph [0056])( With reference to FIG. 3A, the co-located participants 304a-304d have selected their seats and/or have arranged themselves around three sides of the game table 206 so that they can communicate with one another and view a gameboard, whiteboard, TV, monitor, or presentation 302. Each of the co-located participants 304a-304d is in a slightly different physical location relative to the environment and game table 206, and there is a person-sized gap 306 between two co-located participant 304b and co-located participant 304c. See paragraph [0061]).
Valli and Kies both teach virtual meetings, and Kies teaches that the remote participants can be visualized with local participants and interact as if they are in the local environment; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Valli with the interaction techniques of Kies such that the user would be able to interact in various ways and with various devices regardless of their location.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS R WILSON, whose telephone number is (571) 272-0936. The examiner can normally be reached M-F, 7:30 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS R WILSON/Primary Examiner, Art Unit 2611