DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
1. This action is responsive to an amendment filed on 01/16/2026. Claims 1-20 are pending.
Response to Arguments
2. Applicant's arguments filed in the 01/16/2026 remarks have been fully considered but are moot in view of the new ground(s) of rejection set forth in this Office action, which is deemed appropriate to address the amended claims.
Specification
3. The disclosure is objected to because of the following informalities: the subject matter newly claimed in the amendment filed 01/16/2026 is not disclosed in the specification.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
4. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite. Regarding claim 1, the amended limitation recites "... having spatial geometry ...". The specification does not disclose the newly claimed language and provides no further description of its features or function, and the drawings do not show any feature corresponding to the new claimed language. Because no features or figures describing the claimed language are identified, the examiner will interpret the claim under its broadest reasonable interpretation in order to apply prior art to the new claimed amendment. Applicant is asked to provide a clearer limitation so that no further rejections under 35 U.S.C. § 112 are necessary.
Claims 9 and 15 are rejected under the same rationale as claim 1.
Claims 2-8 are rejected under the same rationale as they are dependent on claim 1.
Claims 10-14 are rejected under the same rationale as they are dependent on claim 9.
Claims 16-20 are rejected under the same rationale as they are dependent on claim 15.
Claim Rejections - 35 USC § 103
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
6. Claims 1, 9, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US 2024/0094863) in view of Chand et al. (US 2024/0104819), and further in view of Valli et al. (US 2020/0099891).
Regarding claim 1, Smith teaches a method comprising: receiving a video stream including a two-dimensional video of a remote video conference participant; rendering a three-dimensional environment for the remote video conference participant based on an analysis of a physical environment of the remote video conference participant (see fig. 2-3, ¶ 0026-0028, 0030, 0035-0037. The device captures the physical environment and renders it as a 3D environment. The avatar of the user is displayed (inserted) into the 3D environment.); and inserting the two-dimensional video of the remote video conference participant into the three-dimensional environment, whereby the two-dimensional video of the remote video conference participant appears to a local video conference participant as if it were presented in three-dimensional video (see fig. 2-3, ¶ 0026-0028, 0030, 0035-0037. During the session, the users can see each other in the 3D environment. The avatar of each user is displayed in the 3D environment of the session. The 3D environment relates to physical environments that correspond to each user's physical location.).
Smith discloses a three-dimensional environment of a physical environment that is relayed between users in a communication session. However, Smith does not expressly disclose having spatial geometry and rendered from a viewpoint within the environment, or updating the three-dimensional environment based on changes in the two-dimensional video of the remote video conference participant.
Chand teaches having spatial geometry and rendered from a viewpoint within the environment, and updating the three-dimensional environment based on changes in the two-dimensional video of the remote video conference participant (see ¶ 0065, 0203, 0213-0214. In a virtual reality environment, a view of a three-dimensional environment is visible to a user, typically via one or more displays. A viewpoint of a user determines what content is visible in the viewport; a viewpoint generally specifies a location and a direction relative to the three-dimensional environment, and as the viewpoint shifts, the view of the three-dimensional environment will also shift in the viewport.).
Chand discloses a 3D environment that can be a physical environment, consistent with Smith. Further, spatial geometry, i.e., spatial position and surface area within a 3D environment, is taught by Chand.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith to incorporate a viewpoint in a 3D environment for updating images as a user moves in the environment. This modification provides updated positioning of a user in the 3D environment.
Valli discloses spatial geometry for a 3D environment (see ¶ 0054, 0089-0092, 0110).
The combination of Valli with Chand and Smith provides the spatial geometry for the environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith and Chand to incorporate spatial geometry for a 3D environment. This modification provides spatial geometry for the environment.
Regarding claim 9, Smith teaches a computing system comprising: a processor; and a memory storing instructions that, when executed by the processor, configure the system to: receive a video stream including a two-dimensional video of a remote video conference participant; render a three-dimensional environment for the remote video conference participant based on an analysis of a physical environment of the remote videoconference participant (see fig. 2-3, ¶ 0026-0028, 0030, 0035-0037. The device captures the physical environment and renders it as a 3D environment. The avatar of the user is displayed (inserted) into the 3D environment.); and insert the two-dimensional video of the remote video conference participant into the three-dimensional environment, whereby the two-dimensional video of the remote video conference participant appears to a local video conference participant as if it were presented in three-dimensional video (see fig. 2-3, ¶ 0026-0028, 0030, 0035-0037. During the session, the users can see each other in the 3D environment. The avatar of each user is displayed in the 3D environment of the session. The 3D environment relates to physical environments that correspond to each user's physical location.).
Smith discloses a three-dimensional environment of a physical environment that is relayed between users in a communication session. However, Smith does not expressly disclose having spatial geometry and rendered from a viewpoint within the environment, or updating the three-dimensional environment based on changes in the two-dimensional video of the remote video conference participant.
Chand teaches having spatial geometry and rendered from a viewpoint within the environment, and updating the three-dimensional environment based on changes in the two-dimensional video of the remote video conference participant (see ¶ 0065, 0203, 0213-0214. In a virtual reality environment, a view of a three-dimensional environment is visible to a user, typically via one or more displays. A viewpoint of a user determines what content is visible in the viewport; a viewpoint generally specifies a location and a direction relative to the three-dimensional environment, and as the viewpoint shifts, the view of the three-dimensional environment will also shift in the viewport.).
Chand discloses a 3D environment that can be a physical environment, consistent with Smith. Further, spatial geometry, i.e., spatial position and surface area within a 3D environment, is taught by Chand.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith to incorporate a viewpoint in a 3D environment for updating images as a user moves in the environment. This modification provides updated positioning of a user in the 3D environment.
Valli discloses spatial geometry for a 3D environment (see ¶ 0054, 0089-0092, 0110).
The combination of Valli with Chand and Smith provides the spatial geometry for the environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith and Chand to incorporate spatial geometry for a 3D environment. This modification provides spatial geometry for the environment.
Regarding claim 15, Smith teaches a non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to: receive a video stream including a two-dimensional video of a remote video conference participant; render a three-dimensional environment for the remote video conference participant based on an analysis of a physical environment of the remote video conference participant (see fig. 2-3, ¶ 0026-0028, 0030, 0035-0037. The device captures the physical environment and renders it as a 3D environment. The avatar of the user is displayed (inserted) into the 3D environment.); and insert the two-dimensional video of the remote video conference participant into the three-dimensional environment, whereby the two-dimensional video of the remote video conference participant appears to a local video conference participant as if it were presented in three-dimensional video (see fig. 2-3, ¶ 0026-0028, 0030, 0035-0037. During the session, the users can see each other in the 3D environment. The avatar of each user is displayed in the 3D environment of the session. The 3D environment relates to physical environments that correspond to each user's physical location.).
Smith discloses a three-dimensional environment of a physical environment that is relayed between users in a communication session. However, Smith does not expressly disclose having spatial geometry and rendered from a viewpoint within the environment, or updating the three-dimensional environment based on changes in the two-dimensional video of the remote video conference participant.
Chand teaches having spatial geometry and rendered from a viewpoint within the environment, and updating the three-dimensional environment based on changes in the two-dimensional video of the remote video conference participant (see ¶ 0065, 0203, 0213-0214. In a virtual reality environment, a view of a three-dimensional environment is visible to a user, typically via one or more displays. A viewpoint of a user determines what content is visible in the viewport; a viewpoint generally specifies a location and a direction relative to the three-dimensional environment, and as the viewpoint shifts, the view of the three-dimensional environment will also shift in the viewport.).
Chand discloses a 3D environment that can be a physical environment, consistent with Smith. Further, spatial geometry, i.e., spatial position and surface area within a 3D environment, is taught by Chand.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith to incorporate a viewpoint in a 3D environment for updating images as a user moves in the environment. This modification provides updated positioning of a user in the 3D environment.
Valli discloses spatial geometry for a 3D environment (see ¶ 0054, 0089-0092, 0110).
The combination of Valli with Chand and Smith provides the spatial geometry for the environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith and Chand to incorporate spatial geometry for a 3D environment. This modification provides spatial geometry for the environment.
7. Claims 2-7, 10-13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US 2024/0094863) in view of Chand et al. (US 2024/0104819), further in view of Valli et al. (US 2020/0099891), and further in view of Barbosa da Silva et al. (US 2023/0300292).
Regarding claim 2, Smith, Chand and Valli do not teach the method of claim 1, wherein the video stream including the two-dimensional video of the remote video conference participant has been processed to remove the physical environment in a background in the two-dimensional video of the remote video conference participant.
Barbosa teaches wherein the video stream including the two-dimensional video of the remote video conference participant has been processed to remove the physical environment in a background in the two-dimensional video of the remote video conference participant (see fig. 4, 14, ¶ 0102-0104, 0149-0153. The system removes the background of the captured environment and renders a 3D AR background in the conferencing session.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate removing the background and replacing it with a 3D space. This modification provides for removing the real space and rendering a 3D environment.
Regarding claim 3, Smith, Chand and Valli do not teach the method of claim 1, further comprising: identifying a point of view of the local video conference participant; and displaying the three-dimensional environment relative to the point of view of the local video conference participant relative to a two-dimensional video display.
Barbosa teaches identifying a point of view of the local video conference participant; and displaying the three-dimensional environment relative to the point of view of the local video conference participant relative to a two-dimensional video display (see fig. 6-7, 14, ¶ 0004, 0117-0119. The shared AR video call system can enable a client device to track movement of the client device (and/or movement of a participant) and update a rendering of an AR background environment based on the tracked movement.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate tracking the point of view in the 3D environment. The modification provides for tracking users in the 3D environment.
Regarding claim 4, Smith, Chand and Valli do not teach the method of claim 3, further comprising: tracking a position of the local video conference participant in a physical environment before the two-dimensional video display; and translating the three-dimensional environment in response to a change in the position of a local video conference participant device in the physical environment.
Barbosa teaches tracking a position of the local video conference participant in a physical environment before the two-dimensional video display; and translating the three-dimensional environment in response to a change in the position of a local video conference participant device in the physical environment (see fig. 6-7, 14, ¶ 0004, 0117-0119. The shared AR video call system can enable a client device to track movement of the client device (and/or movement of a participant) and update a rendering of an AR background environment based on the tracked movement.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate tracking the point of view in the 3D environment. The modification provides for tracking users in the 3D environment.
Regarding claim 5, Smith, Chand and Valli do not teach the method of claim 1, wherein the three-dimensional environment includes at least one animated element, wherein the three-dimensional environment appears as live video.
Barbosa teaches wherein the three-dimensional environment includes at least one animated element, wherein the three-dimensional environment appears as live video (see fig. 14, ¶ 0152. The client device emulates a television within the shared AR space to render a shared video stream (1412) during the video call. The shared video stream (1412) can display videos, such as, but not limited to, movies, shows, live sport events, live news, user generated content (e.g., family videos, vacation videos), and/or videos relevant to one or more participant users of the video call.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate presenting a live video in the 3D environment. The modification provides a live video feed in the 3D environment.
Regarding claim 6, Smith, Chand and Valli do not teach the method of claim 1, further comprising: analyzing the two-dimensional video of the remote video conference participant for at least one attribute; and adjusting the three-dimensional environment based on the at least one attribute.
Barbosa teaches analyzing the two-dimensional video of the remote video conference participant for at least one attribute; and adjusting the three-dimensional environment based on the at least one attribute (see fig. 14-16C, ¶ 0044. Shared AR video call system can realistically insert participants of a video call within AR spaces that change according to the movement of a participant client device during the video call. Additionally, the shared AR video call system can facilitate an efficient synchronization of the 360 AR space across multiple client devices by enabling the client devices to share updates to the 360 AR space. The attribute can be the movement of the participants across the AR space.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate the attribute being a participant's movement in the AR space, which can provide for syncing across the AR space. The modification provides for matching the participants' movement across the 3D environment.
Regarding claim 7, Smith, Chand and Valli do not teach the method of claim 1, wherein the three-dimensional environment is constructed from a plurality of layers.
Barbosa teaches wherein the three-dimensional environment is constructed from a plurality of layers (see ¶ 0037, 0062. The shared AR video call system can enable a client device to utilize layering to render an AR background environment and an avatar for a participant captured on a video call.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate layering the video in 3D format. The modification provides different layers in order to render the 3D environment.
Regarding claim 10, Smith, Chand and Valli do not teach the computing system of claim 9, wherein the video stream including the two-dimensional video of the remote video conference participant has been processed to remove a background in the two-dimensional video of the remote video conference participant.
Barbosa teaches wherein the video stream including the two-dimensional video of the remote video conference participant has been processed to remove a background in the two-dimensional video of the remote video conference participant (see fig. 4, 14, ¶ 0102-0104, 0149-0153. The system removes the background of the captured environment and renders a 3D AR background in the conferencing session.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate removing the background and replacing it with a 3D space. This modification provides for removing the real space and rendering a 3D environment.
Regarding claim 11, Smith, Chand and Valli do not teach the computing system of claim 9, wherein the instructions further configure the system to: identify a point of view of the local video conference participant; and display the three-dimensional environment relative to the point of view of the local video conference participant relative to a two-dimensional video display.
Barbosa teaches wherein the instructions further configure the system to: identify a point of view of the local video conference participant; and display the three-dimensional environment relative to the point of view of the local video conference participant relative to a two-dimensional video display (see fig. 6-7, 14, ¶ 0004, 0117-0119. The shared AR video call system can enable a client device to track movement of the client device (and/or movement of a participant) and update a rendering of an AR background environment based on the tracked movement.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate tracking the point of view in the 3D environment. The modification provides for tracking users in the 3D environment.
Regarding claim 12, Smith, Chand and Valli do not teach the computing system of claim 11, wherein the instructions further configure the system to: track a position of the local video conference participant in a physical environment before the two-dimensional video display; and translate the three-dimensional environment in response to a change in the position of a local video conference participant device in the physical environment.
Barbosa teaches wherein the instructions further configure the system to: track a position of the local video conference participant in a physical environment before the two-dimensional video display; and translate the three-dimensional environment in response to a change in the position of a local video conference participant device in the physical environment (see fig. 6-7, 14, ¶ 0004, 0117-0119. The shared AR video call system can enable a client device to track movement of the client device (and/or movement of a participant) and update a rendering of an AR background environment based on the tracked movement.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate tracking the point of view in the 3D environment. The modification provides for tracking users in the 3D environment.
Regarding claim 13, Smith, Chand and Valli do not teach the computing system of claim 9, wherein the instructions further configure the system to: analyze the two-dimensional video of the remote video conference participant for at least one attribute; and adjust the three-dimensional environment based on the at least one attribute.
Barbosa teaches wherein the instructions further configure the system to: analyze the two-dimensional video of the remote video conference participant for at least one attribute; and adjust the three-dimensional environment based on the at least one attribute (see fig. 14-16C, ¶ 0044. Shared AR video call system can realistically insert participants of a video call within AR spaces that change according to the movement of a participant client device during the video call. Additionally, the shared AR video call system can facilitate an efficient synchronization of the 360 AR space across multiple client devices by enabling the client devices to share updates to the 360 AR space. The attribute can be the movement of the participants across the AR space.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate the attribute being a participant's movement in the AR space, which can provide for syncing across the AR space. The modification provides for matching the participants' movement across the 3D environment.
Regarding claim 16, Smith, Chand and Valli do not teach the non-transitory computer-readable storage medium of claim 15, wherein the video stream including the two-dimensional video of the remote video conference participant has been processed to remove a background in the two-dimensional video of the remote video conference participant.
Barbosa teaches wherein the video stream including the two-dimensional video of the remote video conference participant has been processed to remove a background in the two-dimensional video of the remote video conference participant (see fig. 4, 14, ¶ 0102-0104, 0149-0153. The system removes the background of the captured environment and renders a 3D AR background in the conferencing session.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate removing the background and replacing it with a 3D space. This modification provides for removing the real space and rendering a 3D environment.
Regarding claim 17, Smith, Chand and Valli do not teach the non-transitory computer-readable storage medium of claim 15, wherein the instructions further configure the computer to: identify a point of view of the local video conference participant; and display the three-dimensional environment relative to the point of view of the local video conference participant relative to a two-dimensional video display.
Barbosa teaches wherein the instructions further configure the computer to: identify a point of view of the local video conference participant; and display the three-dimensional environment relative to the point of view of the local video conference participant relative to a two-dimensional video display (see fig. 6-7, 14, ¶ 0004, 0117-0119. The shared AR video call system can enable a client device to track movement of the client device (and/or movement of a participant) and update a rendering of an AR background environment based on the tracked movement.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate tracking the point of view in the 3D environment. The modification provides for tracking users in the 3D environment.
Regarding claim 18, Smith, Chand and Valli do not teach the non-transitory computer-readable storage medium of claim 17, wherein the instructions further configure the computer to: track a position of the local video conference participant in a physical environment before the two-dimensional video display; and translate the three-dimensional environment in response to a change in the position of a local video conference participant device in the physical environment.
Barbosa teaches wherein the instructions further configure the computer to: track a position of the local video conference participant in a physical environment before the two-dimensional video display; and translate the three-dimensional environment in response to a change in the position of a local video conference participant device in the physical environment (see fig. 6-7, 14, ¶ 0004, 0117-0119. The shared AR video call system can enable a client device to track movement of the client device (and/or movement of a participant) and update a rendering of an AR background environment based on the tracked movement.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate tracking the point of view in the 3D environment. The modification provides for tracking users in the 3D environment.
Regarding claim 19, Smith, Chand and Valli do not teach the non-transitory computer-readable storage medium of claim 15, wherein the three-dimensional environment includes at least one animated element, wherein the three-dimensional environment appears as live video.
Barbosa teaches wherein the three-dimensional environment includes at least one animated element, wherein the three-dimensional environment appears as live video (see fig. 14, ¶ 0152. The client device emulates a television within the shared AR space to render a shared video stream (1412) during the video call. The shared video stream (1412) can display videos, such as, but not limited to, movies, shows, live sport events, live news, user generated content (e.g., family videos, vacation videos), and/or videos relevant to one or more participant users of the video call.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate presenting a live video in the 3D environment. The modification provides a live video feed in the 3D environment.
Regarding claim 20, Smith, Chand and Valli do not teach the non-transitory computer-readable storage medium of claim 15, wherein the instructions further configure the computer to: analyze the two-dimensional video of the remote video conference participant for at least one attribute; and adjust the three-dimensional environment based on the at least one attribute.
Barbosa teaches wherein the instructions further configure the computer to: analyze the two-dimensional video of the remote video conference participant for at least one attribute; and adjust the three-dimensional environment based on the at least one attribute (see fig. 14-16C, ¶ 0044. Shared AR video call system can realistically insert participants of a video call within AR spaces that change according to the movement of a participant client device during the video call. Additionally, the shared AR video call system can facilitate an efficient synchronization of the 360 AR space across multiple client devices by enabling the client devices to share updates to the 360 AR space. The attribute can be the movement of the participants across the AR space.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate participant movement in the AR space as the analyzed attribute, which provides for synchronization across the AR space. The modification provides matching of the participants' movement across the 3D environment.
8. Claims 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US 2024/0094863) in view of Chand et al. (US 2024/0104819), in further view of Valli et al. (US 2020/0099891), and in further view of Dempski et al. (US 2004/0155902).
Regarding claim 8, Smith, Chand and Valli do not teach the method of claim 1, further comprising: receiving a three-dimensional model of an exhibit; displaying the three-dimensional model of the exhibit in the three-dimensional environment at a location in a foreground relative to the two-dimensional video of the remote video conference participant; receiving inputs effective to manipulate the three-dimensional model of the exhibit; and rotating and translating the three-dimensional model of the exhibit in the three-dimensional environment responsive to the inputs effective to manipulate the three-dimensional model of the exhibit.
Dempski teaches receiving a three-dimensional model of an exhibit; displaying the three-dimensional model of the exhibit in the three-dimensional environment at a location in a foreground relative to the two-dimensional video of the remote video conference participant; receiving inputs effective to manipulate the three-dimensional model of the exhibit; and rotating and translating the three-dimensional model of the exhibit in the three-dimensional environment responsive to the inputs effective to manipulate the three-dimensional model of the exhibit (see fig. 1-2, ¶ 0005, 0016-0025. A video conferencing system includes display monitors, video cameras and touch-screen input devices connected to computer processing systems and the cameras and displays broadcast a video image for display at a remote location. The computer for controlling the image processing has software suitable for generating three-dimensional images superimposed or overlying the video broadcast image from the video cameras and a three-dimensional image is regenerated at the remote locations that corresponds to movement of a real object. This image appears as a virtual object in the plane of the monitor that can be manipulated in response to a participant at any location touching the screen near the object to "grab" and move the object.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate a manipulable virtual object in the 3D space, as taught by Dempski. The modification allows a user to manipulate a 3D object in the virtual space.
Regarding claim 14, Smith, Chand and Valli do not teach the computing system of claim 9, wherein the instructions further configure the system to: receive a three-dimensional model of an exhibit; display the three-dimensional model of the exhibit in the three-dimensional environment at a location in a foreground relative to the two-dimensional video of the remote video conference participant; receive inputs effective to manipulate the three-dimensional model of the exhibit; and rotate and translate the three-dimensional model of the exhibit in the three-dimensional environment responsive to the inputs effective to manipulate the three-dimensional model of the exhibit.
Dempski teaches receiving a three-dimensional model of an exhibit; displaying the three-dimensional model of the exhibit in the three-dimensional environment at a location in a foreground relative to the two-dimensional video of the remote video conference participant; receiving inputs effective to manipulate the three-dimensional model of the exhibit; and rotating and translating the three-dimensional model of the exhibit in the three-dimensional environment responsive to the inputs effective to manipulate the three-dimensional model of the exhibit (see fig. 1-2, ¶ 0005, 0016-0025. A video conferencing system includes display monitors, video cameras and touch-screen input devices connected to computer processing systems and the cameras and displays broadcast a video image for display at a remote location. The computer for controlling the image processing has software suitable for generating three-dimensional images superimposed or overlying the video broadcast image from the video cameras and a three-dimensional image is regenerated at the remote locations that corresponds to movement of a real object. This image appears as a virtual object in the plane of the monitor that can be manipulated in response to a participant at any location touching the screen near the object to "grab" and move the object.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Smith, Chand and Valli to incorporate a manipulable virtual object in the 3D space, as taught by Dempski. The modification allows a user to manipulate a 3D object in the virtual space.
Conclusion
9. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASSAD MOHAMMED whose telephone number is (571)270-7253. The examiner can normally be reached 9:00AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASSAD MOHAMMED/Examiner, Art Unit 2691
/DUC NGUYEN/Supervisory Patent Examiner, Art Unit 2691