Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/12/2025 has been entered.
Response to Amendment
This action is responsive to applicant’s amendments and remarks received on 11/12/2025.
Response to Arguments
Applicant’s arguments with respect to claims 1, 3-9, and 11-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-6, 8-9, 11-13, 15-18, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Neeter (US 20200005538 A1), hereinafter Neeter, in view of Lee et al. (US 20180101966 A1), hereinafter Lee.
Regarding claim 1, Neeter teaches A method comprising: receiving, at a client device, data indicating a 3D scene associated with a source device, the data comprising scene data and tracking data (Para. 45 see "the remote client virtual reality device 140 updates the remote MR scene 145 based on the updated real scene MR scene model 135". Para. 46 and Figs. 2A-2G see "the user interface 142 presents the user 110 with the data captured by the client virtual reality device 125 at the real scene 130, which may include, for example, images, 360 degree videos, 3D scans, or audio". Paras. 10 and 55 disclose enactment models which may record and track position, orientation, scale, etc. of scene objects to simulate interaction with a scene in motion as a function of time, enabling a user in a virtual environment to play back the recorded enactment. (Examiner Note: the "client virtual reality device" is the source device; the "remote client virtual reality device" is the client device.));

wherein the data is received by the client device prior to the 3D scene being rendered (Para. 45 see "the remote client virtual reality device 140 creates the remote MR scene 145 based on the real scene MR scene model 135." Para. 46 see "the real scene 130 is recreated in the remote MR scene 145". (Examiner Note: the real scene data must be received before the remote client virtual reality device can recreate the scene.));

based on the received data, determining, at the client device, a set of view parameters for rendering the 3D scene at a client device (Para. 45 see "the client virtual reality device 125 configures the VRE operable from the real scene 130 with the real scene MR scene model 135 generated from the real scene model based on the 3D geometry of the real scene 130." Para. 62 see "the initial data set may be created/collected by a client while operating at a remote location. In an illustrative example, the initial data set may include a scene's 3D geometry data." Para. 176 see "The goal in both cases is to generate a 3D representation of the scene that can be transmitted to the central location. The availability of a 3D model would enable the expert to manipulate it in the VRE, attach annotations and simulate the necessary operations to the technician more effectively than using a 2D representation for the same purposes." Para. 52 discloses using sensor/camera pose data. Paras. 122-123 disclose tracking a path and state of a camera over time and rendering the frames to a surveillance screen using an enactment. (Examiner Note: the real scene model from the source device includes 3D data which is received and rendered by the remote client virtual reality device by mapping the real scene to the remote scene as described in paras. 45-46.));

and generating a rendering of the 3D scene according to the set of view parameters (Para. 45 see "the remote client virtual reality device 140 creates the remote MR scene 145 based on the real scene MR scene model 135." Para. 46 see "the real scene 130 is recreated in the remote MR scene 145". Paras. 163 and 178 disclose rendering based on 3D data.).
Neeter does not teach and responsive to user input, the set of view parameters allowing the client device to render the 3D scene from a perspective different than a perspective of the source device; resulting in the rendering having the perspective that is different from the perspective of the source device.
However, Lee teaches and responsive to user input (Para. 6 see "the technology described herein can be used to capture a scene (including objects in the scene) of a first location as one or more 3D models, transfer the 3D model(s) in real time to a second location that is remote from the first location, and then render viewing images of the 3D model from a different viewing perspective using the pose of a viewing element." Para. 52 see "the viewing device 112 can be used to ‘walk around’ the virtual scene which is a true 3D copy of the first location (such as via a VR viewing device)." (Examiner Note: the user's walking around is the input to which the device responds.));

the set of view parameters allowing the client device to render the 3D scene from a perspective different than a perspective of the source device (Para. 52 see "the viewing device 112 is not tied to the original video capture perspective of the sensor 103 because the complete 3D model is recreated at the viewing device 112. Therefore, the viewing device (e.g., the CPU 114 and GPU 116) at the second location can manipulate the 3D model(s) locally in order to produce a completely independent perspective of the model(s) than what is being captured by the sensor 103 at the first location." Para. 60 see "the headset can render a different viewpoint of the scene and objects in the scene from that being captured by the sensor device. As such, the viewer at the remote location can traverse the scene and view the action from a completely independent perspective from the sensor that is viewing the scene locally providing an immersive and unique experience for the viewer.");

and resulting in the rendering having the perspective that is different from the perspective of the source device (id.; see Paras. 52 and 60 as quoted above).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Neeter to incorporate the teachings of Lee, determining the set of view parameters at the client device in response to user input and allowing the client device to render the scene from a different perspective. Doing so would predictably improve the user experience by automatically responding to the user's input when determining view parameters, creating a seamless and immersive experience. Additionally, it would predictably improve the user experience by allowing the user to view the scene from a perspective different from that of the source device, giving the user a feeling of independence rather than that of a mere observer in the 3D environment.
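For illustration only, the combined flow as mapped above (receive scene and tracking data, determine view parameters from user input at the client, render from an independent perspective) can be sketched as follows. All identifiers are hypothetical; neither Neeter nor Lee discloses source code, and this sketch forms no part of the rejection.

    # Hypothetical sketch of the mapped claim 1 flow; not from either reference.
    from dataclasses import dataclass

    @dataclass
    class ViewParams:
        position: tuple     # virtual camera position in scene coordinates
        direction: tuple    # unit view direction
        fov_degrees: float  # field of view

    def determine_view_params(tracking_data, user_input):
        # User input (e.g., "walking around" per Lee para. 52) decouples the
        # client's viewpoint from the source device's tracked pose.
        if user_input is not None:
            return ViewParams(user_input["position"], user_input["direction"], 90.0)
        # Otherwise fall back to the source pose carried in the tracking data.
        return ViewParams(tracking_data["position"], tracking_data["direction"], 90.0)

    def render_scene(scene_data, params):
        # Placeholder for rasterizing the received 3D model from `params`.
        return {"scene": scene_data, "viewpoint": params}

    def handle_incoming(data, user_input=None):
        # Scene data and tracking data arrive before any rendering occurs.
        params = determine_view_params(data["tracking"], user_input)
        return render_scene(data["scene"], params)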
Regarding claim 3, Neeter in view of Lee teaches The method of claim 1.
In addition, Neeter teaches wherein determining the set of view parameters comprises receiving virtual camera data associated with the client device. (Para. 52 discloses using the relative position of a virtual camera to provide the camera pose.).
Regarding claim 4, Neeter in view of Lee teaches The method of claim 1.
In addition, Neeter teaches wherein the method further comprises: receiving tracking data associated with the source device (Para. 52 discloses using sensor tracking information); and using the tracking data to determine the set of view parameters (Para. 52 discloses using tracking information to provide the camera pose).
Regarding claim 5, Neeter in view of Lee teaches The method of claim 1.
In addition, Neeter teaches wherein generating the rendering is performed by the client device. (Para. 45 see "the client virtual reality device 125 configures the VRE operable from the real scene 130 with the real scene MR scene model 135 generated from the real scene model based on the 3D geometry of the real scene 130". Paras. 163 and 178 disclose rendering based on 3D data.).
Regarding claim 6, Neeter in view of Lee teaches The method of claim 1.
In addition, Neeter teaches wherein the 3D scene associated with the source device is mapped to a first physical environment and the rendering is mapped to a second physical environment. (Paras. 50-51 disclose using sensors to create a 3D MR scene model; points/features are used to align the remote and HQ scene models in 3D space. Para. 130 discloses connecting two physical locations into a shared virtual environment.).
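For illustration of the alignment concept cited from Neeter paras. 50-51 (registering two scene models via shared points/features), a generic rigid-alignment sketch is given below under the assumption that corresponding 3D feature points are available in both frames. This is standard geometry shown for context only; it is not code from the reference.

    # Generic Kabsch alignment: maps points expressed in the source scene's
    # frame into the client's frame, given corresponding feature points.
    import numpy as np

    def rigid_align(src_pts, dst_pts):
        """Return rotation R and translation t with dst ~= src @ R.T + t."""
        src_c = src_pts - src_pts.mean(axis=0)
        dst_c = dst_pts - dst_pts.mean(axis=0)
        H = src_c.T @ dst_c                        # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_pts.mean(axis=0) - src_pts.mean(axis=0) @ R.T
        return R, t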
Regarding claim 8, Neeter in view of Lee teaches The method of claim 1.
In addition, Neeter teaches wherein the data indicating the 3D scene associated with the source device is generated using at least a virtual reality device. (Para. 46 and Figs. 2A-2G see "the user interface 142 presents the user 110 with the data captured by the client virtual reality device 125 at the real scene 130, which may include, for example, images, 360 degree videos, 3D scans, or audio".).
Regarding claim 9, Neeter teaches A system comprising: one or more processors (Para. 74 discloses computer-executable instructions, program memory, and a processor to execute); and one or more computer storage hardware devices storing computer-usable instructions that when used by the one or more processors, cause the one or more processors to: (Para. 74 and Fig. 30 see "the block diagram of the exemplary client virtual reality device 125 includes processor 3005 and memory 3010". Para. 205 further discloses computing devices.);

at a client device, receive data indicating a 3D scene associated with a source device, the data comprising scene data and tracking data (Para. 45 see "the remote client virtual reality device 140 updates the remote MR scene 145 based on the updated real scene MR scene model 135". Para. 46 and Figs. 2A-2G see "the user interface 142 presents the user 110 with the data captured by the client virtual reality device 125 at the real scene 130, which may include, for example, images, 360 degree videos, 3D scans, or audio". Paras. 10 and 55 disclose enactment models which may record and track position, orientation, scale, etc. of scene objects to simulate interaction with a scene in motion as a function of time, enabling a user in a virtual environment to play back the recorded enactment. (Examiner Note: the "client virtual reality device" is the source device; the "remote client virtual reality device" is the client device.));

wherein the data is received by the client device prior to the 3D scene being rendered (Para. 45 see "the remote client virtual reality device 140 creates the remote MR scene 145 based on the real scene MR scene model 135." Para. 46 see "the real scene 130 is recreated in the remote MR scene 145". (Examiner Note: the real scene data must be received before the remote client virtual reality device can recreate the scene.));

determine a set of view parameters for presentation of the 3D scene at a client device (Para. 45 see "the client virtual reality device 125 configures the VRE operable from the real scene 130 with the real scene MR scene model 135 generated from the real scene model based on the 3D geometry of the real scene 130." Para. 62 see "the initial data set may be created/collected by a client while operating at a remote location. In an illustrative example, the initial data set may include a scene's 3D geometry data." Para. 46 and Figs. 2A-2G see "the user interface 142 presents the user 110 with the data captured by the client virtual reality device 125 at the real scene 130, which may include, for example, images, 360 degree videos, 3D scans, or audio". Para. 176 see "The goal in both cases is to generate a 3D representation of the scene that can be transmitted to the central location. The availability of a 3D model would enable the expert to manipulate it in the VRE, attach annotations and simulate the necessary operations to the technician more effectively than using a 2D representation for the same purposes." Para. 52 discloses using sensor/camera pose data. Paras. 122-123 disclose tracking a path and state of a camera over time and rendering the frames to a surveillance screen using an enactment. (Examiner Note: the real scene model from the source device includes 3D data which is received and rendered by the remote client virtual reality device by mapping the real scene to the remote scene as described in paras. 45-46. This is then presented to the user via the user interface.));
generate a rendering of the 3D scene according to the set of view parameters (Para. 45 see "the remote client virtual reality device 140 creates the remote MR scene 145 based on the real scene MR scene model 135." Para. 46 see "the real scene 130 is recreated in the remote MR scene 145". Paras. 163 and 178 disclose rendering based on 3D data.); and transmit the rendering of the 3D scene to the client device (Para. 45 see "the remote client virtual reality device 140 transmits the calibrated remote MR scene 165 to the client virtual reality device 125 via the collaboration server 120". Para. 52 discloses using sensor/camera pose data. Paras. 122-123 disclose tracking a path and state of a camera over time and rendering the frames to a surveillance screen using an enactment.).
Neeter does not teach and responsive to user input, the set of view parameters allowing the client device to render the 3D scene from a perspective different than a perspective of the source device; resulting in the rendering having the perspective that is different from the perspective of the source device.
However, Lee teaches and responsive to user input, the set of view parameters allowing the client device to render the 3D scene from a perspective different than a perspective of the source device (Para. 6 see "the technology described herein can be used to capture a scene (including objects in the scene) of a first location as one or more 3D models, transfer the 3D model(s) in real time to a second location that is remote from the first location, and then render viewing images of the 3D model from a different viewing perspective using the pose of a viewing element." Para. 52 see "the viewing device 112 can be used to ‘walk around’ the virtual scene which is a true 3D copy of the first location (such as via a VR viewing device)." (Examiner Note: the user's walking around is the input to which the device responds.) Para. 66 see "a keyboard and a pointing device, e.g., a mouse, a trackball, a touchpad, or a motion sensor, by which the user can provide input to the computer". Para. 52 see "the viewing device 112 is not tied to the original video capture perspective of the sensor 103 because the complete 3D model is recreated at the viewing device 112. Therefore, the viewing device (e.g., the CPU 114 and GPU 116) at the second location can manipulate the 3D model(s) locally in order to produce a completely independent perspective of the model(s) than what is being captured by the sensor 103 at the first location." Para. 60 see "the headset can render a different viewpoint of the scene and objects in the scene from that being captured by the sensor device. As such, the viewer at the remote location can traverse the scene and view the action from a completely independent perspective from the sensor that is viewing the scene locally providing an immersive and unique experience for the viewer."); resulting in the rendering having the perspective that is different from the perspective of the source device (id.; see Paras. 52 and 60 as quoted above).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Neeter to incorporate the teachings of Lee, determining the set of view parameters at the client device in response to user input and allowing the client device to render the scene from a different perspective. Doing so would predictably improve the user experience by automatically responding to the user's input when determining view parameters, creating a seamless and immersive experience. Additionally, it would predictably improve the user experience by allowing the user to view the scene from a perspective different from that of the source device, giving the user a feeling of independence rather than that of a mere observer in the 3D environment.
Regarding claim 11, Neeter in view of Lee teaches The system of claim 9.
In addition, Neeter teaches wherein determining the set of view parameters comprises receiving virtual camera data associated with the client device. (Para. 52 discloses using the relative position of a virtual camera to provide the camera pose.).
Regarding claim 12, Neeter in view of Lee teaches The system of claim 9.
In addition, Neeter teaches wherein the data indicating the 3D scene comprises virtual reality ("VR") tracking data associated with the source device used to determine the set of view parameters. (Para. 52 discloses using sensor tracking information.).
Regarding claim 13, Neeter in view of Lee teaches The system of claim 9.
In addition, Neeter teaches wherein the 3D scene associated with the source device is mapped to a first physical environment and the rendering is mapped to a second physical environment associated with the client device. (Paras. 50-51 disclose using sensors to create a 3D MR scene model; points/features are used to align the remote and HQ scene models in 3D space. Para. 130 discloses connecting two physical locations into a shared virtual environment.).
Regarding claim 15, Neeter in view of Lee teaches The system of claim 9.
In addition, Neeter teaches wherein the data indicating the 3D scene associated with the source device is generated using at least a virtual reality device. (Para. 46 and Figs. 2A-2G see "the user interface 142 presents the user 110 with the data captured by the client virtual reality device 125 at the real scene 130, which may include, for example, images, 360 degree videos, 3D scans, or audio".).
Regarding claim 16, Neeter teaches One or more non-transitory computer-readable media having computer-executable instructions embodied thereon that, when executed, perform a method comprising: (Para. 74 discloses computer-executable instructions, program memory, and a processor to execute.);

receiving data corresponding to a 3D scene associated with a source device, wherein the source device comprises one or more virtual reality ("VR") sensors, the data comprising scene data and tracking data (Para. 45 see "the remote client virtual reality device 140 updates the remote MR scene 145 based on the updated real scene MR scene model 135" and "the client virtual reality device 125 creates a real scene model based on the sensor data captured from the scan of the real scene 130". Para. 46 and Figs. 2A-2G see "the user interface 142 presents the user 110 with the data captured by the client virtual reality device 125 at the real scene 130, which may include, for example, images, 360 degree videos, 3D scans, or audio". Paras. 10 and 55 disclose enactment models which may record and track position, orientation, scale, etc. of scene objects to simulate interaction with a scene in motion as a function of time, enabling a user in a virtual environment to play back the recorded enactment. (Examiner Note: the "client virtual reality device" is the source device; the "remote client virtual reality device" is the client device.));

and wherein the data is received by the client device prior to the 3D scene being rendered (Para. 45 see "the remote client virtual reality device 140 creates the remote MR scene 145 based on the real scene MR scene model 135." Para. 46 see "the real scene 130 is recreated in the remote MR scene 145". (Examiner Note: the real scene data must be received before the remote client virtual reality device can recreate the scene.));

determining a set of view parameters for rendering the 3D scene at a client device (Para. 45 see "the client virtual reality device 125 creates a real scene model based on the sensor data captured from the scan of the real scene 130");

wherein the set of view parameters comprises an attribute associated with a virtual camera associated with the 3D scene (Para. 45 see "the client virtual reality device 125 creates a real scene model based on the sensor data captured from the scan of the real scene 130". Para. 52 discloses using sensor/camera pose data and the relative position of a virtual camera to provide the camera pose.);

and generating a rendering, at the client device, of the 3D scene in accordance with the set of view parameters (Para. 45 see "the client virtual reality device 125 configures the VRE operable from the real scene 130 with the real scene MR scene model 135 generated from the real scene model based on the 3D geometry of the real scene 130". Paras. 163 and 178 disclose rendering based on 3D data.).
Neeter does not teach and responsive to user input, and allows the client device to render the 3D scene from a perspective different than a perspective of the source device; resulting in the rendering having the perspective that is different from the perspective of the source device.
However, Lee teaches and responsive to user input (Para. 6 see "the technology described herein can be used to capture a scene (including objects in the scene) of a first location as one or more 3D models, transfer the 3D model(s) in real time to a second location that is remote from the first location, and then render viewing images of the 3D model from a different viewing perspective using the pose of a viewing element." Para. 52 see "the viewing device 112 can be used to ‘walk around’ the virtual scene which is a true 3D copy of the first location (such as via a VR viewing device)." (Examiner Note: the user's walking around is the input to which the device responds.));

and allows the client device to render the 3D scene from a perspective different than a perspective of the source device (Para. 52 see "the viewing device 112 is not tied to the original video capture perspective of the sensor 103 because the complete 3D model is recreated at the viewing device 112. Therefore, the viewing device (e.g., the CPU 114 and GPU 116) at the second location can manipulate the 3D model(s) locally in order to produce a completely independent perspective of the model(s) than what is being captured by the sensor 103 at the first location." Para. 60 see "the headset can render a different viewpoint of the scene and objects in the scene from that being captured by the sensor device. As such, the viewer at the remote location can traverse the scene and view the action from a completely independent perspective from the sensor that is viewing the scene locally providing an immersive and unique experience for the viewer.");

resulting in the rendering having the perspective that is different from the perspective of the source device (id.; see Paras. 52 and 60 as quoted above).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Neeter to incorporate the teachings of Lee, determining the set of view parameters at the client device in response to user input and allowing the client device to render the scene from a different perspective. Doing so would predictably improve the user experience by automatically responding to the user's input when determining view parameters, creating a seamless and immersive experience. Additionally, it would predictably improve the user experience by allowing the user to view the scene from a perspective different from that of the source device, giving the user a feeling of independence rather than that of a mere observer in the 3D environment.
Regarding claim 17, Neeter in view of Lee teaches The media of claim 16.
In addition, Neeter teaches wherein the 3D scene associated with the source device is mapped to a first physical environment and the rendering is mapped to a second physical environment associated with the client device. (Paras. 50-51 disclose using sensors to create a 3D MR scene model; points/features are used to align the remote and HQ scene models in 3D space. Para. 130 discloses connecting two physical locations into a shared virtual environment.).
Regarding claim 18, Neeter in view of Lee teaches The media of claim 16.
In addition, Neeter teaches wherein the rendering is presented in a user interface of the client device. (Para. 46 and Figs. 2A-2G see "the user interface 142 presents the user 110 with the data captured by the client virtual reality device 125 at the real scene 130, which may include, for example, images, 360 degree videos, 3D scans, or audio".).
Regarding claim 20, Neeter in view of Lee teaches The media of claim 16.
In addition, Neeter teaches wherein the attribute associated with the virtual camera comprises a position, direction, field of view, depth of field, or tracked object. (Para. 52 discloses using the relative position of a virtual camera to provide the camera pose.).
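As a generic aside on how the recited virtual camera attributes parameterize a rendering, the standard look-at construction below builds a view matrix from a camera position and direction. This is common graphics math shown for illustration only; it is not taken from any reference of record.

    # Standard look-at view matrix from a camera position and direction;
    # illustrates how recited attributes (position, direction) feed a render.
    import numpy as np

    def look_at(position, direction, up=(0.0, 1.0, 0.0)):
        f = np.asarray(direction, dtype=float)
        f = f / np.linalg.norm(f)                  # forward axis
        r = np.cross(f, up)
        r = r / np.linalg.norm(r)                  # right axis
        u = np.cross(r, f)                         # true up axis
        view = np.eye(4)
        view[0, :3], view[1, :3], view[2, :3] = r, u, -f
        view[:3, 3] = -view[:3, :3] @ np.asarray(position, dtype=float)
        return view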
Claims 7, 14, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Neeter (US 20200005538 A1), hereinafter Neeter, in view of Lee et al. (US 20180101966 A1), hereinafter Lee, and Bergmann et al. (US 20190098255 A1), hereinafter Bergmann.
Regarding claim 7, Neeter in view of Lee teaches The method of claim 1.
Neeter does not teach further comprising determining a second set of view parameters for rendering the 3D scene at a second client device; and generating a second rendering of the 3D scene according to the second set of view parameters.
However, Bergmann teaches further comprising determining a second set of view parameters for rendering the 3D scene at a second client device; and generating a second rendering of the 3D scene according to the second set of view parameters. (Para. 8 discloses transmitting 3D point cloud data to a plurality of meeting participants in a VR environment. Para. 40 discloses rendering based on points and position of the VR viewer.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Neeter and Lee to incorporate the teachings of Bergmann to allow a second client device to receive and render 3D scene data. Doing so would allow multiple users to view the ongoing session between an expert and a technician for purposes such as supervision, additional expert input, or training of additional users.
Regarding claim 14, Neeter in view of Lee teaches The system of claim 9.
Neeter does not teach wherein a second set of view parameters for presentation of the 3D scene at a second client device is determined and a second rendering of the 3D scene is rendered according to the second set of view parameters.
However, Bergmann teaches wherein a second set of view parameters for presentation of the 3D scene at a second client device is determined and a second rendering of the 3D scene is rendered according to the second set of view parameters. (Para. 8 discloses transmitting 3D point cloud data to a plurality of meeting participants in a VR environment. Para. 40 discloses rendering based on points and position of the VR viewer.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Neeter and Lee to incorporate the teachings of Bergmann to allow a second client device to receive and render 3D scene data. Doing so would allow multiple users to view the ongoing session between an expert and a technician for purposes such as supervision, additional expert input, or training of additional users.
Regarding claim 19, Neeter in view of Lee teaches The media of claim 16.
Neeter does not teach wherein the method further comprises determining a second set of view parameters for rendering the 3D scene at a second client device and generating a second rendering, at the second client device, of the 3D scene in accordance with the second set of view parameters.
However, Bergmann teaches wherein the method further comprises determining a second set of view parameters for rendering the 3D scene at a second client device and generating a second rendering, at the second client device, of the 3D scene in accordance with the second set of view parameters. (Para. 8 discloses transmitting 3D point cloud data to a plurality of meeting participants in a VR environment. Para. 40 discloses rendering based on points and position of the VR viewer.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Neeter and Lee to incorporate the teachings of Bergmann to allow a second client device to receive and render 3D scene data. Doing so would allow multiple users to view the ongoing session between an expert and a technician for purposes such as supervision, additional expert input, or training of additional users.
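For the multi-participant concept cited from Bergmann para. 8, the following minimal sketch (hypothetical names; not code from the reference) shows each connected client deriving its own set of view parameters and its own rendering of the same shared scene.

    # Hypothetical sketch: one shared scene, an independent set of view
    # parameters and rendering per connected client (cf. Bergmann para. 8).
    def render_for_all(scene, clients):
        renderings = {}
        for client_id, viewer_pose in clients.items():
            # Each client's own pose yields its own set of view parameters.
            params = {"position": viewer_pose["position"],
                      "direction": viewer_pose["direction"]}
            renderings[client_id] = {"scene": scene, "view": params}
        return renderings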
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. French (US 20180063205 A1) discloses shared virtual reality environments among users as well as rendering and displaying real and virtual environments constructed from 3D maps.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER J VAUGHN whose telephone number is (571) 272-5253. The examiner can normally be reached M-F 8:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW MOYER can be reached on (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER JOSEPH VAUGHN/Examiner, Art Unit 2675
/EDWARD PARK/Primary Examiner, Art Unit 2675