DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/21/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
This is in response to applicant’s amendment/response filed on 01/21/2026, which has been entered and made of record. Claims 1-5, 7-9, 11-16, and 18-19 have been amended. Claims 6, 10, 17, and 20 have been cancelled. Claims 56-59 have been added. Claims 1-5, 7-9, 11-16, 18-19, and 56-59 are pending in the application.
Response to Arguments
Applicant's arguments filed on 01/21/2026 have been fully considered but they are not persuasive or are rendered moot in view of the new grounds of rejection presented below (as necessitated by the amendment to claims 1 and 12).
The objection to claims 1 and 12 has been withdrawn in view of the amendment.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-9, 11-16, 18-19, 56, and 58 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2024/0112422 to Araumi in view of U.S. PGPub 2017/0354875 to Marks et al., further in view of U.S. PGPub 2022/0301264 to O’Leary et al.
Regarding claim 1, Araumi teaches a method, performed by a supervisory device, comprising (par 0004):
receiving position data representing a plurality of positions of a plurality of extended reality (XR) devices and a plurality of respective replica stream data representing what is rendered by each respective XR device (par 0004, “a communication management server includes circuitry to receive, from a mobile device, real space device position information indicating a position of the mobile device in a real space. The mobile device is movable in the real space and available to capture an image. The circuitry receives, from a first communication terminal being present in the real space, real space terminal position information indicating a position of the first communication terminal in the real space. The circuitry generates a virtual space image representing a virtual space. The virtual space image includes an icon related to the mobile device, an icon related to the first communication terminal, and an icon related to a second communication terminal. Each of the icon related to the mobile device and the icon related to the first communication terminal is associated with a corresponding position in the virtual space to appear on the virtual space image based on a corresponding one of the real space device position information and the real space terminal position information”, par 0156, “In the communication management server 3, the position configuring unit 36 stores and configures, in the “virtual space position information” field of the record including the login ID and the password used for the authentication of the processing of S13, virtual space (terminal) position information indicating a position in the virtual space and corresponding to the real space terminal position information received in the processing of S12, based on the position correspondence information (matching information) indicating the correspondence between the position in the real space and the position in the virtual space”, par 0181, “the transmission/reception unit 31 of the communication management server 3 receives the real space device position information. In addition, the transmission/reception unit 31 overwrites, in the position information management table, with the real space device position information received in the processing of S52, in the “real space position information” field of the record including the login ID received in the processing of S52”, par 0165, “the explainer terminal 5a of the explainer E1 indicated by the icon e1 in the association area a1 and the user terminal 9b of the user Y2 indicated by the icon y2 can receive and display the video transmitted by the robot R1 indicated by the icon r1, by establishing a video communication session”, par 0175-0178, “the robots R1 and the user terminals 9a can perform video communication with each other via the communication management server 3. For example, the robot R1 captures a video of the exhibition and transmits the video data to the user terminal 9a, and the display 518 of the user terminal 9a displays the video of the exhibition … When the robot R1 performs video communication with a plurality of user terminals, the robot R1 transmits the video of the exhibition to the plurality of user terminals via the communication management server 3, so that the video of the exhibition is displayed on each of the plurality of user terminals”), wherein each of the plurality of XR devices is distinct from the supervisory device (Fig 14, par 0120-0122, “The communication management server 3 may include a plurality of servers … As illustrated in FIG. 14, the explainer terminal 5 includes a communication unit 50, a transmission/reception unit 51, a reception unit 52, a position acquisition unit 53, and a display control unit 54”, par 0130, “As illustrated in FIG. 14, the robot R (robot terminal 7) includes a transmission/reception unit 71, a reception unit 72, a position acquisition unit 73, a display control unit 74, and a following control unit 78”);
storing mapping data mapping the position data of each XR device of the plurality of XR devices to the respective replica stream data of the respective XR device received in association with the respective position data (par 0111, “The storage unit 40 stores position correspondence information (matching information) indicating a correspondence relationship between a position in the real space and a position in the virtual space.”, par 0118, “The position configuring unit 36 performs processing such as storing and configuring virtual space terminal (device) position information indicating a position in the virtual space corresponding to real space terminal position information in the “virtual space position information” in FIG. 15 based on the position correspondence information (matching information) indicating a correspondence relationship between a position in the real space and a position in the virtual space”, par 0156, “the position configuring unit 36 stores and configures, in the “virtual space position information” field of the record including the login ID and the password used for the authentication of the processing of S15, virtual space (device) position information corresponding to the real space device position information received in the processing of S14, based on the position correspondence information. Further, the position configuring unit 36 stores and configures, in the “virtual space position information” field of the record including the login ID and the password used for the authentication of the processing of S17, virtual space (device) position information indicating an initial position”); and
receiving an input identifying a selected position of a selected XR device (par 0190-0191, “FIG. 26 is a diagram illustrating a virtual space image on which an icon of a counterpart for dedicated voice communication to be established is selected. S71: In the user terminal 9a, the reception unit 92 receives a user operation, performed by the user Y1, of selecting an icon (in the example, the icon e1) other than his or her own icon y1 by using the cursor cl as illustrated in FIG. 26. Then, the transmission/reception unit 91 transmits information indicating the selection of the icon e1 to the communication management server 3 as a request to establish dedicated voice communication”, Fig 26, par 0194, “In FIG. 26, the user Y1 selects the icon e1 of the explainer E1, the present disclosure is not limited to this. For example, the user Y1 may select the icon y2 corresponding the other user Y2 in the same association area a1 to have a conversation with the user Y2 by dedicated voice communication”);
making the respective replica stream data available to the supervisory device (par 0165, “the explainer terminal 5a of the explainer E1 indicated by the icon e1 in the association area a1 and the user terminal 9b of the user Y2 indicated by the icon y2 can receive and display the video transmitted by the robot R1 indicated by the icon r1, by establishing a video communication session”, par 0175-0178, “the robots R1 and the user terminals 9a can perform video communication with each other via the communication management server 3. For example, the robot R1 captures a video of the exhibition and transmits the video data to the user terminal 9a, and the display 518 of the user terminal 9a displays the video of the exhibition … When the robot R1 performs video communication with a plurality of user terminals, the robot R1 transmits the video of the exhibition to the plurality of user terminals via the communication management server 3, so that the video of the exhibition is displayed on each of the plurality of user terminals”).
However, Araumi is silent as to: based at least in part on the input identifying the selected position of the selected XR device, identifying the respective replica stream data corresponding to the selected position of the selected XR device based on the mapping data; and making the identified respective replica stream data available to the supervisory device.
In a related endeavor, Marks et al. teach receiving an input identifying a selected position of a selected XR device from a supervisory device (par 0077-0078, “spectators can be provided with controls that allow the spectator to identify specific listening zones within the virtual reality environment. The listening zones allow spectators to select where in the virtual reality environment they wish to listen from. What this means is that the spectator is essentially provided with listening audio and acoustics that mimic a situation where the spectator would actually be present in the scene from that specific location”, Figs 5-6, par 0087-0088, “a viewing region/area 600 is provided having nine locations/seats S1-S9. A plurality of spectators U.sub.1 to U.sub.9 (conceptually shown at ref. 602) are positioned in the locations S1-S9, depending upon which spectator is being provided with the view of the VR environment in which the viewing region 600 is disposed. In the viewing region 600, the location S5 is the best location/seat for spectating, and therefore each user will spectate from the location S5. For a given spectator that is placed at the location S5 when spectating, then the remaining spectators are placed in the other remaining locations/seats around him/her in the viewing region 600”, par 0108-0112, “in a view of the VR environment provided to spectator 1108, then a different portion of the array would be selected (in order to position spectator 1108 in the preferred viewing location 1116), yet the spatial relationship with other spectators would be maintained as it is according to that defined by the array”);
based at least in part on the input identifying the selected position of the selected XR device: identifying the respective replica stream data corresponding to the selected position of the selected XR device based on the mapping data (par 0080, “The HMD VR player 100 is therefore driving the interactivity within the VR environment 450, which will move the scenes presented in the HMD 102, as well as the replicated view shown in the display 107. The spectator can therefore be one that is viewing the display 107, such as spectator 140. As mentioned above, the spectator 140 is a social screen spectator, as that spectator is able to interact with the HMD player 100 in a co-located space. In other embodiments, or in addition to the co-located spectator 140, an HMD spectator 150 can also be provided access to the content being navigated by the HMD player 100. The HMD spectator 150 can be co-located with the HMD player 100. In other embodiments, the HMD spectator 150 can be remotely located from the HMD player and can view the content from a website, such as a Twitch-type viewing website. Therefore, the example shown in FIG. 4 is only one example, and it is possible to have multiple spectators or even thousands of spectators viewing the HMD players content from remote locations”, Figs 7A-7B, par 0091-0093, “FIGS. 7A and 7B illustrate adjustment of spectator avatars in a VR environment based on their controlling spectator's perceived object of interest, in accordance with implementations of the disclosure. As has been discussed above, when multiple spectators are viewing the VR environment, each one can be provided a view from a preferred location, while avatars of other spectators are positioned at other locations in the VR environment. Thus, the locations of the spectator avatars in the VR environment is dependent upon which spectator for whom the view is being provided. … FIG. 7A illustrates a VR environment 700 as seen from the perspective of a user U.sub.1. In the illustrated implementation, the VR environment as experienced by the user U.sub.1 is configured such that the user U.sub.1 is positioned at a preferred location 702 in front of an object or scene of interest”);
making the identified respective replica stream data available to the supervisory device (Figs 7A-7B, par 0091-0093, “FIG. 7A illustrates a VR environment 700 as seen from the perspective of a user U.sub.1. In the illustrated implementation, the VR environment as experienced by the user U.sub.1 is configured such that the user U.sub.1 is positioned at a preferred location 702 in front of an object or scene of interest”, par 0095-0096, “In order to provide proper adjustment of the avatars of other spectators in the VR environment, the system can be configured to determine what object in the VR environment that a given spectator is looking towards. This can be determined based on extrapolating the view direction of the given spectator to an object in the virtual environment, and will be based on the real-world pose of the HMD that the spectator is wearing, as well as the spectator's location in the VR environment. Thus, when a spectator is viewing the VR environment, logic can be configured to determine the spectator's view direction, and based on the spectator's view direction determine an object of interest towards which the view direction is pointing or directed”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Araumi to include identifying the replica stream data corresponding to the position data received from the supervisory device based on the mapping data and making the identified replica stream data available to the supervisory device, as taught by Marks et al., in order to render a perspective view for a remote user as if viewing from a real position in the scene, thereby providing a visually immersive experience to the user.
However, Araumi as modified by Marks et al. is silent as to configuring the supervisory device for generation of one or more visual indicators on a display of the selected XR device, wherein the one or more visual indicators are distinct from an environment of the selected XR device.
In a related endeavor, O’Leary et al. teach configuring the supervisory device for generation of one or more visual indicators on a display of the selected XR device, wherein the one or more visual indicators are distinct from an environment of the selected XR device (Figs 7A-7B, par 0120-0126, “The navigation user interface element 704a further includes an indication 716 of a respective location corresponding to the content 708a presented in content user interface element 706, and a field of view indicator 718a that indicates the field of view corresponding to content 708a. For example, the content 708a is an image (or video) captured from the physical location corresponding to the location of indication 716 within the navigation user interface element 704a with boundaries that correspond to the field of view indictor 718a. As shown in FIG. 7A, the electronic device 101 presents the navigation user interface element 704a so that the navigation user interface element appears to be resting on the surface of the representation 702 of real table in the physical environment of the electronic device 101”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Araumi as modified by Marks et al. to include configuring the supervisory device for generation of one or more visual indicators on a display of the selected XR device, wherein the one or more visual indicators are distinct from an environment of the selected XR device, as taught by O’Leary et al., in order to present navigation from a first physical location to a second physical location with reduced visual prominence in a content element in response to an input corresponding to a request to present content corresponding to the second physical location, thereby providing an efficient way of browsing content corresponding to physical locations, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently (e.g., without the user having to direct their attention to a different region of the three-dimensional environment or provide an input to continue displaying the content at the same location).
Regarding claim 2, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 1, and Araumi further teaches wherein the mapping data maps the position data of each XR device to connection information representing a network resource used for obtaining respective replica stream data for the selected XR device, the method further comprising receiving the connection information at the supervisory device (Fig 16, par 0145-0152, “The reception unit 92 of the user terminal 9a receives a login operation (for example, input of a login ID and a password) performed by the user Y, and the transmission/reception unit 91 transmits a login request for a service of the communication management server 3 to the communication management server 3”, Fig 18, par 0164-0165, “the explainer terminal 5a of the explainer E1 indicated by the icon e1 in the association area a1 and the user terminal 9b of the user Y2 indicated by the icon y2 can receive and display the video transmitted by the robot R1 indicated by the icon r1, by establishing a video communication session”, par 0175-0177, “the robots R1 and the user terminals 9a can perform video communication with each other via the communication management server 3. For example, the robot R1 captures a video of the exhibition and transmits the video data to the user terminal 9a, and the display 518 of the user terminal 9a displays the video of the exhibition, accordingly. By contrast, when the user terminal 9a captures a video including the face of the user Y1 and transmits the video data to the robot R1, the video including the face of the user Y1 is displayed on the display 518 of the robot terminal 7a of the robot R1”, par 0203-0207, “the robots R2 and the user terminals 9a can perform video communication with each other via the communication management server 3. For example, the robot R2 captures a video of the exhibition and transmits the video data to the user terminal 9a, and the display 518 of the user terminal 9a displays the video of the exhibition. By contrast, when the user terminal 9a captures a video including the face of the user Y1 and transmit the video data to the robot R2, the video including the face of the user Y1 is displayed on the display 518 of the robot terminal 7 of the robot R2”).
Regarding claim 3, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 1, and the combination further teaches wherein the respective replica stream data comprises a representation of the environment of the selected XR device overlaid with one or more visual elements generated by the selected XR device to augment the environment of the selected XR device, and each of the one or more visual elements generated by the selected XR device are distinct from the one or more visual indicators generated by the supervisory device (Araumi: Fig 18, par 0161-0165, “the icon e1 indicating the explainer E1 and the icon y2 indicating the user Y2 who is remotely attending the exhibition in the virtual space appear, or is displayed, in an association area a1 centered around the icon r1 indicating the robot R1. This indicates that the Explainer E1 indicated by the icon e1 and the user Y2 indicated by the icon y2 belong to the robot R1 indicated by the icon r1”, Marks et al.: par 0058, “the lights can be configured to indicate a current status of the HMD to others in the vicinity. For example, some or all of the lights may be configured to have a certain color arrangement, intensity arrangement, be configured to blink, have a certain on/off configuration, or other arrangement indicating a current status of the HMD 102”, par 0076, “where multiple spectators are viewing the same content provided by the HMD player, e.g. in a Twitch presentation, each of the spectators can be provided with different controls that provide to them the ability to provide the visual indicators or not. From the perspective of the HMD player, the indicators may not be shown at all in the HMD of the HMD player. However, these indicators will be useful to the spectator or spectators that may be viewing the content being interacted with by the HMD player”, O’Leary et al.: Figs 7A-7B, par 0120-0126, “The navigation user interface element 704a further includes an indication 716 of a respective location corresponding to the content 708a presented in content user interface element 706, and a field of view indicator 718a that indicates the field of view corresponding to content 708a. For example, the content 708a is an image (or video) captured from the physical location corresponding to the location of indication 716 within the navigation user interface element 704a with boundaries that correspond to the field of view indictor 718a. As shown in FIG. 7A, the electronic device 101 presents the navigation user interface element 704a so that the navigation user interface element appears to be resting on the surface of the representation 702 of real table in the physical environment of the electronic device 101”).
Regarding claim 4, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 2, and Araumi further teaches wherein the network resource comprises a local server, and the local server, the supervisory device and the XR device are communicatively coupled to each other using a Local Area Network (par 0046, “The communication management server 3, the explainer terminal 5, the user terminal 9, and the robot terminal 7 of the robot R can communicate with each other via a communication network 100 such as the Internet. The communication may be wired communication or wireless communication. In the example of FIG. 1, the explainer terminal 5, the user terminal 9, and the robot terminal 7 are illustrated to communicate wirelessly. The microphone-equipped earphone 6 can perform short-range communication by pairing with the explainer terminal 5”, par 0087, “The short-range communication circuit 817 establishes communication with an external terminal (for example, the robot terminal 7) via the antenna 817a of the wide-angle imaging device 8 through a short-range wireless communication technology such as Wi-Fi, NFC, and BLUETOOTH (registered trademark). By the short-range communication circuit 817, the data of the equirectangular projection image can be transmitted to an external terminal.”).
Regarding claim 5, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 2, and Araumi further teaches wherein the network resource comprises information indicative of a Wide Area Network and the supervisory device and the plurality of XR devices are communicatively coupled via the Wide Area Network (par 0046, “The communication management server 3, the explainer terminal 5, the user terminal 9, and the robot terminal 7 of the robot R can communicate with each other via a communication network 100 such as the Internet. The communication may be wired communication or wireless communication. In the example of FIG. 1, the explainer terminal 5, the user terminal 9, and the robot terminal 7 are illustrated to communicate wirelessly. The microphone-equipped earphone 6 can perform short-range communication by pairing with the explainer terminal 5”, Fig 14, par 0130-0131, “The transmission/reception unit 71 performs data communication with another terminal (device) via the communication network 100”).
Regarding claim 7, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 1, and Araumi further teaches wherein the supervisory device and the selected XR device are communicatively coupled using direct device-to-device communication (par 0164-0165, “the explainer terminal 5a of the explainer E1 indicated by the icon e1 in the association area a1 and the user terminal 9b of the user Y2 indicated by the icon y2 can perform voice communication by establishing a voice communication session. Accordingly, the explainer E1 can explain the state of the exhibition in the real space (real world) to the user Y2 by using the microphone-equipped earphone 6, and the user Y2 can ask questions about the exhibition in the real space to the explainer E1 by using the user terminal 9”, par 0190-0191, “In the user terminal 9a, the reception unit 92 receives a user operation, performed by the user Y1, of selecting an icon (in the example, the icon e1) other than his or her own icon y1 by using the cursor cl as illustrated in FIG. 26. Then, the transmission/reception unit 91 transmits information indicating the selection of the icon e1 to the communication management server 3 as a request to establish dedicated voice communication”).
Regarding claim 8, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 1, and Araumi further teaches sending one or more messages to the selected XR device and causing display of the one or more messages on the display of the selected XR device (par 0004, “The circuitry transmits the virtual space image to the second communication terminal and establishes voice communication between the first communication terminal and the second communication terminal in response to receiving, via the second communication terminal, an operation of associating the icon related to the second communication terminal with the icon related to the mobile device on the virtual space image”, par 0164-0165, “the explainer terminal 5a of the explainer E1 indicated by the icon e1 in the association area a1 and the user terminal 9b of the user Y2 indicated by the icon y2 can receive and display the video transmitted by the robot R1 indicated by the icon r1, by establishing a video communication session”).
Regarding claim 9, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 1, and Araumi further teaches sending audio data to the selected XR device and causing the selected XR device to generate audio output of the audio data sent from the supervisory device (par 0004, “The circuitry transmits the virtual space image to the second communication terminal and establishes voice communication between the first communication terminal and the second communication terminal in response to receiving, via the second communication terminal, an operation of associating the icon related to the second communication terminal with the icon related to the mobile device on the virtual space image”, par 0164, “the explainer E1 can explain the state of the exhibition in the real space (real world) to the user Y2 by using the microphone-equipped earphone 6, and the user Y2 can ask questions about the exhibition in the real space to the explainer E1 by using the user terminal 9b.”, par 0194, “In FIG. 26, the user Y1 selects the icon e1 of the explainer E1, the present disclosure is not limited to this. For example, the user Y1 may select the icon y2 corresponding the other user Y2 in the same association area a1 to have a conversation with the user Y2 by dedicated voice communication. Further, the user Y1 may select the icon e2 of the explainer E2 in a different association area a2 to have a conversation with the explainer E2 by dedicated voice communication, or may select the icon y3 of the user Y3 to have a conversation with the user Y3 by dedicated voice communication”).
Regarding claim 11, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 1, and further teaches wherein the identified respective replica stream data is made available to the supervisory device in real-time (Araumi: par 0147, “The real space terminal position information is acquired by the position acquisition unit 53 of the explainer terminal 5a. As a result, the transmission/reception unit 31 of the communication management server 3 receives the login request from the explainer terminal 5a. Each time the explainer terminal 5a moves, the real space terminal position information after the movement is transmitted from the explainer terminal 5a to the communication management server 3”, Marks et al.: par 0047, “the methods, systems, image capture objects, sensors and associated interface objects (e.g., gloves, controllers, etc.) are configured to process data that is configured to be rendered in substantial real time on a display screen. The display may be the display of a head mounted display (HMD), a display of a second screen, a display of a portable device, a computer display, a display panel, a display of one or more remotely connected users (e.g., whom may be viewing content or sharing in an interactive experience), or the like”, par 0083-0084, “the VR environment 500 may be a three-dimensional (3D) gaming environment in which gameplay of a video game occurs. It will be appreciated that the activity taking place in the VR environment can be any type of virtual activity, including, without limitation, combat (e.g. a first-person shooter game), real-time strategy, racing, sports, dance, theater, musical performance, game show, etc. To accommodate a number of spectators in the VR environment, there can be a designated viewing area, where a plurality of spectator avatars can be positioned. Strictly speaking, a spectator is a user (human) that spectates the VR environment (e.g. using an HMD)”).
Regarding claim 56, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 1, and O’Leary et al. further teach wherein: the one or more visual indicators comprise at least one of an arrow or road sign not previously generated for display on the display of the selected XR device and not physically present within the environment of the selected XR device; and the one or more visual indicators indicate a particular object or location physically present within the environment of the selected XR device (Figs 7A-7B, par 0120-0126, “The navigation user interface element 704a further includes an indication 716 of a respective location corresponding to the content 708a presented in content user interface element 706, and a field of view indicator 718a that indicates the field of view corresponding to content 708a. For example, the content 708a is an image (or video) captured from the physical location corresponding to the location of indication 716 within the navigation user interface element 704a with boundaries that correspond to the field of view indictor 718a. As shown in FIG. 7A, the electronic device 101 presents the navigation user interface element 704a so that the navigation user interface element appears to be resting on the surface of the representation 702 of real table in the physical environment of the electronic device 101”).
Regarding claim 12, Araumi teaches a system comprising: input/output circuitry configured to (Fig 4, par 0064-0067) and processing circuitry configured to (Fig 3, par 0057, par 0064). The remaining limitations of the claim are similar in scope to claim 1 and are rejected under the same rationale.
Regarding claims 13-16, 18-19, and 58, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 12. Claims 13-16, 18-19, and 58 are similar in scope to claims 2-5, 7, 8+9, and 56, respectively, and are rejected under the same rationale.
Claims 57 and 59 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2024/0112422 to Araumi in view of U.S. PGPub 2017/0354875 to Marks et al., further in view of U.S. PGPub 2022/0301264 to O’Leary et al., and further in view of U.S. PGPub 2021/0192851 to Doptis et al.
Regarding claim 57, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 1, but is silent as to configuring the supervisory device to be able to power the selected XR device on or off.
In a related endeavor, Doptis et al. teach configuring the supervisory device to be able to power the selected XR device on or off (par 0026-0027, “The AR platforms 120 and 122 are also capable of being controlled using the mobile devices 110 and 112 based upon control input received from the mobile devices 110 and 112 input by users 115 and 117”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Araumi as modified by Marks et al. and O’Leary et al. to include configuring the supervisory device to be able to power the selected XR device on or off, as taught by Doptis et al., in order to control an AR platform directly with a controller or mobile device and thereby provide an augmented reality experience from the perspective of the AR platform.
Regarding claim 59, Araumi as modified by Marks et al. and O’Leary et al. teaches all the limitations of claim 12. Claim 59 is similar in scope to claim 57 and is rejected under the same rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge whose telephone number is (571)272-5556. The examiner can normally be reached from 8:00 to 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JIN GE
Examiner
Art Unit 2619
/JIN GE/Primary Examiner, Art Unit 2619