DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
2. Applicant’s amendment filed on February 20, 2026, has been entered. Claims 1, 3-11, 21, 23-27 and 101-102 have been amended. Claims 12-20 and 28-100 have been cancelled. Claims 1-11, 21-27 and 101-102 are pending in this application.
Response to Arguments
3. Applicant’s arguments with respect to claim(s) 1 and 21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections
4. Claim 1 is objected to because of the following informalities:
Regarding claim 1, the limitation “a physical object” should be amended to read ---a physical device---.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
6. Claim(s) 1-5, 7, 9-11, 21-25, 27 and 101 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Park et al. (US 2019/0339840).
Regarding claim 1, Park discloses a method comprising:
receiving an indication of a physical object of user focus determined for an extended reality (XR) device (Fig. 7A; [0040], [0111], [0113], e.g., receive gaze tracking information from a wearable device or an augmented reality (AR) device 101 that the user is focused on a speech recognition device 102);
receiving an identification of an application currently running on the physical device of user focus ([0106], [0117], e.g., identify that a food delivery application is currently running on the speech recognition device 102 based on context information);
determining, based at least in part on a user profile, that a current time corresponds to a time at which a visual element for the application is in an activatable mode (Fig. 12; [0039], [0106]-[0107], [0119], e.g., based on the user profile, the present time corresponds to a time at which a visual element “App 3” for the food delivery application is in an activatable mode); and
based at least in part on the determining that the visual element for the application is in the activatable mode, causing the XR device to display the visual element associated with the application (Fig. 12; [0039], [0049], [0119], [0198], e.g., the visual element “App 3” is displayed in the activatable mode).
Regarding claim 2, Park further discloses the method of claim 1, wherein the XR device is a head-mounted device ([0040], e.g., the XR device 101 is a head-mounted device).
Regarding claim 3, Park further discloses the method of claim 1, wherein the application is a first application, and the method further comprising: causing the XR device to prevent displaying a graphical element generated by a second application associated with the physical device of user focus (Fig. 12; [0106], e.g., prevent displaying a graphical element generated by a weather application or a news application).
Regarding claim 4, Park further discloses the method of claim 1, wherein the application is a first application, and the method further comprising: causing the XR device to display more prominently the visual element associated with the first application than a visual element generated by a second application associated with the physical device of user focus (Fig. 12; [0199]-[0200], e.g., when a user utterance or user gesture for selecting the recommended food APP 3 is received, the electronic device 101 applies a blur effect or a shadow effect to another image (or icon) other than the image of the recommended application (e.g., APP 3)).
Regarding claim 5, Park further discloses the method of claim 1, further comprising: causing the XR device to display the visual element such that the visual element is perceived to be at a fixed location anchored relative to the physical device of user focus (Fig. 12; e.g., the visual element is displayed adjacent to the speech recognition device 1002).
Regarding claim 7, Park further discloses the method of claim 1, wherein the physical device of user focus is determined based on physical proximity of the XR device to the physical device of user focus ([0047], [0122], e.g., determine a proximity of the XR device 101 to the speech recognition device 102 based on location information).
Regarding claim 9, Park further discloses the method of claim 1, wherein the visual element comprises an interface for interacting with the application or a visual indication of status information about the physical device of user focus ([0198]-[0199], e.g., the visual element can be selected by a user utterance or user gesture).
Regarding claim 10, Park further discloses the method of claim 1, wherein the physical device of user focus is determined based on the physical device of user focus being in a field of view of the XR device ([0056], e.g., the camera 320 may obtain the image of an object, which the user watches or which is positioned in a direction close or similar to a direction in which the user's head faces).
Regarding claim 11, Park further discloses the method of claim 1, wherein the physical device of user focus is determined based on a user gaze determined by the XR device ([0056], [0113]).
Regarding claim 21, Park discloses a system (Fig. 1; [0037], e.g., system 100) comprising:
a memory (Fig. 3; [0055], e.g., memory 340); and
control circuitry (e.g., processor 330) configured:
to receive an indication of a physical device of user focus determined for an extended reality (XR) device (Fig. 7A; [0040], [0056], [0111], [0113], e.g., receive gaze tracking information from a wearable device or an augmented reality (AR) device 101 that the user is focused on a speech recognition device 102), and to store in the memory an identification of the physical device of user focus;
to receive an identification of an application currently running on the physical device of user focus ([0096], [0106], [0117], e.g., identify that a food delivery application is currently running on the speech recognition device 102 based on context information);
to determine, based at least in part on a user profile, that a current time corresponds to a time at which a visual element for the application is in an activatable mode (Fig. 12; [0039], [0106]-[0107], [0119], e.g., based on the user profile, the present time corresponds to a time at which a visual element “App 3” for the food delivery application is in an activatable mode); and
to cause, based at least in part on the determining that the visual element for the application is in the activatable mode, the XR device to display the visual element associated with the application (Fig. 12; [0039], [0049], [0119], [0198], e.g., the visual element “App 3” is displayed in the activatable mode).
Regarding claim 22, Park further discloses the system of claim 21, wherein the XR device is a head-mounted device ([0040], e.g., the XR device 101 is a head-mounted device).
Regarding claim 23, Park further discloses the system of claim 21, wherein the application is a first application, and wherein the system is configured: to cause the XR device to prevent displaying a graphical element generated by a second application associated with the physical device of user focus (Fig. 12; [0106], e.g., prevent displaying a graphical element generated by a weather application or a news application).
Regarding claim 24, Park further discloses the system of claim 21, wherein the application is a first application, and wherein the system is configured: to cause the XR device to display more prominently the visual element associated with the first application than a visual element generated by a second application associated with the physical device of user focus (Fig. 12; [0199]-[0200], e.g., when a user utterance or user gesture for selecting the recommended food APP 3 is received, the electronic device 101 applies a blur effect or a shadow effect to another image (or icon) other than the image of the recommended application (e.g., APP 3)).
Regarding claim 25, Park further discloses the system of claim 21, wherein the system is configured: to cause the XR device to display the visual element such that the visual element is perceived to be at a fixed location anchored relative to the physical device of user focus (Figs. 7A and 12; e.g., the visual element is displayed adjacent to the speech recognition device 1002).
Regarding claim 27, Park further discloses the system of claim 21, wherein the physical device of user focus is determined based on physical proximity of the XR device to the physical device of user focus ([0047], [0122], e.g., determine a proximity of the XR device 101 to the speech recognition device 102 based on location information).
Regarding claim 101, Park further discloses the method of claim 1, wherein the method further comprises:
determining, based at least in part on the user profile, that a second current time corresponds to a time at which the visual element is not in the activatable mode; and
based at least in part on the determining that the visual element is not in the activatable mode, preventing the causing the XR device to display the visual element associated with the application ([0106], e.g., when the location where the speech recognition device 102 is located is the user's home and the present time is morning, the XR device is caused to refrain from displaying the visual element associated with the food delivery application).
Claim Rejections - 35 USC § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claim(s) 6 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2019/0339840) in view of Komatsu et al. (US 2015/0356789).
Regarding claim 6, Park does not specifically disclose the method of claim 5, further comprising: receiving an indication input at the XR device selecting the fixed location of the visual element, wherein the fixed location anchored relative to the physical device of user focus is determined based on the indication.
However, Komatsu discloses a method comprising: receiving an indication input at an XR device selecting a fixed location of a visual element (Fig. 12; [0092]-[0093], e.g., the position of the AR object E is specified by the user using the input screen S6 of an augmented display device), wherein the fixed location anchored relative to a physical object is determined based on the indication ([0064], [0093], e.g., the AR object is set in a state of being aligned to the object H to which the AR object E is added).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the teachings of Komatsu in the invention of Park, setting a visual element at a location aligned to a physical device of user focus according to a user operation on an input screen, in order to improve the presentation of the visual element supplied to the user.
Regarding claim 26, Park does not specifically disclose the system of claim 25, wherein the system is configured: to receive an indication input at the XR device selecting the fixed location of the visual element, wherein the fixed location anchored relative to the physical device of user focus is determined based on the indication.
However, Komatsu discloses a method comprising: receiving an indication input at an XR device selecting a fixed location of a visual element (Fig. 12; [0092]-[0093], e.g., the position of the AR object E is specified by the user using the input screen S6 of an augmented display device), wherein the fixed location anchored relative to a physical object is determined based on the indication ([0064], [0093], e.g., the AR object is set in a state of being aligned to the object H to which the AR object E is added).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the teachings of Komatsu in the invention of Park, setting a visual element at a location aligned to a physical device of user focus according to a user operation on an input screen, in order to improve the presentation of the visual element supplied to the user.
9. Claim(s) 8 is rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2019/0339840) in view of LILJEROOS et al. (US 2021/0055791).
Regarding claim 8, Park does not specifically disclose the method of claim 1, wherein the physical device of user focus is determined based on the user profile associated with the XR device.
However, LILJEROOS discloses a method comprising: receiving an indication of an object of user focus determined for an extended reality (XR) device (Figs. 1-2 and 4; [0044], e.g., the object 52 of user focus is determined based on the direction of the user's 42 gaze), wherein the object of user focus is determined based on a user profile associated with the XR device (Fig. 4; [0065]-[0066], e.g., the object of user focus is determined based on a user profile).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the teachings of LILJEROOS in the invention of Park, determining a physical device of user focus based on a user profile associated with an XR device, in order to allow a user of the XR device to set the user’s preferred device.
10. Claim(s) 102 is rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2019/0339840) in view of Lopez et al. (US 2016/0274762).
Regarding claim 102, Park does not specifically disclose the method of claim 1, wherein the method further comprises: detecting a change in gaze of a user wearing the XR device; and based at least in part on detecting the change in the gaze of the user wearing the XR device, causing the visual element to remain at a fixed location relative to the physical device of user focus.
However, Lopez discloses a method comprising:
causing an XR device to display a visual element associated with a physical object of user focus (Fig. 8; [0054]-[0059], e.g., display a UI associated with a physical object in response to the user's gaze being directed at the physical object);
detecting a change in gaze of a user wearing the XR device ([0066], e.g., detect that the user has been looking away from the UI); and
based at least in part on detecting the change in the gaze of the user wearing the XR device, causing the visual element to remain at a fixed location relative to the physical object of user focus ([0059], [0066], e.g., if the field of view does not change, display the UI near the device being controlled).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the teachings of Lopez in the invention of Park, causing a visual element to remain at a fixed location relative to a physical object of user focus within a field of view after a change in gaze of the user has been detected, because it would allow the user to interact with the visual element by gazing back at the visual element.
Conclusion
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HONG ZHOU whose telephone number is (571)270-5372. The examiner can normally be reached 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BENJAMIN C LEE can be reached at 571-272-2963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HONG ZHOU/Primary Examiner, Art Unit 2629