Prosecution Insights
Last updated: April 19, 2026
Application No. 18/486,729

WEARABLE DEVICE, METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM PROVIDING GRAPHIC REGION

Non-Final OA: §103, §112
Filed: Oct 13, 2023
Examiner: AMIN, JWALANT B
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 79% (500 granted / 631 resolved), +17.2% vs TC avg (above average)
Interview Lift: +15.3% (strong), comparing allowance on resolved cases with vs. without an interview
Typical Timeline: 2y 9m avg prosecution, 14 currently pending
Career History: 645 total applications across all art units
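
The career numbers above can be sanity-checked directly from the displayed counts. Below is a minimal sketch of that arithmetic; it assumes Career Allow Rate is simply granted divided by resolved and that the Tech Center average is whatever the +17.2% delta implies. The variable names are illustrative, not the report's actual methodology.

    # Minimal sketch: recompute the examiner career metrics shown above from the
    # raw counts. Assumes "Career Allow Rate" = granted / resolved; the implied
    # Tech Center average is backed out of the +17.2% delta. Illustrative only.

    granted = 500                     # career grants (card above)
    resolved = 631                    # career resolved cases

    allow_rate = granted / resolved   # 0.7924 -> displayed as "79%"
    delta_vs_tc = 0.172               # "+17.2% vs TC avg" (from the card)
    implied_tc_avg = allow_rate - delta_vs_tc   # ~0.620, i.e. roughly a 62% TC average

    print(f"Career allow rate:  {allow_rate:.1%}")
    print(f"Implied TC average: {implied_tc_avg:.1%}")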

Statute-Specific Performance

§101: 13.4% (-26.6% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Tech Center average estimates shown for comparison • Based on career data from 631 resolved cases
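
The report does not define the statistic behind each statute-level percentage, but the deltas behave like plain percentage-point differences against a Tech Center average. A minimal sketch of that arithmetic, with illustrative names only:

    # Minimal sketch: back out the implied Tech Center averages from the
    # statute-specific figures above, assuming each "vs TC avg" value is a simple
    # percentage-point difference (examiner rate minus TC average). Illustrative
    # only; the underlying statistic is not defined in the report.

    examiner_rate = {"§101": 13.4, "§103": 56.8, "§102": 7.5, "§112": 10.8}   # percent
    delta_vs_tc   = {"§101": -26.6, "§103": 16.8, "§102": -32.5, "§112": -29.2}

    for statute, rate in examiner_rate.items():
        implied_tc_avg = rate - delta_vs_tc[statute]
        print(f"{statute}: examiner {rate:.1f}%, implied TC avg {implied_tc_avg:.1f}%")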

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement
The information disclosure statement filed 10/13/2023 fails to comply with 37 CFR 1.98(a)(3)(i) because it does not include a concise explanation of the relevance, as it is presently understood by the individual designated in 37 CFR 1.56(c) most knowledgeable about the content of the information, of each reference listed that is not in the English language. It has been placed in the application file, but the information referred to therein (stroked-out in the IDS) has not been considered.

Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Regarding claim 8, the limitation "display, via the display, a portion of the region and a portion of the graphic region by stopping to display the graphic region" on lines 5-6 is considered indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. It is not clear how displaying a portion of the graphic region on the display is possible by stopping to display the graphic region. Paragraph [0153] of the specification recites "… the processor 210 may display a portion (e.g., a portion of the region 1102 to the region 1105) of the region and a portion (e.g., a portion of the first graphic region 1131 to the fourth graphic region 1134) of the graphic region through the display 220, by stopping to display the at least a portion of the graphic region." However, this is not what is being claimed. Therefore, the claim is considered indefinite.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1-2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haddick et al. (US 2016/0048023, hereinafter Haddick), in view of Kobayashi (US 2015/0168729), in view of Ogawa et al. (US 2022/0202357, hereinafter Ogawa), and further in view of Laurent et al. (US 11699266, hereinafter Laurent). Regarding claim 1, Haddick teaches a wearable device (HWC 102, fig 1; [0101]: head-worn computing “HWC” that mimics the appearance of head-worn glasses or sunglasses; [0105]: The HWC 102 is a computing platform intended to be worn on a person's head) comprising: a display (head-worn see-through display, [0326]) arranged with respect to eyes of a user wearing the wearable device ([0101]: The glasses may be a fully developed computing platform, such as including computer displays presented in each of the lenses of the glasses to the eyes of the user; [0102]: HWC involves more than just placing a computing system on a person's head. 
The system may need to be designed as a lightweight, compact and fully functional computer display, such as wherein the computer display includes a high resolution digital display that provides a high level of emersion comprised of the displayed digital content and the see-through view of the environmental surroundings; [0105]: In situations where the HWC 102 has integrated computer displays the displays may be configured as see-through displays such that the digital imagery can be overlaid with respect to the user's view of the environment 114); a camera comprising at least one lens that faces a direction corresponding to a direction in which the eyes faces ([0103]: HWC 102 may interpret gestures 116 (e.g captured from forward, downward, upward, rearward facing sensors such as camera(s), range finders, IR sensors, etc.); [0106]: sensors such as a camera, hyper-spectral camera; [0243]: HWC 102 forward facing camera image capture processing, etc.); [0234]: an image captured by camera on the HWC 102 that images the surrounding environment in front of the wearer); a processor ([0106]: The HWC 102 may also have a number of integrated computing facilities, such as an integrated processor); and memory storing instructions that, when executed by the processor (a computing system such as HWC 102 will inherently include a memory to store instructions and data to be executed by the processor; [0101]: The glasses may be a fully developed computing platform), cause the wearable device to: identify, (a pre-selected physical location where the content is scheduled to be displayed to a person is identified from the personal information related to the person; [0307]: content is presented in a FOV of a HWC 102 when the HWC 102 is at a physical location that was selected based on personal information particular to the wearer of the HWC 102 … Personal information relating to the person may be stored such that it can be retrieved during a process of determining at what physical location in the world certain digital content should be presented to the person; [0308]: a method of presenting digital content in a FOV of a HWC 102 may include identifying that the HWC 102 has arrived at a physical location, wherein the physical location is pre-determined based on personal information relating to the person wearing the HWC, and presenting the digital content in relation to an attribute in the surroundings where the attribute was pre-selected based on the personal information. The personal information may relate to personal attributes, demographics, behaviors, prior visited locations, stored personal locations, preferred locations, travel habits, etc.; [0310]: FIG. 67 illustrates a person entering a location proximate to his place of work 6702. This location has been pre-selected 6704 as a physical location for the presentation of digital content in the HWC 102 based on stored personal information 6708; [0328]: There are situations where content is scheduled to be presented to the wearer of a HWC 102 when the wearer enters a region or physical location or looks towards a physical location), based on the wearable device being positioned in the place associated with the schedule ([0310]: FIG. 67 illustrates a person entering a location proximate to his place of work 6702. 
This location has been pre-selected 6704 as a physical location for the presentation of digital content in the HWC 102 based on stored personal information 6708 … once the person arrives at the physical location that was pre-selected based on the person's personal information, the content may be displayed in the FOV of the HWC 102 when the HWC 102 is aligned with a predetermined direction. So, for example, if the person is standing in a hallway and then looks north the content may be displayed; [0314]: The movements may be traced by tracing GPS movements, IMU movements, or other sensor feedback from the HWC 102 or other device, for example), identify a direction of the wearable device (sight heading of the HWC 102) with respect to a region in the place (wall/painting/picture/television/doorway/hallway etc. are the priority placements of the physical location for the virtual placement of the digital content) to which a graphic region (region where the virtual world-locked position of the digital content is set) for the schedule is set (fig. 67; [0248]: the heading that the user is looking along (generally referred to herein as “sight heading” or “sight vector”); [0257]: sight vector or heading (i.e. the direction of the wearer's sight direction); [0289]: the process may involve setting a pre-determined eye/sight heading from a pre-determined geospatial location and using them as triggers. In the event that a head worn computer enters the geospatial location and an eye/sight heading associated with the head worn computer aligns with the pre-determined eye/sight heading, the system may collect the fact that there was an apparent alignment and/or the system may record information identifying how long the eye/sight heading remains substantially aligned with the pre-determined eye/sight heading to form a persistence statistic; [0310]: A virtual marker may also be set based on the person's physical location and the sight heading of the HWC 102 (e.g. as determined by an eCompass on the HWC). For example, once the person arrives at the physical location that was pre-selected based on the person's personal information, the content may be displayed in the FOV of the HWC 102 when the HWC 102 is aligned with a predetermined direction. So, for example, if the person is standing in a hallway and then looks north the content may be displayed. In embodiments, the HWC 102 identifies a physically present attribute in the surroundings and then associates the content with the attribute such that the content appears locked to the attribute from the person's perspective. For example, the person, once at the physical location, looks north and then the HWC 102 performs an analysis of the surroundings in the north direction (e.g. by imaging the surroundings with an onboard camera) to select a physical attribute to be used as a virtual marker. In embodiments, the physical marker may be pre-determined. For example, the doorway in a particular hallway may be presented as the object to key off of when setting the virtual world-locked position of the digital content. In embodiments, the physical attribute is selected based on a pre-determined criteria. 
For example, the system may have a list of priority placements such as being proximate a painting, picture, television, doorway, blank wall, etc., and the head-worn computer may review the physical location and select one of the priority placements for the virtual placement of the digital content), based on identifying that the direction of the camera (sight heading of the HWC or the direction that the person is looking along) corresponds to a first direction (pre-determined direction such as sight vector A 4402 in fig. 44 from where the object in the environment is visible) in which the camera faces the region (sight heading of the HWC is aligned with a predetermined direction such as a user looking north while standing in a particular hallway of his office location), display, via the display, at least portion of the graphic region (region where the virtual world-locked position of the digital content is set) on at least portion of the region ([0260]: FIG. 44 also illustrates an object in the environment 4408 at a position relative to the sight vectors A 4402 and B 4404. When the person is looking along sight vector A 4402, the environment object 4408 can be seen through the field of view A 4414 at position 4412. As illustrated, sight heading aligned content is presented as TEXT in proximity with the environment object 4412… When the sight vector of the person is sight vector B 4404 the environmental object 4408 is not seen in the field of view B 4420. As a result, the sight aligned content 4410 is not presented in field of view B 4420; [0289]: the process may involve setting a pre-determined eye/sight heading from a pre-determined geospatial location and using them as triggers. In the event that a head worn computer enters the geospatial location and an eye/sight heading associated with the head worn computer aligns with the pre-determined eye/sight heading, the system may collect the fact that there was an apparent alignment and/or the system may record information identifying how long the eye/sight heading remains substantially aligned with the pre-determined eye/sight heading to form a persistence statistic; [0310]: For example, a work wall 6714 proximate to the geo-spatial location of work 6702 may be identified for placement of the content presentation 6712 to be viewed within the FOV of the HWC 6710. The wall may then be used as a virtual marker or a virtual marker or physical marker may be identified on the wall such that the HWC 102 identifies the marker and then presents the content proximate the marker. A virtual marker may be based on physical markers, objects, lines, figures, etc. For example, the intersection of a top edge and side edge of a doorway at the place of work may be used as a virtual marker and the digital content in the FOV of the HWC 102 may be presented proximate the intersection. A virtual marker may also be established at some point distant from an object (e.g. a foot from the intersection). A virtual marker may also be set based on the person's physical location and the sight heading of the HWC 102 (e.g. as determined by an eCompass on the HWC). For example, once the person arrives at the physical location that was pre-selected based on the person's personal information, the content may be displayed in the FOV of the HWC 102 when the HWC 102 is aligned with a predetermined direction. So, for example, if the person is standing in a hallway and then looks north the content may be displayed. 
In embodiments, the HWC 102 identifies a physically present attribute in the surroundings and then associates the content with the attribute such that the content appears locked to the attribute from the person's perspective. For example, the person, once at the physical location, looks north and then the HWC 102 performs an analysis of the surroundings in the north direction (e.g. by imaging the surroundings with an onboard camera) to select a physical attribute to be used as a virtual marker. In embodiments, the physical marker may be pre-determined. For example, the doorway in a particular hallway may be presented as the object to key off of when setting the virtual world-locked position of the digital content. In embodiments, the physical attribute is selected based on a pre-determined criteria. For example, the system may have a list of priority placements such as being proximate a painting, picture, television, doorway, blank wall, etc., and the head-worn computer may review the physical location and select one of the priority placements for the virtual placement of the digital content; [0318]: the content may be ready for presentation once the person has reached the physical location identified but the presentation may be conditioned on the person looking in a particular direction. The direction may be indicative of the person's eye heading or sight heading; [0322]: the method further includes presenting the digital content when the head-worn computer is aligned in a pre-selected direction. For example, the sight heading, as described herein elsewhere, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented. In embodiments, the step of presenting further includes presenting the digital content when an eye of the user is predicted to be aligned in a pre-selected direction. For example, the user's eye heading, as described herein elsewhere, such as through the use of eye imaging, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented); and identifying that the direction (sight heading of the HWC or the direction that the person is looking along) corresponds to a second direction (direction such as sight vector B 4404 in fig. 44 from where the object in the environment is not visible) different from the first direction (direction indicated by sight vector B is different from the direction indicated by sight vector A; when the user is at a particular location where the digital content is placed, the digital content is not displayed to the user when the user’s sight heading does not align with the pre-determined direction set for viewing the digital content from that particular location, and therefore when the user is looking in the direction of sight vector B, the digital content is not displayed to the user; [0260]: FIG. 44 also illustrates an object in the environment 4408 at a position relative to the sight vectors A 4402 and B 4404. When the person is looking along sight vector A 4402, the environment object 4408 can be seen through the field of view A 4414 at position 4412. As illustrated, sight heading aligned content is presented as TEXT in proximity with the environment object 4412… When the sight vector of the person is sight vector B 4404 the environmental object 4408 is not seen in the field of view B 4420. 
As a result, the sight aligned content 4410 is not presented in field of view B 4420; [0310]: For example, a work wall 6714 proximate to the geo-spatial location of work 6702 may be identified for placement of the content presentation 6712 to be viewed within the FOV of the HWC 6710. The wall may then be used as a virtual marker or a virtual marker or physical marker may be identified on the wall such that the HWC 102 identifies the marker and then presents the content proximate the marker. A virtual marker may be based on physical markers, objects, lines, figures, etc. For example, the intersection of a top edge and side edge of a doorway at the place of work may be used as a virtual marker and the digital content in the FOV of the HWC 102 may be presented proximate the intersection. A virtual marker may also be established at some point distant from an object (e.g. a foot from the intersection). A virtual marker may also be set based on the person's physical location and the sight heading of the HWC 102 (e.g. as determined by an eCompass on the HWC). For example, once the person arrives at the physical location that was pre-selected based on the person's personal information, the content may be displayed in the FOV of the HWC 102 when the HWC 102 is aligned with a predetermined direction. So, for example, if the person is standing in a hallway and then looks north the content may be displayed. In embodiments, the HWC 102 identifies a physically present attribute in the surroundings and then associates the content with the attribute such that the content appears locked to the attribute from the person's perspective. For example, the person, once at the physical location, looks north and then the HWC 102 performs an analysis of the surroundings in the north direction (e.g. by imaging the surroundings with an onboard camera) to select a physical attribute to be used as a virtual marker. In embodiments, the physical marker may be pre-determined. For example, the doorway in a particular hallway may be presented as the object to key off of when setting the virtual world-locked position of the digital content. In embodiments, the physical attribute is selected based on a pre-determined criteria. For example, the system may have a list of priority placements such as being proximate a painting, picture, television, doorway, blank wall, etc., and the head-worn computer may review the physical location and select one of the priority placements for the virtual placement of the digital content; [0318]: the content may be ready for presentation once the person has reached the physical location identified but the presentation may be conditioned on the person looking in a particular direction. The direction may be indicative of the person's eye heading or sight heading; [0322]: the method further includes presenting the digital content when the head-worn computer is aligned in a pre-selected direction. For example, the sight heading, as described herein elsewhere, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented. In embodiments, the step of presenting further includes presenting the digital content when an eye of the user is predicted to be aligned in a pre-selected direction. For example, the user's eye heading, as described herein elsewhere, such as through the use of eye imaging, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented). 
Haddick does not explicitly teach the camera direction is also the direction of the wearer’s sight direction; identify, based on a schedule, a place associated with the schedule; and based on identifying that the direction corresponds to a second direction different from the first direction, display, via the display, information for informing to change a direction of the camera. Kobayashi teaches the camera direction is also the direction of the wearer’s sight direction (camera set in a particular imaging direction captures images in the line-of-sight direction of the user wearing the head mounted display, and therefore the camera direction is functionally analogous to the line-of-sight direction of the user wearing the head mounted display comprising the camera; [0009]: , a camera that can perform imaging in a line-of-sight direction of the user; [0039]: The camera 61 is provided in the end part ER of the image display unit 20 when the user wears the image display unit 20. The camera 61 is set in an imaging direction in the anterior direction of the image display unit 20, in other words, a line-of-sight direction of the user when the user wears the head mounted display 100, images an outside scenery (a scenery of the outside) in front of the user in the imaging direction, and acquires an outside scenery image. In this case, when the user wearing the head mounted display 100 moves the head vertically and horizontally, the line-of-sight direction of the user changes according to the motion and the imaging direction of the camera 61 also changes. The camera 61 is the so-called visible light camera, and includes an image sensing device such as a CCD (Charge Coupled Device), a CMOS (Complementary Metal-Oxide Semiconductor), or the like. The outside scenery image acquired by the camera 61 is an image representing a shape of an object from visible light radiated from the object, and the imaging data is output to a CPU 140, which will be described later, and used for virtual image formation processing, which will be described later. The camera 61 in the embodiment is a monocular camera, or may be a stereo camera. In addition, it is only necessary that the camera 61 may perform imaging in the line-of-sight direction of the user). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Kobayashi’s knowledge of setting a camera direction to align with the line-of-sight direction of the user as taught and modify the system of Haddick because in such a head mounted display device with camera, the feeling of strangeness when the imaged image of the camera is provided as the virtual image may be relaxed and the convenience may be improved ([0009]). Ogawa teaches to identify, based on a schedule, a place associated with the schedule ([0239]: In Step S1105, the terminal device 10 acquires information of the current location of the user using the location information sensor 150 or the like. In addition, in a case in which information of places is input to each schedule in information of schedules of a user, the terminal device 10 may identify a current location of the user at a time of the schedule on the basis of the information of places associated with the schedule). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Ogawa’s knowledge of identifying a place associated with a user’s schedule as taught and modifying the system of Haddick and Kobayashi because such a system notifies encourages the user at his home to follow his/her sleep schedule ([0138]). Laurent teaches based on identifying that the direction corresponds to a second direction (current viewing direction that is not an optimal viewing direction is functionally analogous to direction corresponding to a second direction; claim 1: current viewpoint has a current location and a current viewing direction) different from the first direction (optimal viewpoint/subsequent viewing direction; claim 1: the subsequent viewpoint has the subsequent location and a subsequent viewing direction within the virtual scene and wherein the subsequent location is different from the current location of the current viewpoint), display, via the display, information (visual indicator) for informing to change a direction of the camera (fig. 7; col. 3 lines 2-5: the notion of optimal viewpoint (OV) comprises a location and direction (orientation) in which to direct a user's attention. In various embodiments the OV can coincide with the ROI; col. 5 lines 20-53: The renderer knows the current pose and orientation of a user in the rendered scene using techniques known in the art. Such techniques will not be described herein. Such information enables a determination of a path a user should follow to reach the OV and a direction in which a user should look to view the ROI. A user can be alerted to look in a particular direction or more particularly navigation information can be directed to a user using a visual indicator such as at least one or a combination of the following: a compass. a bar located at the edge of the screen which moves towards the direction to follow. in a case having a scene in which the ROIs are identified by object IDs, it is possible to use a miniature of the asset or group of assets representing the ROI. footprint symbols showing one or more path(s) to follow (to reach optimal viewpoint location for ROI(s)), in which a color pattern linked to the type(s) of objects of interests to which the OV is related. For example, FIG. 7 depicts a portion of scene of content including a bar at the edge of a screen to indicate to a user in which direction the user should look/navigate the scene in accordance with an embodiment of the present principles. More specifically, in FIG. 7, the bar at the bottom left edge of the screen indicates to the user to follow the bottom left direction. Although in the embodiment of FIG. 7 the bar is depicted as being positioned in the bottom left of the content directing the user to look in the left direction, in alternate embodiments of the present principles a user's attention can be directed toward any portion of the video content and in any direction of the video content using a visual indicator of the present principles; col. 6 lines 64-67: At step 904, a visual indicator indicating a direction in which to move in the video content to cause the display of the region of interest is displayed in a portion of the video content currently being displayed). 
The combination of Haddick, Kobayashi and Ogawa contains a “base” system of displaying content at a location associated with a user’s schedule based on the user viewing the content from a predetermined direction which the claimed invention can be seen as an “improvement” in that displaying a message to the user to change the viewing direction when the user’s viewing direction is not the predetermined direction. Laurent contains a “comparable” system of displaying a visual indicator to the user to change the viewing direction to a subsequent optimal viewing direction (fig. 7, col. 3 lines 2-5, col. 5 lines 20-53, col. 6 lines 64-67 and claim 1) that has been improved in the same way as the claimed invention. Laurent’s known “improvement” could have been applied in the same way to the “base” system of the combination of Haddick, Kobayashi and Ogawa and the results would have been predicted and resulted in displaying a message to the user at a location where the content can be viewed from a pre-determined viewing direction to change the viewing direction as directed by the visual indicators. Furthermore, both Laurent and the combination of Haddick, Kobayashi and Ogawa uses and discloses similar system functionality (i.e. Head Mounted Displays to display different content based on viewing direction of the user which is also in a related field of endeavor) so that the combination is more easily implemented. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention. Regarding claim 2, the combination of Haddick, Kobayashi, Ogawa and Laurent teaches the wearable device of claim 1, wherein the instructions further cause, when executed by the processor, the wearable device to: display, via the display, information for informing a movement to the place, based on the wearable device being positioned outside of the place associated with the schedule (Ogawa – [0080]: The notification control unit 194 performs a process of presenting information to a user. The notification control unit 194 performs a process of causing the display 132 to display a display image; Ogawa - [0239]: In Step S1105, the terminal device 10 acquires information of the current location of the user using the location information sensor 150 or the like. In addition, in a case in which information of places is input to each schedule in information of schedules of a user, the terminal device 10 may identify a current location of the user at a time of the schedule on the basis of the information of places associated with the schedule; Ogawa - [0240]: The terminal device 10 identifies a required movement time that is necessary for the user to move from the current location of the user to the place at which sleep is taken on the basis information of the current location of the user and the information of the place at which the user takes sleep. For example, the terminal device 10 may identify a required time for movement from the current location of the user to the place at which sleep is taken on the basis of route information of a public transportation organization or the like, information of congestion statuses of roads, and the like; Ogawa - [0241]: The terminal device 10 determines a timing at which a notification is given to the user on the basis of the required time for movement such that the user can perform an action for taking sleep (going to bed or the like) up to the planned sleep time set by the user. 
The terminal device 10 determines a timing at which a notification is given to the user by going back a time corresponding to the required time for movement from the planned sleep time and setting a predetermined margin. For example, in a case in which the planned sleep time is “23:00,” and the required time for movement is “55 minutes,” the terminal device 10 sets “35 minutes” as a margin and, for example, determines a timing at which a notification is given to the user as “21:30.” In addition, the terminal device 10 notifies the user of a time at which the user needs to start movement to the place (his or her home or the like) at which the user takes sleep (for example, a notification of “Let's depart at 21:30 from the current location such that sleep can be taken at the planned sleep time of 23:00”) such that the user can achieve the object of sleep (can take sleep at the planned sleep time)). Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haddick, in view of Kobayashi, in view of Ogawa, in view of Laurent, and further in view of Berliner et al. (US 2023/0139626, hereinafter Berliner). Regarding claim 3, the combination of Haddick, Kobayashi, Ogawa and Laurent describes the wearable device of claim 1, wherein the instructions cause, when executed by the processor, the wearable device to display, based on the direction (Haddick - sight heading of the HWC or the direction that the person is looking along) corresponding to the first direction (Haddick - pre-determined direction such as sight vector A 4402 in fig. 44 from where the object in the environment is visible), the at least portion of the graphic region (Haddick - region where the virtual world-locked position of the digital content is set; Haddick - [0310]: For example, a work wall 6714 proximate to the geo-spatial location of work 6702 may be identified for placement of the content presentation 6712 to be viewed within the FOV of the HWC 6710. The wall may then be used as a virtual marker or a virtual marker or physical marker may be identified on the wall such that the HWC 102 identifies the marker and then presents the content proximate the marker. A virtual marker may be based on physical markers, objects, lines, figures, etc. For example, the intersection of a top edge and side edge of a doorway at the place of work may be used as a virtual marker and the digital content in the FOV of the HWC 102 may be presented proximate the intersection. A virtual marker may also be established at some point distant from an object (e.g. a foot from the intersection). A virtual marker may also be set based on the person's physical location and the sight heading of the HWC 102 (e.g. as determined by an eCompass on the HWC). For example, once the person arrives at the physical location that was pre-selected based on the person's personal information, the content may be displayed in the FOV of the HWC 102 when the HWC 102 is aligned with a predetermined direction. So, for example, if the person is standing in a hallway and then looks north the content may be displayed. In embodiments, the HWC 102 identifies a physically present attribute in the surroundings and then associates the content with the attribute such that the content appears locked to the attribute from the person's perspective. For example, the person, once at the physical location, looks north and then the HWC 102 performs an analysis of the surroundings in the north direction (e.g. 
by imaging the surroundings with an onboard camera) to select a physical attribute to be used as a virtual marker. In embodiments, the physical marker may be pre-determined. For example, the doorway in a particular hallway may be presented as the object to key off of when setting the virtual world-locked position of the digital content. In embodiments, the physical attribute is selected based on a pre-determined criteria. For example, the system may have a list of priority placements such as being proximate a painting, picture, television, doorway, blank wall, etc., and the head-worn computer may review the physical location and select one of the priority placements for the virtual placement of the digital content; Haddick - [0318]: the content may be ready for presentation once the person has reached the physical location identified but the presentation may be conditioned on the person looking in a particular direction. The direction may be indicative of the person's eye heading or sight heading; Haddick - [0322]: the method further includes presenting the digital content when the head-worn computer is aligned in a pre-selected direction. For example, the sight heading, as described herein elsewhere, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented. In embodiments, the step of presenting further includes presenting the digital content when an eye of the user is predicted to be aligned in a pre-selected direction. For example, the user's eye heading, as described herein elsewhere, such as through the use of eye imaging, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented). The combination of Haddick, Kobayashi, Ogawa and Laurent does not explicitly teach expanding the at least portion of the graphic region from a portion of the region spaced apart from the wearable device to another portion of the region adjacent to the wearable device. Berliner teaches expanding the at least portion of the graphic region from a portion of the region (a first display region on the blank wall) spaced apart from the wearable device to another portion of the region (a second display region on the window adjacent to the wall) adjacent to the wearable device (window is positioned adjacent to the wall that is facing the wearable extended reality appliance; [0165]: a blank white wall facing the wearable extended reality appliance may be identified as a first display region (e.g., requiring an average duty cycle to display content), and a window positioned adjacent to the wall may be identified as a second display region (e.g., requiring a higher duty cycle to display content to overcome daylight). As another example, a desktop facing the wearable extended reality appliance may be identified as a first display region, and a ceiling may be identified as a second display region). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Berliner’s knowledge of expanding the virtual display regions from a first region on the wall to a second region on a window adjacent to the wall as taught and modify the system of Haddick, Kobayashi, Ogawa and Laurent because such a system involves live direct or indirect view of a physical real-world environment that is enhanced with virtual computer-generated perceptual information, such as virtual objects that the user may interact with ([0075]). 
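
For readers tracing the claim 1 and claim 2 mappings above across Haddick, Kobayashi, Ogawa, and Laurent, here is a minimal, purely illustrative sketch of the claimed control flow (schedule-based place, camera-direction check, and conditional display or redirection prompt). Every identifier is hypothetical; none of this code comes from the application or from any cited reference.

    from dataclasses import dataclass

    # Purely illustrative sketch of the claim 1 / claim 2 control flow discussed
    # in the rejection above; all identifiers are hypothetical and nothing here
    # is taken from the application or the cited references.

    @dataclass
    class Schedule:
        place: str            # place associated with the schedule (claim 1)
        first_direction: str  # direction in which the camera faces the region (claim 1)

    def update_display(current_place: str, camera_direction: str, s: Schedule) -> str:
        if current_place != s.place:
            # claim 2: inform the user to move to the place associated with the schedule
            return f"guide: move to {s.place}"
        if camera_direction == s.first_direction:
            # claim 1: camera faces the region, so display the graphic region on the region
            return "display: graphic region on the region"
        # claim 1: a second direction different from the first, so inform to change direction
        return "guide: change the camera direction toward the region"

    print(update_display("office hallway", "north", Schedule("office hallway", "north")))

The returned strings merely stand in for the AR rendering and guidance behaviors recited in the claims.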
Claim(s) 4-5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haddick, in view of Kobayashi, in view of Ogawa, in view of Laurent, and further in view of Kang et al. (US 2020/0020334, hereinafter Kang). Regarding claim 4, the combination of Haddick, Kobayashi, Ogawa and Laurent describes the wearable device of claim 1, wherein the instructions further cause, when executed by the processor, the wearable device to display, based on the direction (Haddick - sight heading of the HWC or the direction that the person is looking along) corresponding to the first direction (Haddick - pre-determined direction such as sight vector A 4402 in fig. 44 from where the object in the environment is visible), the at least portion of the graphic region (Haddick - region where the virtual world-locked position of the digital content is set; Haddick - [0310]: For example, a work wall 6714 proximate to the geo-spatial location of work 6702 may be identified for placement of the content presentation 6712 to be viewed within the FOV of the HWC 6710. The wall may then be used as a virtual marker or a virtual marker or physical marker may be identified on the wall such that the HWC 102 identifies the marker and then presents the content proximate the marker. A virtual marker may be based on physical markers, objects, lines, figures, etc. For example, the intersection of a top edge and side edge of a doorway at the place of work may be used as a virtual marker and the digital content in the FOV of the HWC 102 may be presented proximate the intersection. A virtual marker may also be established at some point distant from an object (e.g. a foot from the intersection). A virtual marker may also be set based on the person's physical location and the sight heading of the HWC 102 (e.g. as determined by an eCompass on the HWC). For example, once the person arrives at the physical location that was pre-selected based on the person's personal information, the content may be displayed in the FOV of the HWC 102 when the HWC 102 is aligned with a predetermined direction. So, for example, if the person is standing in a hallway and then looks north the content may be displayed. In embodiments, the HWC 102 identifies a physically present attribute in the surroundings and then associates the content with the attribute such that the content appears locked to the attribute from the person's perspective. For example, the person, once at the physical location, looks north and then the HWC 102 performs an analysis of the surroundings in the north direction (e.g. by imaging the surroundings with an onboard camera) to select a physical attribute to be used as a virtual marker. In embodiments, the physical marker may be pre-determined. For example, the doorway in a particular hallway may be presented as the object to key off of when setting the virtual world-locked position of the digital content. In embodiments, the physical attribute is selected based on a pre-determined criteria. For example, the system may have a list of priority placements such as being proximate a painting, picture, television, doorway, blank wall, etc., and the head-worn computer may review the physical location and select one of the priority placements for the virtual placement of the digital content; Haddick - [0318]: the content may be ready for presentation once the person has reached the physical location identified but the presentation may be conditioned on the person looking in a particular direction. 
The direction may be indicative of the person's eye heading or sight heading; Haddick - [0322]: the method further includes presenting the digital content when the head-worn computer is aligned in a pre-selected direction. For example, the sight heading, as described herein elsewhere, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented. In embodiments, the step of presenting further includes presenting the digital content when an eye of the user is predicted to be aligned in a pre-selected direction. For example, the user's eye heading, as described herein elsewhere, such as through the use of eye imaging, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented). The combination of Haddick, Kobayashi, Ogawa and Laurent does not explicitly teach to display an execution screen of each of one or more software applications set for the schedule at a portion of the graphic region. Kang teaches to display an execution screen of each of one or more software applications set for the schedule at a portion of the graphic region ([0106]: Referring to FIG. 4A, an electronic device 101 (e.g., the processor 120 or 210) may execute a first application program including a first user interface in operation 410. First application program including a first user interface may mean that at least part of an execution screen of the first application program includes the first user interface; [0107]: For example, as shown in FIG. 5B, the electronic device 101 may display a second execution screen 510 including a text box 511 capable of displaying a text input by the user through a keyboard 512 (e.g., a virtual keyboard) displayed. The operations of FIG. 5B are described below in greater detail with reference to FIG. 4C.; [0113]: As shown on the right side of FIG. 5A, the electronic device 101 may display an execution screen 520 of the schedule management application and display the result of registering the schedule 522 of “Study” on the February 2 item 521. According to various embodiments of the present invention, the electronic device 101 may keep displaying the first execution screen 500 while executing the schedule management application on background and register the schedule included in the task). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Kang’s knowledge of displaying an execution screen of one or more software applications set for schedule at a graphic region as taught and modify the system of Haddick, Kobayashi, Ogawa and Laurent because such a system enhances the user experience by showing the user the schedules set for execution of the applications. 
Regarding claim 5, the combination of Haddick, Kobayashi, Ogawa, Laurent and Kang teaches the wearable device of claim 4, wherein the instructions further cause, when executed by the processor, the wearable device to cease displaying one or more execution screens of one or more other software applications (Kang - [0200]: The electronic device 101 may stop displaying the speech recognition application execution screen and display the prior screen), distinct from the one or more software applications (Kang - speech recognition application is different from schedule management application), based on the direction corresponding to the first direction (Haddick - region where the virtual world-locked position of the digital content is set; Haddick - [0310]: For example, a work wall 6714 proximate to the geo-spatial location of work 6702 may be identified for placement of the content presentation 6712 to be viewed within the FOV of the HWC 6710. The wall may then be used as a virtual marker or a virtual marker or physical marker may be identified on the wall such that the HWC 102 identifies the marker and then presents the content proximate the marker. A virtual marker may be based on physical markers, objects, lines, figures, etc. For example, the intersection of a top edge and side edge of a doorway at the place of work may be used as a virtual marker and the digital content in the FOV of the HWC 102 may be presented proximate the intersection. A virtual marker may also be established at some point distant from an object (e.g. a foot from the intersection). A virtual marker may also be set based on the person's physical location and the sight heading of the HWC 102 (e.g. as determined by an eCompass on the HWC). For example, once the person arrives at the physical location that was pre-selected based on the person's personal information, the content may be displayed in the FOV of the HWC 102 when the HWC 102 is aligned with a predetermined direction. So, for example, if the person is standing in a hallway and then looks north the content may be displayed. In embodiments, the HWC 102 identifies a physically present attribute in the surroundings and then associates the content with the attribute such that the content appears locked to the attribute from the person's perspective. For example, the person, once at the physical location, looks north and then the HWC 102 performs an analysis of the surroundings in the north direction (e.g. by imaging the surroundings with an onboard camera) to select a physical attribute to be used as a virtual marker. In embodiments, the physical marker may be pre-determined. For example, the doorway in a particular hallway may be presented as the object to key off of when setting the virtual world-locked position of the digital content. In embodiments, the physical attribute is selected based on a pre-determined criteria. For example, the system may have a list of priority placements such as being proximate a painting, picture, television, doorway, blank wall, etc., and the head-worn computer may review the physical location and select one of the priority placements for the virtual placement of the digital content; Haddick - [0318]: the content may be ready for presentation once the person has reached the physical location identified but the presentation may be conditioned on the person looking in a particular direction. 
The direction may be indicative of the person's eye heading or sight heading; Haddick - [0322]: the method further includes presenting the digital content when the head-worn computer is aligned in a pre-selected direction. For example, the sight heading, as described herein elsewhere, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented. In embodiments, the step of presenting further includes presenting the digital content when an eye of the user is predicted to be aligned in a pre-selected direction. For example, the user's eye heading, as described herein elsewhere, such as through the use of eye imaging, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented). Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haddick, in view of Kobayashi, in view of Ogawa, in view of Laurent, and further in view of Kacelenga (US 2020/0089321, hereinafter Kacelenga). Regarding claim 10, the combination of Haddick, Kobayashi, Ogawa and Laurent does not explicitly teach the wearable device of claim 1, wherein the instructions further cause, when executed by the processor, the wearable device to: obtain biometric data of the user, display, in a virtual reality environment, the at least portion of the graphic region, based on the direction corresponding to the first direction and the biometric data within reference range, and display, in a mixed reality environment, the at least portion of the graphic region, based on the direction corresponding to the first direction and the biometric data outside of the reference range. Kacelenga teaches to obtain biometric data of the user ([0043]: the physiological tracking system 220 HMD 102A may include a heartrate sensor 221 that monitors the user heart beat and, in some cases, heartrate variability, thus providing an indication of the user's stress, excitement, and/or mood. HMD 102A may also include a gravimetric skin response (GSR) sensor 222 utilized to monitor the user's sweat level, thus providing an indication of stress, fright and/or thermal stress by the user. The physiological tracking system 220 of HMD 102A may also include an electromyography (EMG) sensor 224 for measuring the user's muscle activity, which may be used for inferring degrees of emotional tension or levels of physical activity by the user. The physiological tracking system 220 may also utilize a body temperature sensor 223 that measures the user's temperature. In certain embodiments, one or more of the sensors of the physiological tracking system 220 may be additionally or alternatively incorporated in a device worn or held by the user, such as markers 106 of FIG. 1), display, in a virtual reality environment (virtual reality session), the at least portion of the graphic region, based on the direction corresponding to the first direction (direction of user’s head in real-time) and the biometric data within reference range (when the determined physiological state of a user is within a threshold range or below a threshold, the contents are displayed in a virtual reality session; [0032]: HMD 102A includes components configured to create and/or display an all-immersive virtual environment; and/or to overlay digitally-created content or images on a display, panel, or surface (e.g., an LCD panel, an OLED film, a projection surface, etc.) 
in place of and/or in addition to the user's natural perception of the real-world; [0040]: Gaze tracking system 212 may include an inward-facing projector configured to create a pattern of infrared (or near-infrared) light on the user's eyes, and an inward-facing camera 229 configured to take high-frame-rate images of the eyes and their reflection patterns, which are then used to calculate the user's eye's position and gaze point. In some cases, gaze detection or tracking system 212 may be configured to identify a direction, extent, and/or speed of movement of the user's eyes in real-time, during execution of an xR application; [0041]: In some cases, IMU system 213 may be configured to a detect a direction, extent, and/or speed of rotation (e.g., an angular speed) of the user's head in real-time, during execution of an xR application; [0082]: response to a determination of a stressed physiological state, a virtual reality session may be downgraded to a mixed reality session, or to an augmented reality session. Similarly, a user may be upgraded from an augmented reality session to a mixed reality session and eventually to a virtual reality session based on detection of a specific physiological states that indicate the user's comfort with the progression), and display, in a mixed reality environment (mixed reality or augmented reality session), the at least portion of the graphic region, based on the direction corresponding to the first direction (direction of user’s head in real-time) and the biometric data outside of the reference range (when the determined physiological state of a user is outside the threshold range or above the threshold, the contents are displayed in mixed or augmented reality session; [0082]: response to a determination of a stressed physiological state, a virtual reality session may be downgraded to a mixed reality session, or to an augmented reality session. Similarly, a user may be upgraded from an augmented reality session to a mixed reality session and eventually to a virtual reality session based on detection of a specific physiological states that indicate the user's comfort with the progression). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Kacelenga’s knowledge of using stressed physiological state of a user to select displaying in virtual reality or augmented/mixed reality environment as taught and modify the system of the combination of Haddick, Kobayashi, Ogawa and Laurent because such a system enhances the user experience by switching display sessions from virtual reality to augmented/mixed reality based detected physiological state of the user ([0073]). Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haddick, in view of Kobayashi, in view of Ogawa, in view of Laurent, and further in view of Sztuk et al. (US 2023/0071993, hereinafter Sztuk). Regarding claim 12, the combination of Haddick, Kobayashi, Ogawa and Laurent describes the wearable device of claim 1, wherein the instructions cause, when executed by the processor, the wearable device to display the at least portion of the graphic region (Haddick - region where the virtual world-locked position of the digital content is set) based on the direction (Haddick - sight heading of the HWC or the direction that the person is looking along) corresponding to the first direction (Haddick - pre-determined direction such as sight vector A 4402 in fig. 
44 from where the object in the environment is visible; Haddick - [0310]: For example, a work wall 6714 proximate to the geo-spatial location of work 6702 may be identified for placement of the content presentation 6712 to be viewed within the FOV of the HWC 6710. The wall may then be used as a virtual marker or a virtual marker or physical marker may be identified on the wall such that the HWC 102 identifies the marker and then presents the content proximate the marker. A virtual marker may be based on physical markers, objects, lines, figures, etc. For example, the intersection of a top edge and side edge of a doorway at the place of work may be used as a virtual marker and the digital content in the FOV of the HWC 102 may be presented proximate the intersection. A virtual marker may also be established at some point distant from an object (e.g. a foot from the intersection). A virtual marker may also be set based on the person's physical location and the sight heading of the HWC 102 (e.g. as determined by an eCompass on the HWC). For example, once the person arrives at the physical location that was pre-selected based on the person's personal information, the content may be displayed in the FOV of the HWC 102 when the HWC 102 is aligned with a predetermined direction. So, for example, if the person is standing in a hallway and then looks north the content may be displayed. In embodiments, the HWC 102 identifies a physically present attribute in the surroundings and then associates the content with the attribute such that the content appears locked to the attribute from the person's perspective. For example, the person, once at the physical location, looks north and then the HWC 102 performs an analysis of the surroundings in the north direction (e.g. by imaging the surroundings with an onboard camera) to select a physical attribute to be used as a virtual marker. In embodiments, the physical marker may be pre-determined. For example, the doorway in a particular hallway may be presented as the object to key off of when setting the virtual world-locked position of the digital content. In embodiments, the physical attribute is selected based on a pre-determined criteria. For example, the system may have a list of priority placements such as being proximate a painting, picture, television, doorway, blank wall, etc., and the head-worn computer may review the physical location and select one of the priority placements for the virtual placement of the digital content; Haddick - [0318]: the content may be ready for presentation once the person has reached the physical location identified but the presentation may be conditioned on the person looking in a particular direction. The direction may be indicative of the person's eye heading or sight heading; Haddick - [0322]: the method further includes presenting the digital content when the head-worn computer is aligned in a pre-selected direction. For example, the sight heading, as described herein elsewhere, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented. In embodiments, the step of presenting further includes presenting the digital content when an eye of the user is predicted to be aligned in a pre-selected direction. For example, the user's eye heading, as described herein elsewhere, such as through the use of eye imaging, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented). 
The combination of Haddick, Kobayashi, Ogawa and Laurent does not explicitly teach to obtain data indicating illuminance around the wearable device, and display the at least portion of the graphic region in a brightness identified based on the illuminance. Sztuk teaches to obtain data indicating illuminance around the wearable device ([0042]: Ambient light sensor 423 may include one or more photodetectors (e.g. photodiodes). Ambient light sensor 423 may include more than one photodetector with corresponding filters so that ambient light sensor 423 can measure the color as well as the intensity of scene light 456. Ambient light sensor 423 may include a red-green-blue (RGB)/infrared/monochrome camera sensor to generate high certainty measurements about the state of the ambient light environment. In some implementations, a world-facing image sensor of head mounted device 400 that is oriented to receive scene light 456 may function as an ambient light sensor. Ambient light sensor 423 is configured to generate an ambient light measurement 429; [0047]: In process block 510, an ambient light measurement is initiated with a photodetector (e.g. ambient light sensor 423) of the head mounted device. The ambient light measurement (e.g. ambient light measurement 429) may be initiated during a same time period as the eye data is captured), and display the at least portion of the graphic region in a brightness identified based on the illuminance ([0065]: In process block 615, a brightness of a display of the head mounted device is adjusted in response to the eye data and the ambient light measurement. In the illustration of FIG. 4, a brightness of display layer 440 may be adjusted in response to eye data and ambient light measurement 429; [0074]: In an implementation of process 600, adjusting the brightness of the display of the head mounted device in response to the eye data and the ambient light measurement includes adjusting the brightness of the display to a predetermined display brightness associated with aggregate eye data corresponding to the ambient light measurement). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Sztuk’s knowledge of using ambient light to adjust the brightness of the display of the head mounted device as taught and modify the system of Haddick, Kobayashi, Ogawa and Laurent because such a system enhances user experience by ensuring the user is more comfortable in viewing the display ([0020] and [0074]). Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haddick, in view of Kobayashi, in view of Ogawa, in view of Laurent, and further in view of Gopalakrishnan (US 2020/0335211). Regarding claim 14, the combination of Haddick, Kobayashi, Ogawa and Laurent teaches the wearable device of claim 1, wherein the instructions cause, when executed by the processor, the wearable device to: identify another schedule (Ogawa - information of places is input to each schedule in information of schedules of a user inherently implies that there are a plurality of different schedules, each associated with a location), distinct from the schedule, registered with respect to the region (place/location) through an account of the user (Ogawa - [0239]: In Step S1105, the terminal device 10 acquires information of the current location of the user using the location information sensor 150 or the like. 
In addition, in a case in which information of places is input to each schedule in information of schedules of a user, the terminal device 10 may identify a current location of the user at a time of the schedule on the basis of the information of places associated with the schedule). The combination of Haddick, Kobayashi, Ogawa and Laurent does not explicitly teach to identify another schedule, distinct from the schedule, while the at least portion of the graphic region is displayed via the display, and display, via the display, at least portion of another graphic region for the other schedule, on at least portion of the region, in response to the other schedule being identified. Gopalakrishnan teaches to identify another schedule (“Meeting at 5 pm”, fig. 12B), distinct from the schedule (“Call your wife at 3 pm”, fig. 12B), while the at least portion of the graphic region is displayed via the display (fig. 12B shows multiple schedules related to the user are displayed on the screen of the wearable device, wherein the schedule related to “Call your wife at 3 pm” is displayed in a first portion of the display screen; [0042]: A daily work management feature on this interface is used to schedule professional work activity with priority; [0046]: The work schedule can also be organized through this interface; [0111]: FIG. 12B shows a persona oriented psychological stress management application. The mobile apparatus's mini screen 136 displays queued work schedule with priority rating 163, real-time stress levels (Emotional Index meter) 164, and information on stress levels and stress management 165. The user initially marks several reference data points to train the smart apparatus for learning the persona-oriented stress levels. The real-time stress levels are generated through previously marked subjective data points. The reference data points are generated based on the analysis of biosensor and other vital information. Based on the reference points and real-time signals, the apparatus generates subjective psychological stress data 164. On recognizing the state of stress or anxiety, the application automatically guides the user to a stress management method. The push buttons 134-135, crown 133 and mini-touch display 136 are utilized as the means to mark the stress data points, to navigate through the work schedule and to operate the functionalities of the application), and display, via the display, at least portion of another graphic region for the other schedule, on at least portion of the region, in response to the other schedule being identified (fig. 12B shows multiple schedules related to the user are displayed on the screen of the wearable device, wherein the schedule related to “Meeting at 5 pm” is displayed in a different portion of the display screen; [0111]: FIG. 12B shows a persona oriented psychological stress management application. The mobile apparatus's mini screen 136 displays queued work schedule with priority rating 163, real-time stress levels (Emotional Index meter) 164, and information on stress levels and stress management 165. The user initially marks several reference data points to train the smart apparatus for learning the persona-oriented stress levels. The real-time stress levels are generated through previously marked subjective data points. The reference data points are generated based on the analysis of biosensor and other vital information. Based on the reference points and real-time signals, the apparatus generates subjective psychological stress data 164. 
On recognizing the state of stress or anxiety, the application automatically guides the user to a stress management method. The push buttons 134-135, crown 133 and mini-touch display 136 are utilized as the means to mark the stress data points, to navigate through the work schedule and to operate the functionalities of the application). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Gopalakrishnan’s knowledge of displaying different schedules in different regions of the display as taught and modify the system of the combination of Haddick, Kobayashi, Ogawa and Laurent because such a system enhances user experience by displaying multiple vital information related to a user’s schedule to make the user aware of the scheduled activities. Claim(s) 15-16 and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haddick, in view of Kobayashi, in view of Ogawa, in view of Laurent, and further in view of Jung et al. (US 2020/0258029, hereinafter Jung). Regarding claim 15, the combination of Haddick, Kobayashi, Ogawa and Laurent teaches the wearable device of claim 1, further comprising: a communication circuit (Haddick - [0103]: The HWC 102 may communicate with external user interfaces 104. The external user interfaces 104 may provide a physical user interface to take control instructions from a user of the HWC 102 and the external user interfaces 104 and the HWC 102 may communicate bi-directionally to affect the user's command and provide feedback to the external device 108; Haddick - [0106]: The HWC 102 may also have a number of integrated computing facilities, such as an integrated processor, integrated power management, communication structures (e.g. cell net, WiFi, Bluetooth, local area connections, mesh connections, remote connections (e.g. client server, etc.)), and the like), wherein the instructions further cause, when executed by the processor, the wearable device to: identify an electronic device (Haddick - other HWC’s 102) being positioned in the region (Haddick - identifying other HWC’s 102 in the proximity of a first HWC 102), via the camera or the communication circuit (Haddick - identifying two HWC’s 102 are looking at each other based on forward facing camera of the HWC 102 is functionally analogous to identify an electronic device positioned in the region using a camera; Haddick - [0243]: An aspect of the present invention relates to securely linking HWC 102 and securely sharing files, streams, etc. (referred to generally as file sharing) with other HWC's 102 and/or other computers. Eye imaging, position, and tracking described herein elsewhere may be used in connection with the secure linking and file sharing. For example, a first HWC 102 may only be permitted to securely link with another HWC 102 if the wearer of the other HWC 102 is verified as a known person of a certain security level. In embodiments, the security level may be a government determined security level, a known person, a known friend, etc. Eye imaging may be used to identify the other HWCs 102 that may be allowed for sharing. For example, GPS or other location technologies may be used to identify other HWC's 102 in the proximity of a first HWC 102 and those proximal HWC's may be sorted into ones verified as secure HWC's, as verified by eye imaging, for example, and ones not verified. 
The sorted information or portion thereof may be presented in the first HWC 102 such that the wearer of the first HWC 102 can select secure sharing partners. Other sensor information may be used in connection with the secure sharing process. For example, identifying that two HWC's 102 are looking at one another (e.g. through e-compass readings, or HWC 102 forward facing camera image capture processing, etc.) may indicate that the two would like to share files or otherwise link communications and this information may be used in connection with the eye imaging verification for the secure sharing process). The combination of Haddick, Kobayashi, Ogawa and Laurent does not explicitly teach to transmit, to the electronic device via the communication circuit, a signal for changing settings of the electronic device to settings for the schedule, based on the direction corresponding to the first direction. Jung teaches to transmit, to the electronic device (external electronic device 102 or 104, fig. 1) via the communication circuit (communication module 190, fig. 1), a signal for changing settings of the electronic device to settings for the schedule (transmit the schedule information to acquire options capable of changing the operation of the electronic device related to the schedule; fig. 16 step 1627: change device schedule), based on the direction corresponding to the first direction (fig. 1; fig. 16; [0057]: Referring to FIG. 1, the electronic device 101 in the network environment 100 may communicate with an electronic device 102 via a first network 198 (e.g., a short-range wireless communication network), with an electronic device 104 or a server 108 via a second network 199 (e.g., a long-range wireless communication network), or with the electronic device 104 via the server 108; [0012]: An electronic device according to various embodiments includes: a communication circuit; an output device; and at least one processor configured to be connected to the communication circuit and the output device, wherein the processor is configured to acquire schedule information associated with an operation of the electronic device from a user, to generate a schedule based on the schedule information, to transmit the schedule information to an external server using the communication circuit, to acquire options capable of changing the operation of the electronic device related to the schedule, to output the options using the output device, to select any one option of the options based on a user input, and to change the schedule based on schedule information of the selected option; [0076]: Commands or data may be transmitted or received between the electronic device 101 and the external electronic device 104 via the server 108 coupled with the second network 199. Each of the electronic devices 102 and 104 may be a device of a same type as, or a different type, from the electronic device 101; [0134]: the electronic device 101 may include a processor 120, a memory 130, and a communication module 190; [0144]: the communication circuit 540 of the external server 530 may be a communicator including a circuit for communication processing. According to an embodiment, the communication circuit 540 may transmit schedule information according to a user input to the electronic device 101 based on the control of the processor 550; [0253]: In operation 1605, the electronic device 101 may transmit the input schedule information to the external server 530 based on the first user input. 
According to an embodiment, the electronic device 101 may display the schedule information input according to the first user input on the user interface and may provide the same to the user; [0264]: In operation 1625, the electronic device 101 may transmit response information to the external server 530 based on the second user input. According to an embodiment, the electronic device 101 may transmit response information (e.g., an ACK signal) requesting to change (or apply) the device schedule according to the change option based on the second user input, to the external server 530 through the communication module 190; [0267]: In operation 1631, the external server 530 may perform a corresponding function on the schedule. According to an embodiment, the external server 530 may perform a function related to the schedule at an alarm and/or a control time point related to the specific schedule, based on the result of the scheduling (or the result of monitoring). For example, when the corresponding schedule is a device control based on the device schedule, the external server 530 may transmit control information to a device (e.g., the electronic device 101 or a central control device)(or device management application {e.g., smart thing application} of the corresponding electronic device) that can control the corresponding device. In another example, when the external server 530 can directly control the device, the external server 530 may transmit, to the corresponding device, a command related to the operation control (e.g., operation control according to a configured option) of the device. In another example, when the corresponding schedule is an alarm associated with a specific user schedule, the external server 530 may transmit, to the electronic device (e.g., the electronic device 101) of the corresponding user, control information (e.g., control information for generating (or output) an alarm {e.g., schedule information and/or alarm sound} associated with the schedule)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Jung’s knowledge of transmitting a device schedule change signal to an external device as taught and modify the system of the combination of Haddick, Kobayashi, Ogawa and Laurent because such a system improves user experience by parallelizing the device schedule while enabling the user schedule of the user even when a device schedule and a user schedule are in conflict ([0055]). Regarding claim 16, the combination of Haddick, Kobayashi, Ogawa and Laurent teaches the wearable device of claim 1, further comprising: a communication circuit (Haddick - [0103]: The HWC 102 may communicate with external user interfaces 104. The external user interfaces 104 may provide a physical user interface to take control instructions from a user of the HWC 102 and the external user interfaces 104 and the HWC 102 may communicate bi-directionally to affect the user's command and provide feedback to the external device 108; Haddick - [0106]: The HWC 102 may also have a number of integrated computing facilities, such as an integrated processor, integrated power management, communication structures (e.g. cell net, WiFi, Bluetooth, local area connections, mesh connections, remote connections (e.g. 
client server, etc.)), and the like), wherein the instructions further cause, when executed by the processor, the wearable device to: identify an electronic device (Haddick - other HWC’s 102) comprising a display (Haddick - [0101]: The glasses may be a fully developed computing platform, such as including computer displays presented in each of the lenses of the glasses to the eyes of the user. In embodiments, the lenses and displays may be configured to allow a person wearing the glasses to see the environment through the lenses while also seeing, simultaneously, digital imagery, which forms an overlaid image that is perceived by the person as a digitally augmented image of the environment, or augmented reality (“AR”); Haddick - [0102]: HWC involves more than just placing a computing system on a person's head. The system may need to be designed as a lightweight, compact and fully functional computer display, such as wherein the computer display includes a high resolution digital display that provides a high level of emersion comprised of the displayed digital content and the see-through view of the environmental surroundings; Haddick - [0105]: In situations where the HWC 102 has integrated computer displays the displays may be configured as see-through displays such that the digital imagery can be overlaid with respect to the user's view of the environment 114), positioned in the region (Haddick - identifying other HWC’s 102 in the proximity of a first HWC 102), via the camera or the communication circuit (Haddick - identifying two HWC’s 102 are looking at each other based on forward facing camera of the HWC 102 is functionally analogous to identify an electronic device positioned in the region using a camera; Haddick - [0243]: An aspect of the present invention relates to securely linking HWC 102 and securely sharing files, streams, etc. (referred to generally as file sharing) with other HWC's 102 and/or other computers. Eye imaging, position, and tracking described herein elsewhere may be used in connection with the secure linking and file sharing. For example, a first HWC 102 may only be permitted to securely link with another HWC 102 if the wearer of the other HWC 102 is verified as a known person of a certain security level. In embodiments, the security level may be a government determined security level, a known person, a known friend, etc. Eye imaging may be used to identify the other HWCs 102 that may be allowed for sharing. For example, GPS or other location technologies may be used to identify other HWC's 102 in the proximity of a first HWC 102 and those proximal HWC's may be sorted into ones verified as secure HWC's, as verified by eye imaging, for example, and ones not verified. The sorted information or portion thereof may be presented in the first HWC 102 such that the wearer of the first HWC 102 can select secure sharing partners. Other sensor information may be used in connection with the secure sharing process. For example, identifying that two HWC's 102 are looking at one another (e.g. through e-compass readings, or HWC 102 forward facing camera image capture processing, etc.) 
may indicate that the two would like to share files or otherwise link communications and this information may be used in connection with the eye imaging verification for the secure sharing process), and based on the direction (Haddick - sight heading of the HWC or the direction that the person is looking along) corresponding to the first direction (Haddick - pre-determined direction such as sight vector A 4402 in fig. 44 from where the object in the environment is visible; Haddick – [0310], [0318], [0322]): display, via the display, an execution screen of a first software application set for the schedule (Ogawa - fig. 12; Ogawa – [0248]: A screen example (A) illustrates an example of display according to (1) the technology for handling a user's operation on the terminal device 10 before falling asleep. As illustrated in the screen example (A), the terminal device 10 displays a notification 132T1, a notification 132T2, and a button 132T3 on the display 132; [0252]: A screen example (B) and a screen example (C) illustrate display examples according to (2) the technology for giving a notification for encouraging the user to take sleep at an appropriate timing and (3) the technology for encouraging the user to move such that the user can appropriately take sleep on the basis of information of the current location of the user), with the at least portion of the graphic region (Haddick - region where the virtual world-locked position of the digital content is set; Haddick - [0310]: For example, a work wall 6714 proximate to the geo-spatial location of work 6702 may be identified for placement of the content presentation 6712 to be viewed within the FOV of the HWC 6710. The wall may then be used as a virtual marker or a virtual marker or physical marker may be identified on the wall such that the HWC 102 identifies the marker and then presents the content proximate the marker. A virtual marker may be based on physical markers, objects, lines, figures, etc. For example, the intersection of a top edge and side edge of a doorway at the place of work may be used as a virtual marker and the digital content in the FOV of the HWC 102 may be presented proximate the intersection. A virtual marker may also be established at some point distant from an object (e.g. a foot from the intersection). A virtual marker may also be set based on the person's physical location and the sight heading of the HWC 102 (e.g. as determined by an eCompass on the HWC). For example, once the person arrives at the physical location that was pre-selected based on the person's personal information, the content may be displayed in the FOV of the HWC 102 when the HWC 102 is aligned with a predetermined direction. So, for example, if the person is standing in a hallway and then looks north the content may be displayed. In embodiments, the HWC 102 identifies a physically present attribute in the surroundings and then associates the content with the attribute such that the content appears locked to the attribute from the person's perspective. For example, the person, once at the physical location, looks north and then the HWC 102 performs an analysis of the surroundings in the north direction (e.g. by imaging the surroundings with an onboard camera) to select a physical attribute to be used as a virtual marker. In embodiments, the physical marker may be pre-determined. For example, the doorway in a particular hallway may be presented as the object to key off of when setting the virtual world-locked position of the digital content. 
In embodiments, the physical attribute is selected based on a pre-determined criteria. For example, the system may have a list of priority placements such as being proximate a painting, picture, television, doorway, blank wall, etc., and the head-worn computer may review the physical location and select one of the priority placements for the virtual placement of the digital content; Haddick - [0318]: the content may be ready for presentation once the person has reached the physical location identified but the presentation may be conditioned on the person looking in a particular direction. The direction may be indicative of the person's eye heading or sight heading; Haddick - [0322]: the method further includes presenting the digital content when the head-worn computer is aligned in a pre-selected direction. For example, the sight heading, as described herein elsewhere, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented. In embodiments, the step of presenting further includes presenting the digital content when an eye of the user is predicted to be aligned in a pre-selected direction. For example, the user's eye heading, as described herein elsewhere, such as through the use of eye imaging, may align with a pre-set vector or place seen from the geo-spatial location and then the content may be presented). The combination of Haddick, Kobayashi, Ogawa and Laurent does not explicitly teach to transmit, to the electronic device via the communication circuit, a signal for displaying an execution screen of a second software application set for the schedule via the display of the electronic device. Jung teaches to transmit, to the electronic device (external electronic device 102 or 104, fig. 1) via the communication circuit (communication module 190, fig. 1), a signal for displaying an execution screen of a second software application set for the schedule (screen displaying change device schedule 1780 as shown in fig. 17B) via the display of the electronic device (fig. 1; fig. 16; fig. 17B; [0057]: Referring to FIG. 1, the electronic device 101 in the network environment 100 may communicate with an electronic device 102 via a first network 198 (e.g., a short-range wireless communication network), with an electronic device 104 or a server 108 via a second network 199 (e.g., a long-range wireless communication network), or with the electronic device 104 via the server 108; [0012]: An electronic device according to various embodiments includes: a communication circuit; an output device; and at least one processor configured to be connected to the communication circuit and the output device, wherein the processor is configured to acquire schedule information associated with an operation of the electronic device from a user, to generate a schedule based on the schedule information, to transmit the schedule information to an external server using the communication circuit, to acquire options capable of changing the operation of the electronic device related to the schedule, to output the options using the output device, to select any one option of the options based on a user input, and to change the schedule based on schedule information of the selected option; [0076]: Commands or data may be transmitted or received between the electronic device 101 and the external electronic device 104 via the server 108 coupled with the second network 199. 
Each of the electronic devices 102 and 104 may be a device of a same type as, or a different type, from the electronic device 101; [0134]: the electronic device 101 may include a processor 120, a memory 130, and a communication module 190; [0144]: the communication circuit 540 of the external server 530 may be a communicator including a circuit for communication processing. According to an embodiment, the communication circuit 540 may transmit schedule information according to a user input to the electronic device 101 based on the control of the processor 550; [0253]: In operation 1605, the electronic device 101 may transmit the input schedule information to the external server 530 based on the first user input. According to an embodiment, the electronic device 101 may display the schedule information input according to the first user input on the user interface and may provide the same to the user; [0264]: In operation 1625, the electronic device 101 may transmit response information to the external server 530 based on the second user input. According to an embodiment, the electronic device 101 may transmit response information (e.g., an ACK signal) requesting to change (or apply) the device schedule according to the change option based on the second user input, to the external server 530 through the communication module 190; [0267]: In operation 1631, the external server 530 may perform a corresponding function on the schedule. According to an embodiment, the external server 530 may perform a function related to the schedule at an alarm and/or a control time point related to the specific schedule, based on the result of the scheduling (or the result of monitoring). For example, when the corresponding schedule is a device control based on the device schedule, the external server 530 may transmit control information to a device (e.g., the electronic device 101 or a central control device)(or device management application {e.g., smart thing application} of the corresponding electronic device) that can control the corresponding device. In another example, when the external server 530 can directly control the device, the external server 530 may transmit, to the corresponding device, a command related to the operation control (e.g., operation control according to a configured option) of the device. In another example, when the corresponding schedule is an alarm associated with a specific user schedule, the external server 530 may transmit, to the electronic device (e.g., the electronic device 101) of the corresponding user, control information (e.g., control information for generating (or output) an alarm {e.g., schedule information and/or alarm sound} associated with the schedule); [0289]: According to an embodiment, when there is the section where the time information between the first schedule and the second schedule at least partially overlap each other (e.g., when a conflict occurs), in operation 1715, the device 1700 may output an option 1780 capable of changing the operation of the device related to the first schedule. For example, the device 1700 may output at least one option 1780 such as “There is a conflict with OOO schedule in currently set mode 5. Do you want to switch to mode 2 where washing is completed more quickly? Otherwise, do you want to request drying laundry from Family A?”. 
According to an embodiment, the device 1700 may provide information related to the option 1780 to the user through a visual output (e.g., display indication) and/or audio output (e.g., voice output) based on an output device (e.g., a display and/or a speaker) provided therein). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Jung’s knowledge of transmitting a device schedule change signal to an external device as taught and modify the system of the combination of Haddick, Kobayashi, Ogawa and Laurent because such a system improves user experience by parallelizing the device schedule while enabling the user schedule of the user even when a device schedule and a user schedule are in conflict ([0055]). Regarding claim 19, the combination of Haddick, Kobayashi, Ogawa and Laurent teaches the wearable device of claim 1, further comprising: a communication circuit (Haddick - [0103]: The HWC 102 may communicate with external user interfaces 104. The external user interfaces 104 may provide a physical user interface to take control instructions from a user of the HWC 102 and the external user interfaces 104 and the HWC 102 may communicate bi-directionally to affect the user's command and provide feedback to the external device 108; Haddick - [0106]: The HWC 102 may also have a number of integrated computing facilities, such as an integrated processor, integrated power management, communication structures (e.g. cell net, WiFi, Bluetooth, local area connections, mesh connections, remote connections (e.g. client server, etc.)), and the like), wherein the instructions further cause, when executed by the processor, the wearable device to: register, through a software application (Ogawa - inputting information of places associated with each schedule inherently involves using an application is functionally equivalent of the act of registering information), the schedule associated with the place (Haddick -location associated with the scheduled content; Ogawa - place associated with the schedule) including the region to which the graphic region is set (Haddick - a pre-selected physical location where the content is scheduled to be displayed to a person is identified from the personal information related to the person; Haddick - [0307]: content is presented in a FOV of a HWC 102 when the HWC 102 is at a physical location that was selected based on personal information particular to the wearer of the HWC 102 … Personal information relating to the person may be stored such that it can be retrieved during a process of determining at what physical location in the world certain digital content should be presented to the person; Haddick - [0308]: a method of presenting digital content in a FOV of a HWC 102 may include identifying that the HWC 102 has arrived at a physical location, wherein the physical location is pre-determined based on personal information relating to the person wearing the HWC, and presenting the digital content in relation to an attribute in the surroundings where the attribute was pre-selected based on the personal information. The personal information may relate to personal attributes, demographics, behaviors, prior visited locations, stored personal locations, preferred locations, travel habits, etc.; Haddick - [0310]: FIG. 67 illustrates a person entering a location proximate to his place of work 6702. 
This location has been pre-selected 6704 as a physical location for the presentation of digital content in the HWC 102 based on stored personal information 6708; Haddick - [0328]: There are situations where content is scheduled to be presented to the wearer of a HWC 102 when the wearer enters a region or physical location or looks towards a physical location; Ogawa - [0239]: In Step S1105, the terminal device 10 acquires information of the current location of the user using the location information sensor 150 or the like. In addition, in a case in which information of places is input to each schedule in information of schedules of a user, the terminal device 10 may identify a current location of the user at a time of the schedule on the basis of the information of places associated with the schedule). The combination of Haddick, Kobayashi, Ogawa and Laurent does not explicitly teach based on the registration, transmit, via the communication circuit to a server, information on the schedule, and provide, through an operating system, data for accessing to the information in the server, to one or more other software applications in the wearable device, the one or more other software applications capable of processing the schedule. Jung teaches based on the registration, transmit, via the communication circuit to a server (communication module 190, fig. 1; [0057]: Referring to FIG. 1, the electronic device 101 in the network environment 100 may communicate with an electronic device 102 via a first network 198 (e.g., a short-range wireless communication network), with an electronic device 104 or a server 108 via a second network 199 (e.g., a long-range wireless communication network), or with the electronic device 104 via the server 108), information on the schedule (fig. 1; fig. 16; fig. 17B; [0012]: An electronic device according to various embodiments includes: a communication circuit; an output device; and at least one processor configured to be connected to the communication circuit and the output device, wherein the processor is configured to acquire schedule information associated with an operation of the electronic device from a user, to generate a schedule based on the schedule information, to transmit the schedule information to an external server using the communication circuit, to acquire options capable of changing the operation of the electronic device related to the schedule, to output the options using the output device, to select any one option of the options based on a user input, and to change the schedule based on schedule information of the selected option; [0076]: Commands or data may be transmitted or received between the electronic device 101 and the external electronic device 104 via the server 108 coupled with the second network 199. Each of the electronic devices 102 and 104 may be a device of a same type as, or a different type, from the electronic device 101; [0134]: the electronic device 101 may include a processor 120, a memory 130, and a communication module 190; [0144]: the communication circuit 540 of the external server 530 may be a communicator including a circuit for communication processing. According to an embodiment, the communication circuit 540 may transmit schedule information according to a user input to the electronic device 101 based on the control of the processor 550; [0264]: In operation 1625, the electronic device 101 may transmit response information to the external server 530 based on the second user input. 
According to an embodiment, the electronic device 101 may transmit response information (e.g., an ACK signal) requesting to change (or apply) the device schedule according to the change option based on the second user input, to the external server 530 through the communication module 190; [0267]: In operation 1631, the external server 530 may perform a corresponding function on the schedule. According to an embodiment, the external server 530 may perform a function related to the schedule at an alarm and/or a control time point related to the specific schedule, based on the result of the scheduling (or the result of monitoring). For example, when the corresponding schedule is a device control based on the device schedule, the external server 530 may transmit control information to a device (e.g., the electronic device 101 or a central control device)(or device management application {e.g., smart thing application} of the corresponding electronic device) that can control the corresponding device. In another example, when the external server 530 can directly control the device, the external server 530 may transmit, to the corresponding device, a command related to the operation control (e.g., operation control according to a configured option) of the device. In another example, when the corresponding schedule is an alarm associated with a specific user schedule, the external server 530 may transmit, to the electronic device (e.g., the electronic device 101) of the corresponding user, control information (e.g., control information for generating (or output) an alarm {e.g., schedule information and/or alarm sound} associated with the schedule); [0289]: According to an embodiment, when there is the section where the time information between the first schedule and the second schedule at least partially overlap each other (e.g., when a conflict occurs), in operation 1715, the device 1700 may output an option 1780 capable of changing the operation of the device related to the first schedule. For example, the device 1700 may output at least one option 1780 such as “There is a conflict with OOO schedule in currently set mode 5. Do you want to switch to mode 2 where washing is completed more quickly? Otherwise, do you want to request drying laundry from Family A?”. According to an embodiment, the device 1700 may provide information related to the option 1780 to the user through a visual output (e.g., display indication) and/or audio output (e.g., voice output) based on an output device (e.g., a display and/or a speaker) provided therein), and provide, through an operating system ([0061]: The program 140 may be stored in the memory 130 as software, and may include an operating system (OS) 142, middleware 144, or an application 146), data for accessing to the information in the server ([0252]: In operation 1603, the electronic device 101 may receive a user input (hereinafter, referred to as a “first user input”) for inputting schedule information based on the user interface; [0253]: In operation 1605, the electronic device 101 may transmit the input schedule information to the external server 530 based on the first user input. 
According to an embodiment, the electronic device 101 may display the schedule information input according to the first user input on the user interface and may provide the same to the user), to one or more other software applications in the wearable device, the one or more other software applications (calendar application) capable of processing the schedule ([0137]: According to an embodiment, the processor 120 of the electronic device 101 may execute the application 131 (e.g., a calendar application), and may receive a user input through a user interface related to the calendar application to generate a device schedule related to the device. According to an embodiment, when a device schedule (e.g., a new schedule) according to a user input and a user schedule neighboring to the device schedule conflicts with each other, the processor 120 may propose a change option associated with a device operation required time. For example, the processor 120 may propose an option of changing the operation required time of the device to match the operation completion time of the device to the user schedule; [0166]: The processor may acquire schedule information associated with an operation of the electronic device from a user, may generate a schedule based on the schedule information, may transmit the schedule information to an external server using the communication circuit, may acquire options capable of changing the operation of the electronic device associated with the schedule, may output the options using the output device, may select any one option among the options based on a user input, and may change the schedule based on the schedule information of the selected option; [0196]: According to an embodiment, the processor 120 may analyze schedule information according to a new schedule and at least one piece of schedule information that has been previously registered in the calendar application by a user who has registered the new schedule, thereby generating the identification tag associated with the corresponding device. An example of this is shown in FIG. 8B). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Jung’s knowledge of transmitting a registered schedule and providing for processing as taught and modify the system of the combination of Haddick, Kobayashi, Ogawa and Laurent because such a system improves user experience by parallelizing the device schedule while enabling the user schedule of the user even when a device schedule and a user schedule are in conflict ([0055]). 
Regarding claim 20, the combination of Haddick, Kobayashi, Ogawa and Laurent teaches the wearable device of claim 1, wherein the instructions further cause, when executed by the processor, the wearable device to: register, through a software application (Ogawa - inputting information of places associated with each schedule inherently involves using an application is functionally equivalent of the act of registering information), the schedule associated with the place (Haddick -location associated with the scheduled content; Ogawa - place associated with the schedule) including the region to which the graphic region is set (Haddick - a pre-selected physical location where the content is scheduled to be displayed to a person is identified from the personal information related to the person; Haddick - [0307]: content is presented in a FOV of a HWC 102 when the HWC 102 is at a physical location that was selected based on personal information particular to the wearer of the HWC 102 … Personal information relating to the person may be stored such that it can be retrieved during a process of determining at what physical location in the world certain digital content should be presented to the person; Haddick - [0308]: a method of presenting digital content in a FOV of a HWC 102 may include identifying that the HWC 102 has arrived at a physical location, wherein the physical location is pre-determined based on personal information relating to the person wearing the HWC, and presenting the digital content in relation to an attribute in the surroundings where the attribute was pre-selected based on the personal information. The personal information may relate to personal attributes, demographics, behaviors, prior visited locations, stored personal locations, preferred locations, travel habits, etc.; Haddick - [0310]: FIG. 67 illustrates a person entering a location proximate to his place of work 6702. This location has been pre-selected 6704 as a physical location for the presentation of digital content in the HWC 102 based on stored personal information 6708; Haddick - [0328]: There are situations where content is scheduled to be presented to the wearer of a HWC 102 when the wearer enters a region or physical location or looks towards a physical location; Ogawa - [0239]: In Step S1105, the terminal device 10 acquires information of the current location of the user using the location information sensor 150 or the like. In addition, in a case in which information of places is input to each schedule in information of schedules of a user, the terminal device 10 may identify a current location of the user at a time of the schedule on the basis of the information of places associated with the schedule); and provide, data indicating a location in which information on the schedule is stored according to the registration (Ogawa - [0239]: In Step S1105, the terminal device 10 acquires information of the current location of the user using the location information sensor 150 or the like. In addition, in a case in which information of places is input to each schedule in information of schedules of a user, the terminal device 10 may identify a current location of the user at a time of the schedule on the basis of the information of places associated with the schedule). 
The combination of Haddick, Kobayashi, Ogawa and Laurent does not explicitly teach the data is provided, through an operating system to one or more other software applications in the wearable device, the one or more other software applications capable of processing the schedule. Jung teaches the data is provided, through an operating system to one or more other software applications in the wearable device ([0061]: The program 140 may be stored in the memory 130 as software, and may include an operating system (OS) 142, middleware 144, or an application 146; [0252]: In operation 1603, the electronic device 101 may receive a user input (hereinafter, referred to as a “first user input”) for inputting schedule information based on the user interface; [0253]: In operation 1605, the electronic device 101 may transmit the input schedule information to the external server 530 based on the first user input. According to an embodiment, the electronic device 101 may display the schedule information input according to the first user input on the user interface and may provide the same to the user), the one or more other software applications (calendar application) capable of processing the schedule ([0137]: According to an embodiment, the processor 120 of the electronic device 101 may execute the application 131 (e.g., a calendar application), and may receive a user input through a user interface related to the calendar application to generate a device schedule related to the device. According to an embodiment, when a device schedule (e.g., a new schedule) according to a user input and a user schedule neighboring to the device schedule conflicts with each other, the processor 120 may propose a change option associated with a device operation required time. For example, the processor 120 may propose an option of changing the operation required time of the device to match the operation completion time of the device to the user schedule; [0166]: The processor may acquire schedule information associated with an operation of the electronic device from a user, may generate a schedule based on the schedule information, may transmit the schedule information to an external server using the communication circuit, may acquire options capable of changing the operation of the electronic device associated with the schedule, may output the options using the output device, may select any one option among the options based on a user input, and may change the schedule based on the schedule information of the selected option; [0196]: According to an embodiment, the processor 120 may analyze schedule information according to a new schedule and at least one piece of schedule information that has been previously registered in the calendar application by a user who has registered the new schedule, thereby generating the identification tag associated with the corresponding device. An example of this is shown in FIG. 8B). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Jung’s knowledge of transmitting a registered schedule and providing for processing as taught and modify the system of the combination of Haddick, Kobayashi, Ogawa and Laurent because such a system improves user experience by parallelizing the device schedule while enabling the user schedule of the user even when a device schedule and a user schedule are in conflict ([0055]). 
Allowable Subject Matter Claims 6-7, 9, 11, 13 and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Regarding claims 6-7, none of the cited prior art references of record, teach either individually or in combination, the limitation “cease displaying a first execution screen positioned outside of the gaze from among the one or more execution screens, based on the direction corresponding to the first direction, and wherein a second execution screen in which the gaze is positioned from among the one or more execution screens is maintained via the display, independently from the direction corresponding to the first direction”. Regarding claim 9, none of the cited prior art references of record, teach either individually or in combination, the limitation “to display, via the display, a message comprising an executable object for ceasing to display the graphic region on the region, while at least portion of the graphic region appears, and maintain to provide the region, by ceasing to display a portion of the graphic region displayed based on the direction corresponding to the first direction, in response to a user input on the executable object”. Regarding claim 11, none of the cited prior art references of record, teach either individually or in combination, the limitation “to identify a level of the schedule, display, in a virtual reality environment, the at least portion of the graphic region, based on the direction corresponding to the first direction and the level higher than a reference label, and display, in a mixed reality environment, the at least portion of the graphic region, based on the direction corresponding to the first direction and the level lower than or equal to the reference label”. Regarding claim 13, none of the cited prior art references of record, teach either individually or in combination, the limitation “to identify, while the at least portion of the graphic region is displayed via the display, a progress status of the schedule, based on biometric data of the user, and based on the progress status, maintain to display the at least portion of the graphic region or change the at least portion of the graphic region to at least portion of another graphic region set with respect to the region for the schedule”. Regarding claim 17, none of the cited prior art references of record, teach either individually or in combination, the limitation “the wearable device of claim 1, further comprising: a communication circuit, wherein the instructions further cause, when executed by the processor, the wearable device to: register, through a software application, the schedule associated with the place including the region to which the graphic region is set, based on the registration, transmit, via the communication circuit to a server, information on the schedule, while the software application is in an inactive state, receive, via the communication circuit, a signal transmitted from the server in response to identifying the schedule based on the information, change, in response to the signal, a state of the software application from the inactive state to an active state, and execute operations for displaying via the display the at least portion of the graphic region, by using the software application changed to the active state”. 
Regarding claim 18, none of the cited prior art references of record, teach either individually or in combination, the limitation “the wearable device of claim 1, further comprising: a communication circuit, wherein the instructions further cause, when executed by the processor, the wearable device to: register, through a software application, the schedule associated with the place including the region to which the graphic region is set, based on the registration, transmit, via the communication circuit to a server, information on the schedule, receive, via the communication circuit, a signal transmitted from the server in response to identifying the schedule based on the information, change, in response to the signal, states of one or more other software applications indicated by the signal to active states, and execute operations for displaying via the display the at least portion of the graphic region, based at least in part on the one or more other software applications changed to the active states”. Claim 8 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. You et al. (US 2017/0026577) describes a sensory cue may be provided to also indicate the direction of a future event in the recorded panoramic video output 206 which is within a current field of view. For example, if a first future event is due to happen in 20 seconds time within the current field of view and a second future event is due to happen in 25 seconds time outside the current field of view, two sensory cues may be provided to the user, one relating to each future event. The sensory cue relating to the event due to occur within the user's current field of view in 20 seconds may be provided to notify the user so the user can decide not to change his field of view to look away at the event currently outside his field of view, instead maintaining his current field of view to see the upcoming event due to appear in that current field of view. Lee et al. (US 2014/0333566) describes a mobile terminal, including a touch screen configured to display visual information, a camera unit configured to detect user's gaze information and a controller configured to execute an application in response to a user input, wherein an execution screen of the application is displayed on the touch screen, detect the user's gaze information, and stop displaying the execution screen of the application based on an elapsed time from when the user's gaze information is not detected, wherein the elapsed time is varied based on a type of the application. Kurz et al. (US 2017/0109916) describes the selecting a presentation mode from the plurality of presentation modes according to the spatial relationship comprises determining if the spatial relationship indicates that a distance between the camera and the real object is below a threshold, if yes, selecting the augmented reality mode as the presentation mode, otherwise, selecting at least one of the virtual reality mode and an audio mode as the presentation mode. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JWALANT B AMIN whose telephone number is (571)272-2455. The examiner can normally be reached Monday-Friday 10am - 630pm CST. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JWALANT AMIN/Primary Examiner, Art Unit 2612

Prosecution Timeline

Oct 13, 2023
Application Filed
Mar 14, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597091
COMPUTER-IMPLEMENTED METHOD, APPARATUS, SYSTEM AND COMPUTER PROGRAM FOR CONTROLLING A SIGHTEDNESS IMPAIRMENT OF A SUBJECT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592020
TRACKING SYSTEM, TRACKING METHOD, AND SELF-TRACKING TRACKER
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585324
PROCESSOR, IMAGE PROCESSING DEVICE, GLASSES-TYPE INFORMATION DISPLAY DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585130
LUMINANCE-AWARE UNINTRUSIVE RECTIFICATION OF DEPTH PERCEPTION IN EXTENDED REALITY FOR REDUCING EYE STRAIN
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579571
METHOD FOR IMPROVING AESTHETIC APPEARANCE OF RETAILER GRAPHICAL USER INTERFACE
Granted Mar 17, 2026 (2y 5m to grant)
Precedent list based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
94%
With Interview (+15.3%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 631 resolved cases by this examiner. Grant probability derived from career allow rate.
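
For readers who want to sanity-check these projections, the short sketch below shows one way the headline figures could be reproduced from the career statistics quoted on this page (500 grants out of 631 resolved cases, and a 15.3 point interview lift). The additive lift model and the rounding are illustrative assumptions only, not the tool's documented methodology.

```python
# Illustrative sketch only: reproduces the headline projection figures from the
# career statistics shown on this page. The additive interview-lift model is an
# assumption for illustration, not the analytics tool's actual methodology.

granted_cases = 500            # granted outcomes among resolved cases (from this page)
resolved_cases = 631           # total resolved cases (from this page)
interview_lift_points = 15.3   # reported interview lift, in percentage points

career_allow_rate = granted_cases / resolved_cases                 # ~0.792, shown as 79%
with_interview = career_allow_rate + interview_lift_points / 100   # ~0.945, shown as ~94%

print(f"Career allow rate:              {career_allow_rate * 100:.1f}%")
print(f"Grant probability w/ interview: {with_interview * 100:.1f}%")
```

Run as written, this prints 79.2% and 94.5%, which is consistent with the 79% and 94% shown above once the dashboard's display rounding is taken into account.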
