Prosecution Insights
Last updated: April 19, 2026
Application No. 18/640,606

WEARABLE DEVICE, METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR DISPLAYING MULTIMEDIA CONTENT

Final Rejection (§103)

Filed: Apr 19, 2024
Examiner: AMIN, JWALANT B
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 79% (500 granted / 631 resolved; +17.2% vs TC avg) — grants above average
Interview Lift: +15.3% for resolved cases with interview — strong
Typical Timeline: 2y 9m avg prosecution; 14 currently pending
Career History: 645 total applications across all art units
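
As a quick sanity check on the headline examiner figures, the short sketch below reproduces the arithmetic implied by the displayed counts. The Tech Center baseline and the additive treatment of the interview lift are assumptions for illustration only, not the tool's actual model, and the small rounding differences from the displayed 94% are expected.

# Illustrative check of the headline examiner statistics shown above.
granted, resolved = 500, 631        # career totals for this examiner
tc_average = 0.620                  # assumed baseline implied by "+17.2% vs TC avg"

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")                   # ~79.2%
print(f"Delta vs TC average: {career_allow_rate - tc_average:+.1%}")   # ~+17.2%

interview_lift = 0.153              # reported lift for resolved cases with interview
# Report shows 94% with interview; simple addition gives a close approximation.
print(f"Allow rate with interview: {career_allow_rate + interview_lift:.1%}")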

Statute-Specific Performance

§101: 13.4% (-26.6% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Tech Center average is an estimate • Based on career data from 631 resolved cases
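
As arithmetic only, the sketch below back-solves the Tech Center baseline implied by each statute-specific rate and its reported delta. The report does not state what event the percentages measure, and the variable names are illustrative.

# Back-solving the estimated Tech Center baseline from each statute-specific
# rate and its reported delta (illustration only, not the tool's model).
statute_rates = {"§101": (0.134, -0.266), "§103": (0.568, +0.168),
                 "§102": (0.075, -0.325), "§112": (0.108, -0.292)}

for statute, (rate, delta_vs_tc) in statute_rates.items():
    implied_tc_avg = rate - delta_vs_tc
    print(f"{statute}: examiner {rate:.1%}, implied TC average ≈ {implied_tc_avg:.1%}")

Notably, all four deltas back-solve to the same ≈40% baseline, which suggests the Tech Center average shown is a single overall estimate rather than a per-statute figure.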

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 21-25, 28-32 and 35-38 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wong et al. (US 2016/0232717, hereinafter Wong), and further in view of Terre et al. (US 2022/0229534, hereinafter Terre).

Regarding claim 21, Wong teaches an electronic device (head-mounted device 102, fig. 1A-B) comprising: displays (lens elements 110, 112, fig. 1A; [0036]: One or more of each of the lens elements 110, 112 may be formed of any material that can suitably display a projected image or graphic. Each of the lens elements 110, 112 may also be sufficiently transparent to allow a user to see through the lens element. Combining these two features of the lens elements may facilitate an augmented reality or heads-up display where the projected image or graphic is superimposed over a real-world view as perceived by the user through the lens elements 110, 112) positioned in front of a user's eyes (it is inherent that when the head-mounted device 102 is worn by the user, the lens elements 110 and 112 are in front of the user’s eyes; [0049]: The single lens element 230 may be positioned in front of or proximate to a user's eye when the wearable computing device 222 is worn by a user) when the electronic device is worn by a user (it is inherent that the head-mounted device 102 comprising the lens elements 110 and 112 is to be worn by the user; [0049]: The single lens element 230 may be positioned in front of or proximate to a user's eye when the wearable computing device 222 is worn by a user); at least one camera (video camera 120, fig. 1); memory (memory 404, fig.
4), storing instructions, comprising one or more storage mediums ([0038]: The system 100 may also include an on-board computing system 118 … The on-board computing system 118 may include a processor and memory, for example; [0052]: The device 310 may further include on-board data storage, such as memory 318 coupled to the processor 314. The memory 318 may store software that can be accessed and executed by the processor 314, for example; [0056]: Computing system 400 may include at least one processor 402 and system memory 404; [0058]: computing system 400 may also include one or more data storage devices 424, which can be removable storage devices, non-removable storage devices, or a combination thereof. Examples of removable storage devices and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and/or any other storage device now known or later developed. Computer storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. For example, computer storage media may take the form of RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium now known or later developed that can be used to store the desired information and which can be accessed by computing system 400); and at least one processor (processor 402, fig. 4) comprising processing circuitry ([0038]: The system 100 may also include an on-board computing system 118 … The on-board computing system 118 may include a processor and memory, for example; [0052]: The device 310 may further include on-board data storage, such as memory 318 coupled to the processor 314. The memory 318 may store software that can be accessed and executed by the processor 314, for example; [0056]: Computing system 400 may include at least one processor 402 and system memory 404), wherein the instructions, when executed by the at least one processor individually or collectively ([0052]: The device 310 may further include on-board data storage, such as memory 318 coupled to the processor 314. The memory 318 may store software that can be accessed and executed by the processor 314, for example; [0059]: According to an example embodiment, computing system 400 may include program instructions 426 that are stored in system memory 404 (and/or possibly in another data-storage medium) and executable by processor 402 to facilitate the various functions described herein), cause the electronic device to: display, on the displays, a multimedia content (menu 504 including content object 506, fig. 5A-B-5D) with a first opacity (as shown in fig. 
7A, virtual objects 706 and 708 are displayed with a first opacity/transparency; making virtual objects at least partially transparent inherently implies that the virtual objects are displayed with a first transparency/opacity that is than changed to a second transparency/opacity; [0031]: A wearable computer may include a head-mounted display (HMD) that presents virtual objects (e.g., graphical media content such as text, images, application windows, or video) on a substantially transparent display screen; [0066]: the wearable computing device may, in response to receiving the movement data corresponding to the upward movement, cause one or both of the view region 502 and the menu 504 to move such that the menu 504 becomes more visible in the view region 502; [0089]: Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604; [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent); while displaying the multimedia content with the first opacity, obtain, through the at least one camera, images (as shown in fig. 7A, virtual objects 706 and 708 are displayed with a first opacity/transparency; [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; [0096]: The wearable computer may use with various sensors or combinations of sensors to acquire the data that is used to initiate a collision-avoidance action. For example, some embodiments may utilize data from video cameras. As a specific example, an HMD may include front-facing cameras, which may be configured to capture images that extend beyond the field of view provided in the view region. Then, an integral or remotely attached computing device may employ image processing techniques to determine that a portion of the captured image represents a physical object and further estimate the distance between the camera and the object. In this case, the camera and wearable computer may transmit this estimated distance data to the computing system carrying out an object-detection process. 
Hence the object detection process may use the already determined distance as a basis for activating a collision-avoidance action); while displaying the multimedia content with the first opacity, identify, using the images, an external object in a field of view (FOV) of the at least one camera (as shown in fig. 7A, virtual objects 706 and 708 are displayed with a first opacity/transparency; [0033]: The object detection procedure may run continuously as a background process on the wearable computer or it may run only when activated by the wearer; [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; [0096]: an HMD may include front-facing cameras, which may be configured to capture images that extend beyond the field of view provided in the view region. Then, an integral or remotely attached computing device may employ image processing techniques to determine that a portion of the captured image represents a physical object and further estimate the distance between the camera and the object. In this case, the camera and wearable computer may transmit this estimated distance data to the computing system carrying out an object-detection process; [0131]: if a physical object is detected in front of the HMD but slightly left of the field of view's center; [0133]: a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object … the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. An exemplary system may detect the nearby cyclist); based on the external object being identified in the FOV of the at least one camera ([0096]: some embodiments may utilize data from video cameras. As a specific example, an HMD may include front-facing cameras, which may be configured to capture images that extend beyond the field of view provided in the view region. Then, an integral or remotely attached computing device may employ image processing techniques to determine that a portion of the captured image represents a physical object and further estimate the distance between the camera and the object; [0131]: a physical object is detected in front of the HMD but slightly left of the field of view's center) while displaying the multimedia content with the first opacity (as shown in fig. 
7A, virtual objects 706 and 708 are displayed with a first opacity/transparency), reduce an opacity of a portion of the multimedia content (only a portion of the virtual object that is directly in front of a detected physical object is made transparent) displayed on the displays from the first opacity to a second opacity (as shown in fig. 7A, virtual objects 706 and 708 are displayed with a first opacity/transparency; as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; making virtual objects at least partially transparent inherently implies that the transparency/opacity of the virtual objects is changed from a first transparency/opacity to a second transparency/opacity), for allowing the user to view the external object through the portion of the multimedia content with the second opacity (as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; fig. 6 step 608; fig. 7G; [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. 
For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque); while the portion of the multimedia content is displayed with the second opacity, identify the external object moved beyond a threshold distance (as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; the system determines that the detected object, such as a cyclist, displayed a higher transparency (or reduced opacity) has turned a corner within a threshold distance of the HMD; [0033]: The object detection procedure may run continuously as a background process on the wearable computer or it may run only when activated by the wearer; [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; [0106]: In some embodiments, distance determination or collision avoidance actions using distance as a basis for activation may only apply to physical objects sufficiently overlaid by the HMD. For example, a system may determine that a physical object is near to an HMD, but that view of the object is not overlaid by the HMD. In this example, a collision-avoidance action that de-emphasizes virtual objects displayed to the HMD would not provide a less obstructed view of the physical object; [0107]: Some embodiments may use determination of the relative movement of an object as a basis for activating a collision-avoidance action. For instance, some exemplary embodiments may be configured to activate a collision-avoidance action only in response to objects that are sufficiently approaching the display. In such cases, determined distance may still be used as a secondary basis for activating a collision-avoidance action, but only after the condition of determined relative movement is satisfied. Some embodiments may include setting a relative velocity threshold in which physical objects determined to be moving sufficiently towards the HMD with a velocity higher than a threshold velocity may be the basis for activating a collision-avoidance action. Likewise, if an object is determined to be accelerating towards the HMD at greater than a threshold rate of acceleration, an exemplary wearable computer may responsively initiate a collision-avoidance action. 
Other exemplary movement patterns may also be used; [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; [0132]: Some exemplary procedures may include steps for de-activating the collision-avoidance action in response to receiving indication that the physical object is no longer a hazard or as a result of user-input. Exemplary collision-avoidance actions may therefore include procedures to move the virtual objects, which occupied the view region before activation of the collision-avoidance action, back to their original locations in the view region. Further, such procedures may also include re-establishing the size and opacity of the virtual objects. For example, an exemplary de-activation procedure may include storing the original position and content of the virtual objects in the view region before a collision-avoidance action is activated. Then, in response to the collision-avoidance action being de-activated, the procedure may move the original content of virtual objects back to their original positions in the view region; [0133]: Subsequent to initiating the collision-avoidance action, an exemplary embodiment may also include techniques for de-activating the collision-avoidance action in response to receiving second data from the sensors. For example, a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object. The wearer, alerted by the collision-avoidance action, may adjust course to avoid the stationary object. Then, the system may determine that the object no longer has a relative movement directed at the HMD and responsively de-activate the collision-avoidance action. As another example, the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. An exemplary system may detect the nearby cyclist and activate a collision-avoidance action to alert the wearer. Subsequently, the cyclist may move beyond the threshold distance from the HMD and responsively the system may de-activate the collision-avoidance action. Many other exemplary de-activation procedures may be utilized); and based on the external object being identified while the portion of the multimedia content is displayed with the second opacity as moved beyond the threshold distance, restore the opacity of the portion of the multimedia content to the first opacity (as shown in fig. 
7A, virtual objects 706 and 708 are displayed with a first opacity/transparency; as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; the system determines that the detected object, such as a cyclist, displayed a higher transparency (or reduced opacity) has turned a corner within a threshold distance of the HMD, the system de-activates the collision avoidance action and re-establishes the transparency/opacity of the virtual objects to their original state or value; [0033]: The object detection procedure may run continuously as a background process on the wearable computer or it may run only when activated by the wearer; [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; [0106]: In some embodiments, distance determination or collision avoidance actions using distance as a basis for activation may only apply to physical objects sufficiently overlaid by the HMD. For example, a system may determine that a physical object is near to an HMD, but that view of the object is not overlaid by the HMD. In this example, a collision-avoidance action that de-emphasizes virtual objects displayed to the HMD would not provide a less obstructed view of the physical object; [0107]: Some embodiments may use determination of the relative movement of an object as a basis for activating a collision-avoidance action. For instance, some exemplary embodiments may be configured to activate a collision-avoidance action only in response to objects that are sufficiently approaching the display. In such cases, determined distance may still be used as a secondary basis for activating a collision-avoidance action, but only after the condition of determined relative movement is satisfied. Some embodiments may include setting a relative velocity threshold in which physical objects determined to be moving sufficiently towards the HMD with a velocity higher than a threshold velocity may be the basis for activating a collision-avoidance action. Likewise, if an object is determined to be accelerating towards the HMD at greater than a threshold rate of acceleration, an exemplary wearable computer may responsively initiate a collision-avoidance action. 
Other exemplary movement patterns may also be used; [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; [0132]: Some exemplary procedures may include steps for de-activating the collision-avoidance action in response to receiving indication that the physical object is no longer a hazard or as a result of user-input. Exemplary collision-avoidance actions may therefore include procedures to move the virtual objects, which occupied the view region before activation of the collision-avoidance action, back to their original locations in the view region. Further, such procedures may also include re-establishing the size and opacity of the virtual objects. For example, an exemplary de-activation procedure may include storing the original position and content of the virtual objects in the view region before a collision-avoidance action is activated. Then, in response to the collision-avoidance action being de-activated, the procedure may move the original content of virtual objects back to their original positions in the view region; [0133]: Subsequent to initiating the collision-avoidance action, an exemplary embodiment may also include techniques for de-activating the collision-avoidance action in response to receiving second data from the sensors. For example, a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object. The wearer, alerted by the collision-avoidance action, may adjust course to avoid the stationary object. Then, the system may determine that the object no longer has a relative movement directed at the HMD and responsively de-activate the collision-avoidance action. As another example, the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. An exemplary system may detect the nearby cyclist and activate a collision-avoidance action to alert the wearer. Subsequently, the cyclist may move beyond the threshold distance from the HMD and responsively the system may de-activate the collision-avoidance action. Many other exemplary de-activation procedures may be utilized). 
Wong does not explicitly teach while the portion of the multimedia content is displayed with the second opacity, identify whether the external object is moved out of the FOV of the at least one camera; and based on the external object being identified while the portion of the multimedia content is displayed with the second opacity as moved out of the FOV of the at least one camera, restore the opacity of the portion of the multimedia content to the first opacity. Terre, in a similar field of endeavor, teaches while the portion of the multimedia content is displayed with the second opacity (when a non-user or a person enters the field of view of the user, the opacity of the virtual content is adjusted (reduced) such that the user can at least partially see the person entering his/her field of view; [0303]: Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0306]: if a person is detected entering the field of view, one or more of the following extended reality display parameter changes might occur, based on rules that, upon detection of the person, cause the adjustment to occur: change an opacity level of the display so that the person becomes at least partially visible; [0317]: a default rule may be to adjust the opacity of the extended reality display to a 50% setting such that the user may perceive the change in the physical environment; [0038]: if a person approaches the wearer, the rule may cause the at least one adjustable extended reality display parameter to change (e.g., adjusting the opacity of the virtually displayed content and/or reducing the size of at least one virtual screen associated with the virtually displayed content) so that the wearer can see the non-user), identify whether the external object is moved out of the FOV of the at least one camera (the wearable extended reality appliance is configured to detect changes to the physical environment around the user such as when a person enters or exits the field of view of the imaging sensor of a user’s wearable extended reality appliance; [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0210]: the at least one image sensor of the wearable extended reality appliance may continuously or periodically monitor the scenes in the field of view of the at least one image sensor; [0294]: the wearable extended reality appliance may be configured to detect changes to the physical environment around the user and to automatically adjust one or more display parameters of an extended reality display being provided by the wearable extended reality appliance, thereby heightening the user's awareness of the environmental changes. 
For example, a change to the physical environment may include another person (i.e., a non-user) walking toward or into the field of view of the wearable extended reality appliance; [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring); and based on the external object being identified while the portion of the multimedia content is displayed with the second opacity as moved out of the FOV of the at least one camera, restore the opacity of the portion of the multimedia content to the first opacity (when a person exits the user’s field of view, ongoing change in the physical environment of the user is detected to no longer occur, and the opacity of the virtual content that was reduced when the environmental change first occurred (i.e., when the person entered into the user’s field of view) is automatically returned to the prior state (original opacity at which the virtual content was displayed prior to the change in environment of the user); [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. 
An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Terre’s knowledge of restoring the opacity of the virtual content to a prior state when a person exits a user’s field of view and modify the system of Wong because such a system enhances the user’s experience by allowing the user to see the non-user and interact with the non-user without needing to remove the wearable extended reality appliance ([0338]). Regarding claim 22, the combination of Wong and Terre teaches the electronic device of claim 21, wherein the instructions, when executed by the at least one processor individually or collectively, further cause the electronic device to: recognize one or more of the images obtained while displaying the multimedia content with the first opacity (Wong - as shown in fig. 7A, virtual objects 706 and 708 are displayed with a first opacity/transparency; making virtual objects at least partially transparent inherently implies that the virtual objects are displayed with a first transparency/opacity that is than changed to a second transparency/opacity; Wong - [0032]: In particular embodiments, the system may detect the user's indication to look through a virtual panel based on comparison of the user gazing point and an identified object located beyond the virtual panel. As an example and not by way of limitation, the system may measure a vergence movement of the user using an eye tracking system and determine the gazing point of the user. The system may use one or more cameras and an object recognition model to identify an object which is located beyond/behind the virtual panel. The cameras may be forward-facing cameras associated with the headset worn by the user. The system may compare the gazing point of the user and the location of the identified object and determine that the user is looking at that object if the gazing point of the user is within a threshold distance to the location of the identified object. 
In particular embodiments, the system may determine that that the user is looking at the object when the user's gazing point is on or near that object for a time period longer than a threshold time period (e.g., 0.5 seconds, 1 second, 2 seconds); Wong - [0033]: The object detection procedure may run continuously as a background process on the wearable computer or it may run only when activated by the wearer; Wong - [0043]: The system 300 may recognize the face of the second user 370 using one or more cameras and a face recognition model. The system 300 may detect a gesture or behavior of the first user 360 or/and the second user 370, such as, waving a hand or nodding head; Wong - [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; Wong - [0096]: an HMD may include front-facing cameras, which may be configured to capture images that extend beyond the field of view provided in the view region. Then, an integral or remotely attached computing device may employ image processing techniques to determine that a portion of the captured image represents a physical object and further estimate the distance between the camera and the object. In this case, the camera and wearable computer may transmit this estimated distance data to the computing system carrying out an object-detection process; Wong - [0131]: if a physical object is detected in front of the HMD but slightly left of the field of view's center; Wong - [0133]: a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object … the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. 
An exemplary system may detect the nearby cyclist; Terre - [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0294]: the wearable extended reality appliance may be configured to detect changes to the physical environment around the user and to automatically adjust one or more display parameters of an extended reality display being provided by the wearable extended reality appliance, thereby heightening the user's awareness of the environmental changes. For example, a change to the physical environment may include another person (i.e., a non-user) walking toward or into the field of view of the wearable extended reality appliance; Terre - [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); Terre - [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; Terre - [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring); identify, in accordance with recognizing one or more of the images, the external object, the external object moved into the FOV of the at least one camera while displaying the multimedia content with the first opacity (Wong - as shown in fig. 7A, virtual objects 706 and 708 are displayed with a first opacity/transparency; Wong - [0032]: The system may use one or more cameras and an object recognition model to identify an object which is located beyond/behind the virtual panel. The cameras may be forward-facing cameras associated with the headset worn by the user. The system may compare the gazing point of the user and the location of the identified object and determine that the user is looking at that object if the gazing point of the user is within a threshold distance to the location of the identified object; Wong - [0036]: the system may use one or more cameras with a face recognition model to detect a face of a second user in the field of view of the first user who is wearing the headset. 
The cameras used for the face detection may be associated with the headset (e.g., forward-facing or side-facing cameras of the headset) or may be associated with a third-party system (e.g., a vehicle) and communicate with the headset system. The system may recognize the second user nearby is a friend of the first user based on a social graph of a social network system of which both the first and second users are members. The system may infer that the first user is likely to interact with the second user; Wong - [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; Wong - [0096]: an HMD may include front-facing cameras, which may be configured to capture images that extend beyond the field of view provided in the view region. Then, an integral or remotely attached computing device may employ image processing techniques to determine that a portion of the captured image represents a physical object and further estimate the distance between the camera and the object. In this case, the camera and wearable computer may transmit this estimated distance data to the computing system carrying out an object-detection process; Terre - the wearable extended reality appliance is configured to detect changes to the physical environment around the user such as when a person enters the field of view of the imaging sensor of a user’s wearable extended reality appliance and adjusts the opacity of the virtual content displayed; Terre - [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. 
A field of view may refer to a spatial extent that may be observed or detected at any given moment; Terre - [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0210]: the at least one image sensor of the wearable extended reality appliance may continuously or periodically monitor the scenes in the field of view of the at least one image sensor; Terre - [0294]: the wearable extended reality appliance may be configured to detect changes to the physical environment around the user and to automatically adjust one or more display parameters of an extended reality display being provided by the wearable extended reality appliance, thereby heightening the user's awareness of the environmental changes. For example, a change to the physical environment may include another person (i.e., a non-user) walking toward or into the field of view of the wearable extended reality appliance; Terre - [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); Terre - [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; Terre - [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. 
In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring); and based on the external object moving into the FOV of the at least one camera while displaying the multimedia content with the first opacity, reduce the opacity of the portion of the multimedia content (Wong - only a portion of the virtual object that is directly in front of a detected physical object is made transparent) from the first opacity to the second opacity (Wong - making virtual objects at least partially transparent inherently implies that the transparency/opacity of the virtual objects is changed from a first transparency/opacity to a second transparency/opacity; as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency) and maintain an opacity of another portion of the multimedia content as the first opacity (Wong - a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; fig. 6 step 608; fig. 7G; Wong - [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; Wong - [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; Wong - [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. 
For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; Terre - the wearable extended reality appliance is configured to detect changes to the physical environment around the user such as when a person enters the field of view of the imaging sensor of a user’s wearable extended reality appliance and adjusts (reduces) the original opacity of the virtual content displayed to a second opacity; Terre - [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; Terre - [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0210]: the at least one image sensor of the wearable extended reality appliance may continuously or periodically monitor the scenes in the field of view of the at least one image sensor; Terre - [0294]: the wearable extended reality appliance may be configured to detect changes to the physical environment around the user and to automatically adjust one or more display parameters of an extended reality display being provided by the wearable extended reality appliance, thereby heightening the user's awareness of the environmental changes. For example, a change to the physical environment may include another person (i.e., a non-user) walking toward or into the field of view of the wearable extended reality appliance; Terre - [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); Terre - [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; Terre - [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. 
For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring). Regarding claim 23, the combination of Wong and Terre teaches the electronic device of claim 21, wherein the instructions, when executed by the at least one processor individually or collectively, further cause the electronic device to: based on identifying, while displaying the multimedia content with the first opacity (Wong - as shown in fig. 7A, virtual objects 706 and 708 are displayed with a first opacity/transparency), that the external object is moved with respect to the electronic device in the FOV of the at least one camera, reduce the opacity of the portion of the multimedia content to the second opacity (Wong - when the wearer of an HMD is standing still and a cyclist suddenly turns a corner within a threshold distance of the HMD and appears in the field of view of the wearer’s camera, the system may detect the nearby cyclist and de-emphasize the virtual object (i.e. change transparency of a portion of the virtual object that blocks the view of the cyclist) by activating the collision-avoidance action; as shown in fig. 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; Wong - fig. 6 step 608; Wong - fig. 7G; Wong - [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; Wong - [0107]: Some embodiments may use determination of the relative movement of an object as a basis for activating a collision-avoidance action. For instance, some exemplary embodiments may be configured to activate a collision-avoidance action only in response to objects that are sufficiently approaching the display. In such cases, determined distance may still be used as a secondary basis for activating a collision-avoidance action, but only after the condition of determined relative movement is satisfied.
Some embodiments may include setting a relative velocity threshold in which physical objects determined to be moving sufficiently towards the HMD with a velocity higher than a threshold velocity may be the basis for activating a collision-avoidance action. Likewise, if an object is determined to be accelerating towards the HMD at greater than a threshold rate of acceleration, an exemplary wearable computer may responsively initiate a collision-avoidance action. Other exemplary movement patterns may also be used; Wong - [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; Wong - [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; Wong - [0131]: if a physical object is detected in front of the HMD but slightly left of the field of view's center; Wong - [0132]: Some exemplary procedures may include steps for de-activating the collision-avoidance action in response to receiving indication that the physical object is no longer a hazard or as a result of user-input. Exemplary collision-avoidance actions may therefore include procedures to move the virtual objects, which occupied the view region before activation of the collision-avoidance action, back to their original locations in the view region. Further, such procedures may also include re-establishing the size and opacity of the virtual objects. For example, an exemplary de-activation procedure may include storing the original position and content of the virtual objects in the view region before a collision-avoidance action is activated. Then, in response to the collision-avoidance action being de-activated, the procedure may move the original content of virtual objects back to their original positions in the view region; Wong - [0133]: Subsequent to initiating the collision-avoidance action, an exemplary embodiment may also include techniques for de-activating the collision-avoidance action in response to receiving second data from the sensors. For example, a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object. The wearer, alerted by the collision-avoidance action, may adjust course to avoid the stationary object. Then, the system may determine that the object no longer has a relative movement directed at the HMD and responsively de-activate the collision-avoidance action. As another example, the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. An exemplary system may detect the nearby cyclist and activate a collision-avoidance action to alert the wearer. 
Subsequently, the cyclist may move beyond the threshold distance from the HMD and responsively the system may de-activate the collision-avoidance action. Many other exemplary de-activation procedures may be utilized; Terre - the wearable extended reality appliance is configured to detect changes to the physical environment around the user such as when a person enters the field of view of the imaging sensor of a user’s wearable extended reality appliance and adjusts (reduces) the original opacity of the virtual content displayed to a second opacity; Terre - [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; Terre - [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0210]: the at least one image sensor of the wearable extended reality appliance may continuously or periodically monitor the scenes in the field of view of the at least one image sensor; Terre - [0294]: the wearable extended reality appliance may be configured to detect changes to the physical environment around the user and to automatically adjust one or more display parameters of an extended reality display being provided by the wearable extended reality appliance, thereby heightening the user's awareness of the environmental changes. For example, a change to the physical environment may include another person (i.e., a non-user) walking toward or into the field of view of the wearable extended reality appliance; Terre - [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); Terre - [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; Terre - [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. 
In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring). Regarding claim 24, the combination of Wong and Terre teaches the electronic device of claim 21, wherein the instructions, when executed by the at least one processor individually or collectively, further cause the electronic device to: while displaying the portion of the multimedia content with the second opacity, obtain images through the at least one camera (Wong - the system continuously captures images and performs object detection in the background, and determines that the detected object, such as a cyclist, displayed a higher transparency (or reduced opacity) has moved out of the field of view of the user (i.e., the camera on the HMD worn by the user) based on the user changing the course (i.e. changing his field of view) or the object moving beyond a threshold distance; as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; Wong - [0033]: The object detection procedure may run continuously as a background process on the wearable computer or it may run only when activated by the wearer; [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; Wong - [0096]: The wearable computer may use with various sensors or combinations of sensors to acquire the data that is used to initiate a collision-avoidance action. For example, some embodiments may utilize data from video cameras. As a specific example, an HMD may include front-facing cameras, which may be configured to capture images that extend beyond the field of view provided in the view region. Then, an integral or remotely attached computing device may employ image processing techniques to determine that a portion of the captured image represents a physical object and further estimate the distance between the camera and the object. In this case, the camera and wearable computer may transmit this estimated distance data to the computing system carrying out an object-detection process. Hence the object detection process may use the already determined distance as a basis for activating a collision-avoidance action; Wong - [0106]: In some embodiments, distance determination or collision avoidance actions using distance as a basis for activation may only apply to physical objects sufficiently overlaid by the HMD. 
For example, a system may determine that a physical object is near to an HMD, but that view of the object is not overlaid by the HMD. In this example, a collision-avoidance action that de-emphasizes virtual objects displayed to the HMD would not provide a less obstructed view of the physical object; Wong - [0107]: Some embodiments may use determination of the relative movement of an object as a basis for activating a collision-avoidance action. For instance, some exemplary embodiments may be configured to activate a collision-avoidance action only in response to objects that are sufficiently approaching the display. In such cases, determined distance may still be used as a secondary basis for activating a collision-avoidance action, but only after the condition of determined relative movement is satisfied. Some embodiments may include setting a relative velocity threshold in which physical objects determined to be moving sufficiently towards the HMD with a velocity higher than a threshold velocity may be the basis for activating a collision-avoidance action. Likewise, if an object is determined to be accelerating towards the HMD at greater than a threshold rate of acceleration, an exemplary wearable computer may responsively initiate a collision-avoidance action. Other exemplary movement patterns may also be used; Wong - [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; Wong - [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; Wong - [0132]: Some exemplary procedures may include steps for de-activating the collision-avoidance action in response to receiving indication that the physical object is no longer a hazard or as a result of user-input. Exemplary collision-avoidance actions may therefore include procedures to move the virtual objects, which occupied the view region before activation of the collision-avoidance action, back to their original locations in the view region. Further, such procedures may also include re-establishing the size and opacity of the virtual objects. For example, an exemplary de-activation procedure may include storing the original position and content of the virtual objects in the view region before a collision-avoidance action is activated. Then, in response to the collision-avoidance action being de-activated, the procedure may move the original content of virtual objects back to their original positions in the view region; Wong - [0133]: Subsequent to initiating the collision-avoidance action, an exemplary embodiment may also include techniques for de-activating the collision-avoidance action in response to receiving second data from the sensors. 
For example, a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object. The wearer, alerted by the collision-avoidance action, may adjust course to avoid the stationary object. Then, the system may determine that the object no longer has a relative movement directed at the HMD and responsively de-activate the collision-avoidance action. As another example, the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. An exemplary system may detect the nearby cyclist and activate a collision-avoidance action to alert the wearer. Subsequently, the cyclist may move beyond the threshold distance from the HMD and responsively the system may de-activate the collision-avoidance action. Many other exemplary de-activation procedures may be utilized; Terre - the wearable extended reality appliance is configured to detect changes to the physical environment around the user such as when a person enters the field of view of the imaging sensor of a user’s wearable extended reality appliance and adjusts (reduces) the original opacity of the virtual content displayed to a second opacity; Terre - [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; Terre - [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0210]: the at least one image sensor of the wearable extended reality appliance may continuously or periodically monitor the scenes in the field of view of the at least one image sensor; Terre - [0294]: the wearable extended reality appliance may be configured to detect changes to the physical environment around the user and to automatically adjust one or more display parameters of an extended reality display being provided by the wearable extended reality appliance, thereby heightening the user's awareness of the environmental changes. For example, a change to the physical environment may include another person (i.e., a non-user) walking toward or into the field of view of the wearable extended reality appliance; Terre - [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. 
An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); Terre - [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; Terre - [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring); identify, using the images obtained while displaying the portion of the multimedia content with the second opacity, whether a movement of the external object in the FOV of the at least one camera is ceased (Wong - system determines that the detected object, such as a cyclist, displayed a higher transparency (or reduced opacity) has moved out of the field of view of the user (i.e., the camera on the HMD worn by the user) based on the user changing the course (i.e. changing his field of view) or the object moving beyond a threshold distance; since the object that moved out of the user’s field of view, the relative movement of the object with respect to the user cannot be detected and is functionally analogous to being ceased; as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; Wong - [0033]: The object detection procedure may run continuously as a background process on the wearable computer or it may run only when activated by the wearer; Wong - [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. 
After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; Wong - [0096]: The wearable computer may use with various sensors or combinations of sensors to acquire the data that is used to initiate a collision-avoidance action. For example, some embodiments may utilize data from video cameras. As a specific example, an HMD may include front-facing cameras, which may be configured to capture images that extend beyond the field of view provided in the view region. Then, an integral or remotely attached computing device may employ image processing techniques to determine that a portion of the captured image represents a physical object and further estimate the distance between the camera and the object. In this case, the camera and wearable computer may transmit this estimated distance data to the computing system carrying out an object-detection process. Hence the object detection process may use the already determined distance as a basis for activating a collision-avoidance action; Wong - [0106]: In some embodiments, distance determination or collision avoidance actions using distance as a basis for activation may only apply to physical objects sufficiently overlaid by the HMD. For example, a system may determine that a physical object is near to an HMD, but that view of the object is not overlaid by the HMD. In this example, a collision-avoidance action that de-emphasizes virtual objects displayed to the HMD would not provide a less obstructed view of the physical object; Wong - [0107]: Some embodiments may use determination of the relative movement of an object as a basis for activating a collision-avoidance action. For instance, some exemplary embodiments may be configured to activate a collision-avoidance action only in response to objects that are sufficiently approaching the display. In such cases, determined distance may still be used as a secondary basis for activating a collision-avoidance action, but only after the condition of determined relative movement is satisfied. Some embodiments may include setting a relative velocity threshold in which physical objects determined to be moving sufficiently towards the HMD with a velocity higher than a threshold velocity may be the basis for activating a collision-avoidance action. Likewise, if an object is determined to be accelerating towards the HMD at greater than a threshold rate of acceleration, an exemplary wearable computer may responsively initiate a collision-avoidance action. Other exemplary movement patterns may also be used; Wong - [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; Wong - [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. 
For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; Wong - [0132]: Some exemplary procedures may include steps for de-activating the collision-avoidance action in response to receiving indication that the physical object is no longer a hazard or as a result of user-input. Exemplary collision-avoidance actions may therefore include procedures to move the virtual objects, which occupied the view region before activation of the collision-avoidance action, back to their original locations in the view region. Further, such procedures may also include re-establishing the size and opacity of the virtual objects. For example, an exemplary de-activation procedure may include storing the original position and content of the virtual objects in the view region before a collision-avoidance action is activated. Then, in response to the collision-avoidance action being de-activated, the procedure may move the original content of virtual objects back to their original positions in the view region; Wong - [0133]: Subsequent to initiating the collision-avoidance action, an exemplary embodiment may also include techniques for de-activating the collision-avoidance action in response to receiving second data from the sensors. For example, a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object. The wearer, alerted by the collision-avoidance action, may adjust course to avoid the stationary object. Then, the system may determine that the object no longer has a relative movement directed at the HMD and responsively de-activate the collision-avoidance action. As another example, the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. An exemplary system may detect the nearby cyclist and activate a collision-avoidance action to alert the wearer. Subsequently, the cyclist may move beyond the threshold distance from the HMD and responsively the system may de-activate the collision-avoidance action. Many other exemplary de-activation procedures may be utilized; Terre - the wearable extended reality appliance is configured to detect changes to the physical environment around the user such as when a person enters or exits the field of view of the imaging sensor of a user’s wearable extended reality appliance; Terre - [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. 
A field of view may refer to a spatial extent that may be observed or detected at any given moment; Terre - [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0210]: the at least one image sensor of the wearable extended reality appliance may continuously or periodically monitor the scenes in the field of view of the at least one image sensor; Terre - [0294]: the wearable extended reality appliance may be configured to detect changes to the physical environment around the user and to automatically adjust one or more display parameters of an extended reality display being provided by the wearable extended reality appliance, thereby heightening the user's awareness of the environmental changes. For example, a change to the physical environment may include another person (i.e., a non-user) walking toward or into the field of view of the wearable extended reality appliance; Terre - [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); Terre - [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; Terre - [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. 
In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring); and based on identifying that the movement of the external object in the FOV of the at least one camera is ceased, increase the opacity of the portion of the multimedia content displayed on the displays from the second opacity to a third opacity (Wong - as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; the system determines that the detected object, such as a cyclist, displayed a higher transparency (or reduced opacity) has moved out of the field of view of the user (i.e., the camera on the HMD worn by the user), the system de-activates the collision avoidance action and re-establishes the transparency/opacity of the virtual objects to their original state or value; Wong - [0033]: The object detection procedure may run continuously as a background process on the wearable computer or it may run only when activated by the wearer; Wong - [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; Wong - [0106]: In some embodiments, distance determination or collision avoidance actions using distance as a basis for activation may only apply to physical objects sufficiently overlaid by the HMD. For example, a system may determine that a physical object is near to an HMD, but that view of the object is not overlaid by the HMD. In this example, a collision-avoidance action that de-emphasizes virtual objects displayed to the HMD would not provide a less obstructed view of the physical object; Wong - [0107]: Some embodiments may use determination of the relative movement of an object as a basis for activating a collision-avoidance action. For instance, some exemplary embodiments may be configured to activate a collision-avoidance action only in response to objects that are sufficiently approaching the display. In such cases, determined distance may still be used as a secondary basis for activating a collision-avoidance action, but only after the condition of determined relative movement is satisfied. Some embodiments may include setting a relative velocity threshold in which physical objects determined to be moving sufficiently towards the HMD with a velocity higher than a threshold velocity may be the basis for activating a collision-avoidance action. 
Likewise, if an object is determined to be accelerating towards the HMD at greater than a threshold rate of acceleration, an exemplary wearable computer may responsively initiate a collision-avoidance action. Other exemplary movement patterns may also be used; Wong - [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; Wong - [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; Wong - [0132]: Some exemplary procedures may include steps for de-activating the collision-avoidance action in response to receiving indication that the physical object is no longer a hazard or as a result of user-input. Exemplary collision-avoidance actions may therefore include procedures to move the virtual objects, which occupied the view region before activation of the collision-avoidance action, back to their original locations in the view region. Further, such procedures may also include re-establishing the size and opacity of the virtual objects. For example, an exemplary de-activation procedure may include storing the original position and content of the virtual objects in the view region before a collision-avoidance action is activated. Then, in response to the collision-avoidance action being de-activated, the procedure may move the original content of virtual objects back to their original positions in the view region; Wong - [0133]: Subsequent to initiating the collision-avoidance action, an exemplary embodiment may also include techniques for de-activating the collision-avoidance action in response to receiving second data from the sensors. For example, a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object. The wearer, alerted by the collision-avoidance action, may adjust course to avoid the stationary object. Then, the system may determine that the object no longer has a relative movement directed at the HMD and responsively de-activate the collision-avoidance action. As another example, the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. An exemplary system may detect the nearby cyclist and activate a collision-avoidance action to alert the wearer. Subsequently, the cyclist may move beyond the threshold distance from the HMD and responsively the system may de-activate the collision-avoidance action. 
Many other exemplary de-activation procedures may be utilized; Terre - when a person exits the user’s field of view, ongoing change in the physical environment of the user is detected to no longer occur, and the opacity of the virtual content that was reduced when the environmental change first occurred (i.e., when the person entered into the user’s field of view) is automatically returned to the prior state (original opacity (third opacity is interpreted to be of the same value as the first opacity) at which the virtual content was displayed prior to the change in environment of the user); Terre - [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; Terre - [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; Terre - [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); Terre - [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; Terre - [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring). Regarding claim 25, the combination of Wong and Terre teaches the electronic device of claim 21, wherein the external object is viewable through the portion of the multimedia content displayed with the second opacity (Wong - as shown in fig. 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; Wong - fig.
7G; Wong - [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; Terre - [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring). Claims 28-32 and 35-38 are similar in scope to claims 21-25 and 21-24, respectively, and therefore the examiner provides similar rationale to reject these claims. Moreover, Wong teaches a non-transitory computer readable storage medium as claimed ([0009] and [0058]). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 26, 33 and 39 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wong, in view of Terre, and further in view of Nomura et al. (US 2019/0158717, hereinafter Nomura). Regarding claim 26, the combination of Wong and Terre does not explicitly teach the electronic device of claim 21, wherein recognizing of the images is performed in a stand-alone state of the electronic device. 
Nomura teaches recognizing of the images is performed in a stand-alone state of the electronic device ([0137]: the head mounted display 100 is configured to perform processing in a stand-alone state). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Nomura’s knowledge of using a standalone electronic device as taught and modify the system of Wong and Terre because such a system reduces cost by eliminating the need for a communication interface ([0137]). Claims 33 and 39 are similar in scope to claim 26, and therefore the examiner provides similar rationale to reject these claims. Claim(s) 27, 34 and 40 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wong, in view of Terre, and further in view of Terrano (US 2020/0090401). Regarding claim 27, the combination of Wong and Terre teaches the electronic device of claim 21, wherein the instructions, when executed by the at least one processor individually or collectively, further cause the electronic device to: display the multimedia content with the first opacity as superimposed on a background layer (Wong - as shown in fig. 7A, virtual objects 706 and 708 are displayed with a first opacity/transparency as being superimposed on a background layer, i.e. the map is displayed superimposed on a background layer and the grocery list text is displayed as superimposed on a background layer; making virtual objects at least partially transparent inherently implies that the virtual objects are displayed with a first transparency/opacity that is then changed to a second transparency/opacity; Wong - fig. 7A; Wong - fig. 7G; Wong - [0031]: A wearable computer may include a head-mounted display (HMD) that presents virtual objects (e.g., graphical media content such as text, images, application windows, or video) on a substantially transparent display screen; Wong - [0066]: the wearable computing device may, in response to receiving the movement data corresponding to the upward movement, cause one or both of the view region 502 and the menu 504 to move such that the menu 504 becomes more visible in the view region 502; Wong - [0089]: Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604; Wong - [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; Wong - [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent.
For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque) based on identifying, in accordance with recognizing the images, the external object moving into the FOV of the at least one camera, (Wong - only a portion of the virtual object that is directly in front of a detected physical object is made transparent), for allowing the user to view the external object through the portion of the multimedia content with the second opacity (Wong - as shown in fig 7G, only portions of virtual objects 706 and 708 that are detected by the collision-avoidance as hazards are displayed with a second opacity/transparency as being superimposed on a background layer, i.e. the map is displayed superimposed on a background layer and the grocery list text is displayed as superimposed on a background layer; Wong - [0033]: The object detection procedure may run continuously as a background process on the wearable computer or it may run only when activated by the wearer; Wong - [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; Wong - [0096]: an HMD may include front-facing cameras, which may be configured to capture images that extend beyond the field of view provided in the view region. Then, an integral or remotely attached computing device may employ image processing techniques to determine that a portion of the captured image represents a physical object and further estimate the distance between the camera and the object. In this case, the camera and wearable computer may transmit this estimated distance data to the computing system carrying out an object-detection process; Wong - [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; Wong - [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). 
Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; Wong - [0131]: if a physical object is detected in front of the HMD but slightly left of the field of view's center; Wong - [0133]: a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object … the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. An exemplary system may detect the nearby cyclist; Terre - when a non-user or a person enters the field of view of the user, the opacity of the virtual content is adjusted (reduced) such that the user can at least partially see the person entering his/her field of view; Terre - [0303]: Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); Terre - [0306]: if a person is detected entering the field of view, one or more of the following extended reality display parameter changes might occur, based on rules that, upon detection of the person, cause the adjustment to occur: change an opacity level of the display so that the person becomes at least partially visible; Terre - [0317]: a default rule may be to adjust the opacity of the extended reality display to a 50% setting such that the user may perceive the change in the physical environment; Terre - [0038]: if a person approaches the wearer, the rule may cause the at least one adjustable extended reality display parameter to change (e.g., adjusting the opacity of the virtually displayed content and/or reducing the size of at least one virtual screen associated with the virtually displayed content) so that the wearer can see the non-user). The combination of Wong and Terre does not explicitly teach to display the multimedia content with the first opacity as superimposed on a background layer, wherein the background layer is generated by the processor, and to cease displaying at least a portion of the background layer. Terrano teaches to display the multimedia content (display content) with the first opacity (first opacity) as superimposed on a background layer, wherein the background layer is generated by the processor (system generates and adjusts the display content that is superimposed on the virtual panel background; [0025]: the system may adjust the opacity of the virtual panel and the opacity of the display content on the panel differently. The display content may have the same opacity or different opacity with the virtual panel which is the background to the display content. When the system adjusts the virtual panel from the first opacity (e.g., opaque) to the second opacity (e.g., transparent), the system may adjust the display content opacity to a fourth opacity.
The fourth opacity may be equal to or more opaque than the second opacity), and to cease displaying at least a portion of the background layer (making the virtual panel background completely transparent, while setting the display content displayed on it to a different translucent opacity, is functionally analogous to ceasing to display at least a portion of the background layer; [0003]: a method of changing opacity of virtual display panels to allow a user to see through the panels when the user needs to look at objects behind the panels. Particular embodiments may use eye tracking cameras to determine the user's vergence distance and control the panels' opacity accordingly based on the user's vergence distance. For example, the panel background may change to transparent or translucent when the user's vergence distance moves beyond the panel (indicating the user is trying to look at objects behind the panel); [0024]: the artificial reality system may display a virtual panel having a first opacity to a user. The virtual panel may be displayed at a fixed distance (e.g., 1 m) from the user in a virtual space of artificial reality. The virtual panel having the first opacity may block the user from seeing through the virtual panel. The system may automatically detect an indication that the user wants or needs to look through the virtual panel. In response, the system may adjust the opacity of the virtual panel to allow the user to look through it. As an example and not by way of limitation, the system may use an eye tracking system to constantly monitor the user eye movement and determine the vergence distance of the user based on the eye tracking data. The system may compare the vergence distance of the user to the distance between the virtual panel and the user. In response to a determination that the vergence distance of the user is greater than the distance between the virtual panel and the user by a first threshold distance, the system may adjust the virtual panel to have a second opacity which is less opaque than the first opacity. The virtual panel having the second opacity may allow the user to see through the virtual panel. For example, the virtual panels having the second opacity may be transparent or translucent to the user and allow the user to see objects behind the virtual panel; [0025]: the system may adjust the opacity of the virtual panel and the opacity of the display content on the panel differently. The display content may have the same opacity or different opacity with the virtual panel which is the background to the display content. When the system adjusts the virtual panel from the first opacity (e.g., opaque) to the second opacity (e.g., transparent), the system may adjust the display content opacity to a fourth opacity. The fourth opacity may be equal to or more opaque than the second opacity. As an example and not by way of limitation, the virtual panel and display content may both become transparent or translucent with the same opacity and allow the user to see through. As another example, the virtual panel may be transparent but the display content may be opaque or translucent. As another example, the virtual panel including the display content may be transformed to other visual forms such as a wireframe which allow the user to see through. As another example, the virtual panel including the display elements may be partially visible to the user. The virtual panel may keep one portion of the panel visible to the user and hide other portions of the panel from the user.
As another example, the virtual panel may keep one or more elements (e.g., text, icons, images) of the display content visible to the user and hide the rest of the display content; [0045]: In particular embodiments, when a virtual panel (e.g., 320) is turned into transparent or translucent, one or more visual anchors may be displayed to indicate the disappeared virtual panel. As an example and not by way of limitation, when the virtual panel 320 becomes transparent, the visual anchor 324 may be displayed at a corner of the transparent virtual panel 320 in an unintrusive manner to the view of the first user 360. The visual anchor 324 may be a corner-fitting object or an icon associated with the virtual panel 320. The visual anchor 324 may have an opacity that enables a clear visual effect to the first user 360. When the first user 360 ends the interaction with the second user 370 and wants to bring back the virtual panel 320, the first user 360 may focus his/her eyes on the visual anchor 324. The system 300 may detect that the first user 360 is looking at the visual anchor 324 and adjust the opacity of the virtual panel 320 to make it visible. In particular embodiments, the virtual anchor may be displayed only when the associated virtual panel has been made transparent or translucent. In particular embodiments, the virtual anchor may be displayed both when the associated virtual panel is visible and when the associated virtual panel is made transparent. As an example and not by way of limitation, the visual anchor 318 associated with the virtual panel 310 may be displayed (e.g., at a corner of the virtual panel 310) when the virtual panel 310 is visible to the first user 360. As another example, the visual anchor 332 associated with the virtual panel 332 may be transparent or translucent when the virtual panel 330 is visible to the first user 360; [0046]: FIG. 3B illustrates an example usage of an artificial reality system 300 by a user 360 watching a TV 380 behind virtual display panels. The user 360 may use the virtual panels (e.g., 310, 320, 330, 340) for reading or working activities. The user 360 may occasionally watch a TV 380 which is partially behind the virtual panels 320 and partially behind the virtual panel 310. The system 300 may determine the vergence distance and gazing point of the user 360 using an eye tracking system. As soon as the user 360 moves his eyes from the virtual panels to the TV 380, the system 300 may detect that as an indication of the user 360 to look through the virtual panels (e.g., 310, 320) that interfere with the view of the user 360. For example, the system 300 may determine that the vergence distance of the user 360 is beyond the virtual panels (e.g., 310, 320) for a threshold distance and the virtual panels 310 and 320 are at least partially within the view of the user looking at the TV 380. The system 300 may change the opacity of the virtual panels 310 and 320 into transparent or translucent to allow the user to see through them. When the virtual panels 310 and 320 become transparent or translucent, the associated visual anchors 318 and 324 may be displayed at the corners of the transparent virtual panel 310 and 320, respectively. When the user 360 moves his eyes from the TV 380 back to the virtual panel 310 or 320, the user 360 just needs to focus on his eyes on the visual anchor 318 or 324. The system 300 may detect that the user is looking at the visual anchor 318 or 324 and change the corresponding panel 310 or 320 back to visible. 
The virtual panels 330 and 340 may or may not change since they are not interfering with the user view for watching the TV 380; [0047]: FIG. 4 illustrates an example usage of an artificial reality system 400 by a first user 402 walking and interacting with a second user 408. The artificial reality system 400 may display a number of virtual panels (e.g., 412, 414, 416) to a first user 402 wearing the headset 404. The first user 402 may be walking on a street and using the headset 404 for navigating purpose. The virtual panels 412, 414, and 416 may display map information to the first user 402. The first user 402 may stop walking to look at the map information displayed on the virtual panels 412, 414, and 416. The virtual panels 412, 414, and 416 may be opaque to allow the first user 402 to have a clear view of the displayed information. The virtual panels 412, 414, and 416 may block the first user 402 from see through them. The system 300 may automatically change the opacity of the virtual panels 412, 414, and 416 when the user needs to look through them). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Terrano’s knowledge of making the background layer of the display content transparent, as taught, and to modify the system of Wong and Terre accordingly, because such a system dismisses the interfering virtual panel to allow the user to see the environment and avoid running into hazardous objects ([0035]). Claims 34 and 40 are similar in scope to claim 26, and therefore the examiner provides similar rationale to reject these claims. Response to Arguments Applicant’s arguments with respect to claim(s) 21-40 have been considered but are moot because the new ground of rejection does not rely on the same combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Response to the argument that Wong does not disclose identifying, while the portion of the multimedia content is displayed with the second opacity, whether the external object is moved out of the FOV of the at least one camera, and restoring the opacity of the portion of the multimedia content based on identifying, while the portion of the multimedia content is displayed with the second opacity, that the external object is moved out of the FOV of the at least one camera. See pages 13-15 of Applicant’s Remarks filed on 12/22/2025. Wong and further in view of Terre teaches the above limitations. 
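For reference, the layered-opacity behavior the examiner attributes to Terrano above can be sketched as follows. The class, the field names, and the specific opacity and threshold values are illustrative assumptions for the sketch, not Terrano's disclosed implementation; only the separate panel/content opacities and the vergence-distance trigger come from the cited paragraphs.

```python
from dataclasses import dataclass


@dataclass
class VirtualPanel:
    """A processor-generated background layer with display content superimposed on it."""
    panel_opacity: float = 1.0    # first opacity: opaque background layer
    content_opacity: float = 1.0  # opacity of the display content on the panel
    distance_m: float = 1.0       # fixed distance of the panel from the user


def update_panel(panel: VirtualPanel, vergence_distance_m: float,
                 threshold_m: float = 0.3) -> None:
    """Adjust panel and content opacities based on the user's vergence distance.

    If the vergence distance exceeds the panel distance by a threshold (the user is
    looking at something behind the panel), the background layer is made fully
    transparent while the content is set to a separate, possibly more opaque, value.
    """
    if vergence_distance_m > panel.distance_m + threshold_m:
        panel.panel_opacity = 0.0    # second opacity: background layer no longer shown
        panel.content_opacity = 0.4  # fourth opacity: content may stay partially visible
    else:
        panel.panel_opacity = 1.0
        panel.content_opacity = 1.0
```

Setting the background layer's opacity to zero while the content keeps a separate value corresponds to the reading of "cease displaying at least a portion of the background layer" applied above.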
Especially, Terre teaches while the portion of the multimedia content is displayed with the second opacity (when a non-user or a person enters the field of view of the user, the opacity of the virtual content is adjusted (reduced) such that the user can at least partially see the person entering his/her field of view; [0303]: Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0306]: if a person is detected entering the field of view, one or more of the following extended reality display parameter changes might occur, based on rules that, upon detection of the person, cause the adjustment to occur: change an opacity level of the display so that the person becomes at least partially visible; [0317]: a default rule may be to adjust the opacity of the extended reality display to a 50% setting such that the user may perceive the change in the physical environment; [0038]: if a person approaches the wearer, the rule may cause the at least one adjustable extended reality display parameter to change (e.g., adjusting the opacity of the virtually displayed content and/or reducing the size of at least one virtual screen associated with the virtually displayed content) so that the wearer can see the non-user), identify whether the external object is moved out of the FOV of the at least one camera (the wearable extended reality appliance is configured to detect changes to the physical environment around the user such as when a person enters or exits the field of view of the imaging sensor of a user’s wearable extended reality appliance; [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0210]: the at least one image sensor of the wearable extended reality appliance may continuously or periodically monitor the scenes in the field of view of the at least one image sensor; [0294]: the wearable extended reality appliance may be configured to detect changes to the physical environment around the user and to automatically adjust one or more display parameters of an extended reality display being provided by the wearable extended reality appliance, thereby heightening the user's awareness of the environmental changes. For example, a change to the physical environment may include another person (i.e., a non-user) walking toward or into the field of view of the wearable extended reality appliance; [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. 
An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring); and based on the external object being identified while the portion of the multimedia content is displayed with the second opacity as moved out of the FOV of the at least one camera, restore the opacity of the portion of the multimedia content to the first opacity (when a person exits the user’s field of view, ongoing change in the physical environment of the user is detected to no longer occur, and the opacity of the virtual content that was reduced when the environmental change first occurred (i.e., when the person entered into the user’s field of view) is automatically returned to the prior state (original opacity at which the virtual content was displayed prior to the change in environment of the user); [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. 
An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring). Response to the argument that Wong fails to disclose the condition associated with the restoration of opacity is that the physical object (or external object) is moved out of the field of view of the camera while the opacity of multimedia content is reduced. See page 15 of Applicant’s Remarks filed on 12/22/2025. Wong and further in view of Terre teaches the above limitations. Terre teaches when a person enters the field of view of the user, the opacity of the displayed virtual content is adjusted (reduced) such that the user can at least partially see the person entering his/her field of view, and when the person exits the user’s field of view, ongoing change in the physical environment of the user is detected to no longer occur, and the opacity of the virtual content that was reduced when the environmental change first occurred (i.e., when the person entered into the user’s field of view) is automatically returned to the prior state (original opacity at which the virtual content was displayed prior to the change in environment of the user). 
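In procedural terms, the enter/exit behavior the examiner reads onto Terre can be sketched as a simple per-frame check. The function name, the state dictionary, and the per-frame calling convention are assumptions for illustration; the 0.5 value mirrors the 50% setting in Terre's cited paragraph [0317].

```python
def adjust_for_person_in_fov(person_in_fov: bool, state: dict) -> float:
    """Reduce content opacity while a person is in the camera's field of view,
    and restore the prior opacity once the person has left it."""
    state.setdefault("opacity", 1.0)  # first opacity before any environmental change
    if person_in_fov and not state.get("reduced"):
        state["prior_opacity"] = state["opacity"]  # remember the value to restore
        state["opacity"] = 0.5                     # second opacity while the change is ongoing
        state["reduced"] = True
    elif not person_in_fov and state.get("reduced"):
        state["opacity"] = state.pop("prior_opacity", 1.0)  # restore the first opacity
        state["reduced"] = False
    return state["opacity"]
```

Called once per camera frame with a person detector's output, the sketch reduces the opacity of the affected portion while the person is in the field of view and restores the stored prior value once the person has moved out of it.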
Especially, Terre teaches while the portion of the multimedia content is displayed with the second opacity (when a non-user or a person enters the field of view of the user, the opacity of the virtual content is adjusted (reduced) such that the user can at least partially see the person entering his/her field of view; [0303]: Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0306]: if a person is detected entering the field of view, one or more of the following extended reality display parameter changes might occur, based on rules that, upon detection of the person, cause the adjustment to occur: change an opacity level of the display so that the person becomes at least partially visible; [0317]: a default rule may be to adjust the opacity of the extended reality display to a 50% setting such that the user may perceive the change in the physical environment; [0038]: if a person approaches the wearer, the rule may cause the at least one adjustable extended reality display parameter to change (e.g., adjusting the opacity of the virtually displayed content and/or reducing the size of at least one virtual screen associated with the virtually displayed content) so that the wearer can see the non-user), identify whether the external object is moved out of the FOV of the at least one camera (the wearable extended reality appliance is configured to detect changes to the physical environment around the user such as when a person enters or exits the field of view of the imaging sensor of a user’s wearable extended reality appliance; [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0210]: the at least one image sensor of the wearable extended reality appliance may continuously or periodically monitor the scenes in the field of view of the at least one image sensor; [0294]: the wearable extended reality appliance may be configured to detect changes to the physical environment around the user and to automatically adjust one or more display parameters of an extended reality display being provided by the wearable extended reality appliance, thereby heightening the user's awareness of the environmental changes. For example, a change to the physical environment may include another person (i.e., a non-user) walking toward or into the field of view of the wearable extended reality appliance; [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. 
An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring); and based on the external object being identified while the portion of the multimedia content is displayed with the second opacity as moved out of the FOV of the at least one camera, restore the opacity of the portion of the multimedia content to the first opacity (when a person exits the user’s field of view, ongoing change in the physical environment of the user is detected to no longer occur, and the opacity of the virtual content that was reduced when the environmental change first occurred (i.e., when the person entered into the user’s field of view) is automatically returned to the prior state (original opacity at which the virtual content was displayed prior to the change in environment of the user); [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. 
An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring). Response to the argument that Wong fails to disclose based on the external object being identified in the FOV of the at least one camera while displaying the multimedia content with the first opacity, reduce an opacity of a portion of the multimedia content displayed on the displays from the first opacity to a second opacity, for allowing the user to view the external object through the portion of the multimedia content with the second opacity; while the portion of the multimedia content is displayed with the second opacity, identify whether the external object is moved out of the FOV of the at least one camera; and based on the external object being identified while the portion of the multimedia content is displayed with the second opacity as moved out of the FOV of the at least one camera, restore the opacity of the portion of the multimedia content to the first opacity. See page 15 of Applicant’s Remarks filed on 12/22/2025. Wong and further in view of Terre teaches the above limitations. Especially, Wong teaches based on the external object being identified in the FOV of the at least one camera ([0096]: some embodiments may utilize data from video cameras. As a specific example, an HMD may include front-facing cameras, which may be configured to capture images that extend beyond the field of view provided in the view region. Then, an integral or remotely attached computing device may employ image processing techniques to determine that a portion of the captured image represents a physical object and further estimate the distance between the camera and the object; [0131]: a physical object is detected in front of the HMD but slightly left of the field of view's center) while displaying the multimedia content with the first opacity (as shown in fig. 
7A, virtual objects 706 and 708 are displayed with a first opacity/transparency), reduce an opacity of a portion of the multimedia content (only a portion of the virtual object that is directly in front of a detected physical object is made transparent) displayed on the displays from the first opacity to a second opacity (as shown in fig. 7A, virtual objects 706 and 708 are displayed with a first opacity/transparency; as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; making virtual objects at least partially transparent inherently implies that the transparency/opacity of the virtual objects is changed from a first transparency/opacity to a second transparency/opacity), for allowing the user to view the external object through the portion of the multimedia content with the second opacity (as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; fig. 6 step 608; fig. 7G; [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. 
For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque); while the portion of the multimedia content is displayed with the second opacity, identify the external object moved beyond a threshold distance (as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; the system determines that the detected object, such as a cyclist, displayed a higher transparency (or reduced opacity) has turned a corner within a threshold distance of the HMD; [0033]: The object detection procedure may run continuously as a background process on the wearable computer or it may run only when activated by the wearer; [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; [0106]: In some embodiments, distance determination or collision avoidance actions using distance as a basis for activation may only apply to physical objects sufficiently overlaid by the HMD. For example, a system may determine that a physical object is near to an HMD, but that view of the object is not overlaid by the HMD. In this example, a collision-avoidance action that de-emphasizes virtual objects displayed to the HMD would not provide a less obstructed view of the physical object; [0107]: Some embodiments may use determination of the relative movement of an object as a basis for activating a collision-avoidance action. For instance, some exemplary embodiments may be configured to activate a collision-avoidance action only in response to objects that are sufficiently approaching the display. In such cases, determined distance may still be used as a secondary basis for activating a collision-avoidance action, but only after the condition of determined relative movement is satisfied. Some embodiments may include setting a relative velocity threshold in which physical objects determined to be moving sufficiently towards the HMD with a velocity higher than a threshold velocity may be the basis for activating a collision-avoidance action. Likewise, if an object is determined to be accelerating towards the HMD at greater than a threshold rate of acceleration, an exemplary wearable computer may responsively initiate a collision-avoidance action. 
Other exemplary movement patterns may also be used; [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; [0132]: Some exemplary procedures may include steps for de-activating the collision-avoidance action in response to receiving indication that the physical object is no longer a hazard or as a result of user-input. Exemplary collision-avoidance actions may therefore include procedures to move the virtual objects, which occupied the view region before activation of the collision-avoidance action, back to their original locations in the view region. Further, such procedures may also include re-establishing the size and opacity of the virtual objects. For example, an exemplary de-activation procedure may include storing the original position and content of the virtual objects in the view region before a collision-avoidance action is activated. Then, in response to the collision-avoidance action being de-activated, the procedure may move the original content of virtual objects back to their original positions in the view region; [0133]: Subsequent to initiating the collision-avoidance action, an exemplary embodiment may also include techniques for de-activating the collision-avoidance action in response to receiving second data from the sensors. For example, a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object. The wearer, alerted by the collision-avoidance action, may adjust course to avoid the stationary object. Then, the system may determine that the object no longer has a relative movement directed at the HMD and responsively de-activate the collision-avoidance action. As another example, the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. An exemplary system may detect the nearby cyclist and activate a collision-avoidance action to alert the wearer. Subsequently, the cyclist may move beyond the threshold distance from the HMD and responsively the system may de-activate the collision-avoidance action. Many other exemplary de-activation procedures may be utilized); and based on the external object being identified while the portion of the multimedia content is displayed with the second opacity as moved beyond the threshold distance, restore the opacity of the portion of the multimedia content to the first opacity (as shown in fig. 
7A, virtual objects 706 and 708 are displayed with a first opacity/transparency; as shown in fig 7G, virtual objects 706 and 708 are displayed with a second opacity/transparency; the system determines that the detected object, such as a cyclist, displayed a higher transparency (or reduced opacity) has turned a corner within a threshold distance of the HMD, the system de-activates the collision avoidance action and re-establishes the transparency/opacity of the virtual objects to their original state or value; [0033]: The object detection procedure may run continuously as a background process on the wearable computer or it may run only when activated by the wearer; [0089]: More specifically, method 600 involves the wearable computer displaying a user-interface, which includes a view region and at least one content region that is located outside of the view region, on a substantially transparent display of an HMD, as shown by block 602. Initially, the state of the user-interface is such that the view region substantially fills the field of view of the HMD, and the at least one content region is not fully visible in the field of view. In this initial state, the wearable computer displays one or more virtual objects in the view region, as shown by block 604. The wearable computer then uses data from one or more first sensors as a basis for determining a distance between the HMD and a physical object, as shown by block 606. After determining the distance between the HMD and the physical object, the wearable computer may use the determined distance as basis for initiating a collision-avoidance action that includes de-emphasizing at least one of the displayed virtual objects so as to provide a less-obstructed view of the physical object through the transparent display of the HMD, as shown by block 608; [0106]: In some embodiments, distance determination or collision avoidance actions using distance as a basis for activation may only apply to physical objects sufficiently overlaid by the HMD. For example, a system may determine that a physical object is near to an HMD, but that view of the object is not overlaid by the HMD. In this example, a collision-avoidance action that de-emphasizes virtual objects displayed to the HMD would not provide a less obstructed view of the physical object; [0107]: Some embodiments may use determination of the relative movement of an object as a basis for activating a collision-avoidance action. For instance, some exemplary embodiments may be configured to activate a collision-avoidance action only in response to objects that are sufficiently approaching the display. In such cases, determined distance may still be used as a secondary basis for activating a collision-avoidance action, but only after the condition of determined relative movement is satisfied. Some embodiments may include setting a relative velocity threshold in which physical objects determined to be moving sufficiently towards the HMD with a velocity higher than a threshold velocity may be the basis for activating a collision-avoidance action. Likewise, if an object is determined to be accelerating towards the HMD at greater than a threshold rate of acceleration, an exemplary wearable computer may responsively initiate a collision-avoidance action. 
Other exemplary movement patterns may also be used; [0114]: De-emphasizing virtual objects may include, for example, making virtual objects at least partially transparent; [0125]: collision-avoidance action in which virtual objects 706 and 708 are made at least partially transparent so that physical object 710 may be more easily visible through the transparent display of the HMD. An exemplary embodiment may apply transparency only to objects that are determined to be sufficiently obscuring the physical object or, alternatively, to all virtual objects within the view region (as depicted in FIG. 7G). Additionally, some embodiments may be configured to make only a portion of each virtual object transparent. For example, if a single virtual object covers the entire screen, an exemplary technique may cause a window of the virtual object to become transparent directly in front of a detected physical object while leaving the remainder of the virtual object substantially opaque; [0132]: Some exemplary procedures may include steps for de-activating the collision-avoidance action in response to receiving indication that the physical object is no longer a hazard or as a result of user-input. Exemplary collision-avoidance actions may therefore include procedures to move the virtual objects, which occupied the view region before activation of the collision-avoidance action, back to their original locations in the view region. Further, such procedures may also include re-establishing the size and opacity of the virtual objects. For example, an exemplary de-activation procedure may include storing the original position and content of the virtual objects in the view region before a collision-avoidance action is activated. Then, in response to the collision-avoidance action being de-activated, the procedure may move the original content of virtual objects back to their original positions in the view region; [0133]: Subsequent to initiating the collision-avoidance action, an exemplary embodiment may also include techniques for de-activating the collision-avoidance action in response to receiving second data from the sensors. For example, a system may activate a collision-avoidance action in response to determining that a stationary physical object has a relative movement pointed directly at the wearer of an HMD because the wearer is walking towards the stationary object. The wearer, alerted by the collision-avoidance action, may adjust course to avoid the stationary object. Then, the system may determine that the object no longer has a relative movement directed at the HMD and responsively de-activate the collision-avoidance action. As another example, the wearer of an HMD may be standing still when a cyclist suddenly turns a corner within a threshold distance of the HMD. An exemplary system may detect the nearby cyclist and activate a collision-avoidance action to alert the wearer. Subsequently, the cyclist may move beyond the threshold distance from the HMD and responsively the system may de-activate the collision-avoidance action. Many other exemplary de-activation procedures may be utilized). 
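A rough sketch of the activate/de-activate logic the examiner reads onto Wong's cited paragraphs is given below. The distance and approach-speed thresholds, the 0.2 de-emphasis value, and the dictionary of per-object opacities are assumptions for illustration; the cited paragraphs describe the behavior only in terms of thresholds, de-emphasis, and re-establishing the stored opacity.

```python
from typing import Dict, Optional


class CollisionAvoidance:
    """De-emphasize virtual objects when a physical object approaches within a threshold,
    and re-establish their stored opacities once it moves back beyond the threshold."""

    def __init__(self, distance_threshold_m: float = 2.0,
                 approach_speed_threshold: float = 0.5) -> None:
        self.distance_threshold_m = distance_threshold_m
        self.approach_speed_threshold = approach_speed_threshold
        self._saved: Optional[Dict[str, float]] = None  # original opacities while active

    def update(self, opacities: Dict[str, float],
               distance_m: float, approach_speed: float) -> None:
        hazard = (distance_m < self.distance_threshold_m
                  and approach_speed > self.approach_speed_threshold)
        if hazard and self._saved is None:
            # Activate: store original opacities, then make the objects partially transparent.
            self._saved = dict(opacities)
            for name in opacities:
                opacities[name] = min(opacities[name], 0.2)
        elif not hazard and self._saved is not None:
            # De-activate: restore each virtual object's original opacity.
            opacities.update(self._saved)
            self._saved = None
```

Storing the original opacities on activation and writing them back on de-activation mirrors the de-activation procedure described in Wong's paragraph [0132].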
Further, Terre teaches while the portion of the multimedia content is displayed with the second opacity (when a non-user or a person enters the field of view of the user, the opacity of the virtual content is adjusted (reduced) such that the user can at least partially see the person entering his/her field of view; [0303]: Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0306]: if a person is detected entering the field of view, one or more of the following extended reality display parameter changes might occur, based on rules that, upon detection of the person, cause the adjustment to occur: change an opacity level of the display so that the person becomes at least partially visible; [0317]: a default rule may be to adjust the opacity of the extended reality display to a 50% setting such that the user may perceive the change in the physical environment; [0038]: if a person approaches the wearer, the rule may cause the at least one adjustable extended reality display parameter to change (e.g., adjusting the opacity of the virtually displayed content and/or reducing the size of at least one virtual screen associated with the virtually displayed content) so that the wearer can see the non-user), identify whether the external object is moved out of the FOV of the at least one camera (the wearable extended reality appliance is configured to detect changes to the physical environment around the user such as when a person enters or exits the field of view of the imaging sensor of a user’s wearable extended reality appliance; [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0210]: the at least one image sensor of the wearable extended reality appliance may continuously or periodically monitor the scenes in the field of view of the at least one image sensor; [0294]: the wearable extended reality appliance may be configured to detect changes to the physical environment around the user and to automatically adjust one or more display parameters of an extended reality display being provided by the wearable extended reality appliance, thereby heightening the user's awareness of the environmental changes. For example, a change to the physical environment may include another person (i.e., a non-user) walking toward or into the field of view of the wearable extended reality appliance; [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. 
An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring); and based on the external object being identified while the portion of the multimedia content is displayed with the second opacity as moved out of the FOV of the at least one camera, restore the opacity of the portion of the multimedia content to the first opacity (when a person exits the user’s field of view, ongoing change in the physical environment of the user is detected to no longer occur, and the opacity of the virtual content that was reduced when the environmental change first occurred (i.e., when the person entered into the user’s field of view) is automatically returned to the prior state (original opacity at which the virtual content was displayed prior to the change in environment of the user); [0007]: receiving image data captured by at least one image sensor of a wearable extended reality appliance, the image data including representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0188]: The at least one image sensor of the wearable extended reality appliance may have a field of view. A field of view may refer to a spatial extent that may be observed or detected at any given moment; [0189]: The image data captured by the at least one image sensor of the wearable extended reality appliance may include representations of a plurality of physical objects in a field of view associated with the at least one image sensor of the wearable extended reality appliance; [0303]: detecting in the image data a specific environmental change unrelated to the virtually displayed content. 
An environmental change may include a change to the physical environment around the user of the wearable extended reality appliance … Another example of an environmental change may include an object moving into a field of view of the image sensor (e.g., a person walking in front of the user); [0311]: the adjusted extended reality display parameters may include permanent changes to the extended reality display (and/or to the extended reality environment), meaning that the adjusted extended reality display parameters remain in effect until the extended reality display parameters are again adjusted, either by another environmental change or by a manual action of the user to adjust the extended reality display parameters; [0312]: In some embodiments, the adjusted extended reality display parameters may be in effect while the environmental change is ongoing, and the extended reality display (and/or the extended reality environment) may return to a prior state when the environmental change is no longer occurring. For example, if a person walks in front of the user, the brightness of the extended reality display may be dimmed such that the user can see the person and when the person exits the user's field of view, the extended reality display may return to a prior brightness setting. In some embodiments, the extended reality display (and/or the extended reality environment) may automatically return to the prior state when the environmental change is no longer occurring). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Abersfelder et al. (US 5646614) describes that when during a rearward approach of the vehicle 10, the obstacle 16 or 17' moves out of the field of view of the camera 11, it is detected as not collision-dangerous. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JWALANT B AMIN whose telephone number is (571)272-2455. The examiner can normally be reached Monday-Friday 10am - 630pm CST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JWALANT AMIN/Primary Examiner, Art Unit 2612

Prosecution Timeline

Apr 19, 2024
Application Filed
Nov 27, 2024
Response after Non-Final Action
Sep 22, 2025
Non-Final Rejection — §103
Dec 04, 2025
Applicant Interview (Telephonic)
Dec 04, 2025
Examiner Interview Summary
Dec 22, 2025
Response Filed
Feb 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597091
COMPUTER-IMPLEMENTED METHOD, APPARATUS, SYSTEM AND COMPUTER PROGRAM FOR CONTROLLING A SIGHTEDNESS IMPAIRMENT OF A SUBJECT
2y 5m to grant Granted Apr 07, 2026
Patent 12592020
TRACKING SYSTEM, TRACKING METHOD, AND SELF-TRACKING TRACKER
2y 5m to grant Granted Mar 31, 2026
Patent 12585324
PROCESSOR, IMAGE PROCESSING DEVICE, GLASSES-TYPE INFORMATION DISPLAY DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
2y 5m to grant Granted Mar 24, 2026
Patent 12585130
LUMINANCE-AWARE UNINTRUSIVE RECTIFICATION OF DEPTH PERCEPTION IN EXTENDED REALITY FOR REDUCING EYE STRAIN
2y 5m to grant Granted Mar 24, 2026
Patent 12579571
METHOD FOR IMPROVING AESTHETIC APPEARANCE OF RETAILER GRAPHICAL USER INTERFACE
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
94%
With Interview (+15.3%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 631 resolved cases by this examiner. Grant probability derived from career allow rate.
