Prosecution Insights
Last updated: April 19, 2026
Application No. 18/786,219

WEARABLE ELECTRONIC DEVICE FOR DISPLAYING VIRTUAL OBJECT AND METHOD FOR CONTROLLING THE SAME

Non-Final OA §103
Filed: Jul 26, 2024
Examiner: LIU, GORDON G
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 4m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 83% (556 granted / 673 resolved), +20.6% vs TC avg (above average)
Interview Lift: +15.1% among resolved cases with an interview (strong)
Typical Timeline: 2y 4m average prosecution; 29 applications currently pending
Career History: 702 total applications across all art units

Statute-Specific Performance

§101: 6.7% (-33.3% vs TC avg)
§103: 73.3% (+33.3% vs TC avg)
§102: 3.0% (-37.0% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)
Tech Center averages are estimates based on career data from 673 resolved cases.
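A minimal sketch, assuming the dashboard figures are related by simple arithmetic, of how the numbers above fit together: the career allow rate is granted over resolved cases, the with-interview figure is roughly the career rate plus the interview lift, and each statute delta is the examiner's rate minus a Tech Center baseline. The single 40% baseline and all variable names below are back-derived illustrations, not values taken from the underlying analytics.

    # Illustrative arithmetic only; not the dashboard's actual methodology.
    granted, resolved = 556, 673
    career_allow_rate = granted / resolved               # ~0.826 -> shown as 83%
    interview_lift = 0.151                               # +15.1% in resolved cases with an interview
    with_interview = career_allow_rate + interview_lift  # ~0.977 -> shown as 98%

    # Statute-specific rates compared against an assumed Tech Center baseline.
    # The 0.40 baseline is back-derived from the reported deltas (e.g. 73.3% - 33.3%).
    statute_rates = {"§101": 0.067, "§103": 0.733, "§102": 0.030, "§112": 0.057}
    tc_baseline = 0.40

    for statute, rate in statute_rates.items():
        print(f"{statute}: {rate:.1%} ({rate - tc_baseline:+.1%} vs TC avg)")

    print(f"Career allow rate: {career_allow_rate:.0%}, with interview: {with_interview:.0%}")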

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this Office action.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 8-10, 12-14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rober et al. (US 20180089900 A1) in view of Sachdev et al. (US 20130235351 A1), further in view of Rakshit et al. (US 20200126276 A1).

Regarding claim 1, Rober teaches a wearable electronic device (See Rober: Figs. 6-7, and [0051], “FIG. 6 illustrates a VR system in a vehicle, according to some embodiments. FIG. 6 shows a VR system as shown in FIG. 1 that includes HMDs 692; however, note that a similar configuration may implement a VR system as shown in FIG. 2. As shown in FIG. 6, a vehicle 600 (which may be, but is not necessarily, an autonomous vehicle) may include a VR controller 610, vehicle systems 626 (e.g., vehicle control systems such as throttle, braking, steering, and active suspension systems, as well as navigation, HVAC and AV systems), internal and external sensors (e.g., LiDAR for depth mapping, video cameras for internal or external views, IMUs, localization systems, etc.). The vehicle 600 may include one or more seats for passengers 690. In this example, a forward-facing and rear-facing seat are used for illustrative purposes. Passenger 690A may be sitting in a rear-facing seat, while passenger 690B may be sitting in a forward-facing seat. Passengers 690A and 690B are wearing respective HMDs 692A and 692B”) comprising: a camera (See Rober: Figs. 6-7, and [0058], “As shown in FIG. 7, VR controller 710 may receive various inputs (e.g., localization, acceleration, braking, steering, motion, orientation direction, video, depth maps, etc.) from vehicle internal and external sensors and control systems 702. Vehicle internal and external sensors may include, but are not limited to, depth cameras (e.g., LiDAR), video cameras, inertial-measurement units (IMUs)”); a display (See Rober: Figs. 6-7, and [0055], “Note that a HMD may include two projectors 724, one for each eye, that display or project virtual content to two displays 726, e.g. two screens in a near-eye VR system or reflective lenses in a direct retinal projector system”); at least one processor, comprising processing circuitry (See Rober: Figs. 6-7, and [0054], “FIG. 7 is a block diagram illustrating components of a VR system in a vehicle, according to some embodiments. As shown in FIG. 7, a vehicle 700 (which may be, but is not necessarily, an autonomous vehicle) may include a VR controller 710. VR controller 710 may include one or more processors 712.
Processor(s) 712 may include one or more of various types of processors, CPUs, image signal processors (ISPs), graphics processing units (GPUs), coder/decoders (codecs), memory, and/or other components for processing inputs from various sources to generate VR content and other output signals”); and memory storing instructions that, when executed by the at least one processor individually or collectively, cause the wearable electronic device (See Rober: Figs. 6-7, and [0054], “VR controller 710 may also include memory 713 that may, for example, store program instructions executable by processor(s) 712 to perform the functionalities of the VR controller 710 to process inputs from various sources and generate VR content and other output signals, as well as data that may be used by the program instructions. VR controller 710 may also include interfaces 714 to various vehicle systems, external sources 790, VR projection device(s) 720, and passenger's user device(s) 792. The interfaces 714 may include wired and/or wireless connections to the various components”) to: identify a displayable area corresponding to an external display included in an image of a space acquired through the camera (See Rober: Figs. 6-7, and [0051], “FIG. 6 illustrates a VR system in a vehicle, according to some embodiments. FIG. 6 shows a VR system as shown in FIG. 1 that includes HMDs 692; however, note that a similar configuration may implement a VR system as shown in FIG. 2. As shown in FIG. 6, a vehicle 600 (which may be, but is not necessarily, an autonomous vehicle) may include a VR controller 610, vehicle systems 626 (e.g., vehicle control systems such as throttle, braking, steering, and active suspension systems, as well as navigation, HVAC and AV systems), internal and external sensors (e.g., LiDAR for depth mapping, video cameras for internal or external views, IMUs, localization systems, etc.). The vehicle 600 may include one or more seats for passengers 690. In this example, a forward-facing and rear-facing seat are used for illustrative purposes. Passenger 690A may be sitting in a rear-facing seat, while passenger 690B may be sitting in a forward-facing seat. Passengers 690A and 690B are wearing respective HMDs 692A and 692B”; and Fig. 2, and [0045], “FIG. 2 illustrates a VR system that projects VR content to a window of a vehicle for viewing by passengers, according to some embodiments. In these embodiments, a VR system 200 in a vehicle includes a VR controller 210 (e.g., mounted under the dash) and a projector 220 system configured to project virtual content onto a window 208 (e.g., the windshield of the vehicle)”. Note that the display of the computers and the vehicle windows on with image projected are mapped to the displayable area, and the windshield may be first area of the displayable area.); identify, based on at least a portion of the external display being excluded from the displayable area due to movement of the external display, a first area of the displayable area, in which a portion of the external display is disposed (See Rober: Fig. 2, and [0045], “FIG. 2 illustrates a VR system that projects VR content to a window of a vehicle for viewing by passengers, according to some embodiments. In these embodiments, a VR system 200 in a vehicle includes a VR controller 210 (e.g., mounted under the dash) and a projector 220 system configured to project virtual content onto a window 208 (e.g., the windshield of the vehicle)”. 
Note that the vehicle windows onto which the image is projected are mapped to the displayable area, and the windshield may be the first area of the displayable area), and a second area of the displayable area which corresponds to a remaining area excluding the first area; and control the display to display a virtual screen on the second area. However, Rober fails to explicitly disclose identifying, based on at least a portion of the external display being excluded from the displayable area due to movement of the external display, a second area of the displayable area which corresponds to a remaining area excluding the first area, and controlling the display to display a virtual screen on the second area. However, Sachdev teaches identifying a second area of the displayable area which corresponds to a remaining area excluding the first area (See Sachdev: Figs. 4-7, and [0040], “The image 412 can cover most any portion of the vehicle 300. As shown in FIG. 4, the image 412 can cover any or all of the ceiling 304, a second, or B pillar (2), a third, or C pillar (3), and the rear panel 306. The image 412 can also be generated and projected to cover any trim or other intermediate material, such as material present between windows and pillars and between windows and the ceiling, thereby rendering the image and window views, together, contiguous, as shown in FIG. 4”. Note that the ceiling, pillars, rear panel, etc. are mapped to the second display area of the displayable area while the windshield is mapped to the first display area of the displayable area); and controlling the display to display a virtual screen on the second area (See Sachdev: Figs. 4-7, and [0051], “FIG. 7 shows another view of the interior of the vehicle 300. In this embodiment, the image 412 forms a virtual sunroof”; and [0052], “As shown in FIG. 7, the image 412 can include various details of an actual sunroof, including simulated sunroof trim 702 and, when the virtual sunroof is "closed," simulation of glass 704--e.g., streak marks and reflection marks at the simulated glass material that would be due to sun light, building lights, street lights, etc.”. Note that the virtual sunroof is mapped to the virtual screen, and the image details 412 of the virtual sunroof are mapped to controlling the display to display a virtual screen on the second area). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Rober to identify, based on at least a portion of the external display being excluded from the displayable area due to movement of the external display, a second area of the displayable area which corresponds to a remaining area excluding the first area, and to control the display to display a virtual screen on the second area, as taught by Sachdev, in order to provide a virtual sunroof display which accurately represents a rendition of the environment above the vehicle in real time (See Sachdev: Figs. 7-8, and [0037], “In one embodiment, it is preferred that the image 412 provide a real-time representation of the actual environment above the vehicle 300, such as by displaying data received from one or more imaging devices recording the actual environment above the vehicle. The imaging device system is described further below in connection with FIG. 8”).
Rober teaches a method and system that may generate virtual and mixed reality content, and project the VR content to the windshield of the vehicle for the user in the vehicle, while Sachdev teaches a system and method that may generate virtual content and project it onto any portion of the interior of the vehicle, including the sunroof, pillars, rear panels, etc. Therefore, it would have been obvious to one of ordinary skill in the art to modify Rober by Sachdev to generate and project the virtual/mixed content to the first display area and second display area, or any portion of the vehicle interior, so the users in the vehicle can view the outside environment as needed. The motivation to modify Rober by Sachdev is “Use of known technique to improve similar devices (methods, or products) in the same way”. However, Rober, modified by Sachdev, fails to explicitly disclose that the identification is based on at least a portion of the external display being excluded from the displayable area due to movement of the external display. However, Rakshit teaches identification based on at least a portion of the external display being excluded from the displayable area due to movement of the external display (See Rakshit: Figs. 2-5, and [0072], “Depending on the interior and exterior noise, driver 308 may not notice the presence of ambulance 310. As depicted, automotive computer 312 displays ambulance icon 314 on driver window 306 using heads a display 300 to provide an augmented reality display to driver 308. Ambulance icon 314 is an example of an implementation for visual indicator 234 shown in block form in FIG. 2”; [0079], “In this illustrative example, police officer icon 402 is displayed by heads up display 300 on windshield 304 of automobile 302. Police officer icon 402 is an example of an implementation of visual indicator 234 shown in block form in FIG. 2. This display of police officer icon 402 on the live view seen through windshield 304 provides an augmented reality display that allows driver to focus on the live view without turning focus to a display device located within automobile 302”; and [0084], “In this illustrative example, young people icon 506 to is displayed in association with young people 500 on windshield 304 by a heads up display 300. Young people icon 506 is an example of an implementation of visual indicator 234 shown in block form in FIG. 2. The location of young people icon 506 is selected to indicate the location of young people 500”. Note that as the vehicle is moving, and the external display is moving as well, the display area on which the guidance information is projected is changed from the driver side window to the driver side windshield or the passenger side windshield, i.e., the driver window (the second display area) is excluded (separated by a pillar) from the windshield (the first display area)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Rober to perform the identification based on at least a portion of the external display being excluded from the displayable area due to movement of the external display, as taught by Rakshit, in order to enable detection of sounds from the exterior environment around the vehicle (See Rakshit: Fig. 1, and [0039], “In this illustrative example, automobile 112 includes microphones that enable detecting sounds from an exterior environment around automobile 112. These microphones generate sound data that is analyzed for presence of an object of interest.
In this illustrative example, ambulance 130 generates sound 132 from a siren on ambulance 130. Sound 132 is detected by microphones on automobile 112. As another example, police officer 134 generates sound 136 that is detected by microphones on automobile 112. Sound 136 from police officer 134 can be instructions such as directing traffic for ambulance 130”). Rober teaches a method and system that may generate virtual and mixed reality content, and project the VR content to the windshield of the vehicle for the user in the vehicle; while Rakshit teaches a system and method that may generate virtual content and project the virtual content on to a second display area based on the exterior sound detected by the sensors with the second display area being separated from the first display area. Therefore, it is obvious to one of ordinary skill in the art to modify Rober by Rakshit to generate and project the virtual content to a second display area, different from the first display area, based on the movement of the vehicle and the display and the point of interest (object) detected by the sensor. The motivation to modify Rober by Rakshit is “Use of known technique to improve similar devices (methods, or products) in the same way”. Regarding claim 2, Rober, Sachdev, and Rakshit teach all the features with respect to claim 1 as outlined above. Further, Rober and Rakshit teach that the wearable electronic device of claim 1, further comprising at least one sensor, wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to: identify the movement of the external display, based on at least one of sensor data acquired through the at least one sensor and/or the image acquired through the camera (See Rober: Fig. 7, and [0055], “”In some embodiments, a VR projection device 720 may also include an IMU 728 for detecting motion and orientation of the VR projection device 720 (e.g., HMD). In some embodiments, a VR projection device 720 may include or couple to personal audio output devices 725 such as headphones or earbuds. If the VR projection device 720 is a HMD, audio output 725 may be integrated in the HMD”; and [0058], “As shown in FIG. 7, VR controller 710 may receive various inputs (e.g., localization, acceleration, braking, steering, motion, orientation direction, video, depth maps, etc.) from vehicle internal and external sensors and control systems 702. Vehicle internal and external sensors may include, but are not limited to, depth cameras (e.g., LiDAR), video cameras, inertial-measurement units (IMUs). Vehicle control systems may include, but are not limited to throttle control, braking, steering, navigation, and active suspension systems. VR controller 710 may also obtain inputs (e.g., video and/or audio) from one or more vehicle AV systems 706”. Note that various sensors are used to detect position, motion, etc., and the motion of the vehicles Is mapped to the movement of the external displays, i.e., the vehicle windows, sunroof, rear panels, etc.); and identify the first area and the second area, based on the movement of the external display (See Rakshit: Figs. 2-5, and [0072], “Depending on the interior and exterior noise, driver 308 may not notice the presence of ambulance 310. As depicted, automotive computer 312 displays ambulance icon 314 on driver window 306 using heads a display 300 to provide an augmented reality display to driver 308. 
Ambulance icon 314 is an example of an implementation for visual indicator 234 shown in block form in FIG. 2”; and [0079], “In this illustrative example, police officer icon 402 is displayed by heads up display 300 on windshield 304 of automobile 302. Police officer icon 402 is an example of an implementation of visual indicator 234 shown in block form in FIG. 2. This display of police officer icon 402 on the live view seen through windshield 304 provides an augmented reality display that allows driver to focus on the live view without turning focus to a display device located within automobile 302”. Note that as the vehicle is moving, the display area on which the guidance information is projected is changed from the driver side window, to the driver side windshield the driver window, the windshield is mapped to the first display area, and the driver side window is mapped to the second display area). Regarding claim 3, Rober, Sachdev, and Rakshit teach all the features with respect to claim 1 as outlined above. Further, Rober and Rakshit teach that the wearable electronic device of claim 1, further comprising a communication module comprising communication circuitry, wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to: receive information related to the movement of the external display from an external electronic device which is configured to control the external display and connected to the wearable electronic device through at least the communication module (See Rober: Figs. 1 and 7, and [0044], “The passenger 190 may wear the HMD 112 while, for example, working on a user device 192 (e.g., a notebook or laptop computer). Controller 110 and HMD 112 may be communicatively coupled via a wired (e.g., the user may plug the HMD 112 into a port (e.g., USB port) on the seat or console) or wireless (e.g., Bluetooth) connection. Controller 110 and user device 192 may also be communicatively coupled via a wired (e.g., the user may plug the user device 192 into a port (e.g., USB port) on the seat or console) or wireless (e.g., Bluetooth) connection”; and [0055], “In some embodiments, a VR projection device 720 may also include memory 723 that may, for example, store program instructions executable by processor(s) 722 to perform the functionalities of the VR projection device 720 to connect to, communicate with, and process inputs from VR controller 710, as well as data that may be used by the program instructions. In some embodiments, a VR projection device 720 may also include an IMU 728 for detecting motion and orientation of the VR projection device 720 (e.g., HMD). In some embodiments, a VR projection device 720 may include or couple to personal audio output devices 725 such as headphones or earbuds. If the VR projection device 720 is a HMD, audio output 725 may be integrated in the HMD”. Note that the HMD and controller is connected to the laptop, and the IMU senses the motion of the vehicle and HMD, and this is mapped to the current cited limitation of receiving movement information of the external display, the vehicle windows); and identify the first area and the second area, based on the information related to the movement of the external display (See Rakshit: Figs. 2-5, and [0072], “Depending on the interior and exterior noise, driver 308 may not notice the presence of ambulance 310. 
As depicted, automotive computer 312 displays ambulance icon 314 on driver window 306 using heads a display 300 to provide an augmented reality display to driver 308. Ambulance icon 314 is an example of an implementation for visual indicator 234 shown in block form in FIG. 2”; and [0079], “In this illustrative example, police officer icon 402 is displayed by heads up display 300 on windshield 304 of automobile 302. Police officer icon 402 is an example of an implementation of visual indicator 234 shown in block form in FIG. 2. This display of police officer icon 402 on the live view seen through windshield 304 provides an augmented reality display that allows driver to focus on the live view without turning focus to a display device located within automobile 302”. Note that as the vehicle is moving, the display area on which the guidance information is projected is changed from the driver side window, to the driver side windshield the driver window, the windshield is mapped to the first display area, and the driver side window is mapped to the second display area). Regarding claim 8, Rober, Sachdev, and Rakshit teach all the features with respect to claim 1 as outlined above. Further, Sachdev teaches that the wearable electronic device of claim 1, wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to display information related to an external environment on the second area, based on movement of the external display which is displaying the first content (See Sachdev: Figs. 5-6, and [0049], “As examples of actual objects that can be viewed by way of the image 412 displayed on the ceiling 304 [please show ceiling like in FIG. 4] and A pillar (1), FIG. 6 shows a person 4 and a traffic light 5, on the respective surfaces of the vehicle 300, whom and which were not readily discernable without the virtual features of the present technology, as can be seen by previous FIG. 5”. Note that the person and traffic light are mapped to the information related to the external environment, and the appearance of the persona and the traffic light is related to the vehicle position which is also related to the movement of the vehicle and the windows of the vehicle, the external objects such as person or traffic light are dynamically changed as the vehicle is driving down the road). Regarding claim 9, Rober, Sachdev, and Rakshit teach all the features with respect to claim 8 as outlined above. Further, Rakshit teaches that the wearable electronic device of claim 8, further comprising at least one sensor, wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to display virtual information related to the external environment on the second area, based on detection of a user's gaze with respect to the second area through the at least one sensor (See Rakshit: Figs. 3-4, and 11, and [0107], “Step 1100 can be implemented using currently available eye tracking processes. The point of gaze can be correlated to a location on the windshield. That location can be compared to the location of the visual indicator to determine whether the driver is focused in the visual indicator. 
This process can be implemented using optical eye tracking processes that measure eye motion with a camera or some other optical sensor”; [0108], “If the driver has focused on the visual indicator for more than threshold amount of time, sound sources controlled by the automotive computer in the cabin of the automobile are reduced (step 1102). Next, a determination is made as to whether the sound is from an emergency response vehicle (step 1104). If the sound is from an emergency response vehicle, the process plays the sound for the emergency response vehicle within the cabin of the vehicle (step 1106). The process terminates thereafter”; and [0072], “Depending on the interior and exterior noise, driver 308 may not notice the presence of ambulance 310. As depicted, automotive computer 312 displays ambulance icon 314 on driver window 306 using heads a display 300 to provide an augmented reality display to driver 308. Ambulance icon 314 is an example of an implementation for visual indicator 234 shown in block form in FIG. 2”. Note that gaze time is detected, and the ambulance icon is displayed on the driver side window, the second display area, and this is mapped to this claim cited limitation). Regarding claim 10, Rober, Sachdev, and Rakshit teach all the features with respect to claim 1 as outlined above. Further, Sachdev teaches that the wearable electronic device of claim 1, wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to control brightness of the virtual screen of the second area, based on brightness of the first area (See Sachdev: Fig. 10, and [0119], “As provided above, the controller 1000, and more particularly the processor 1002 executing the instructions 1010 could be configured to perform acts including: imaging device control, signal processing, data storage, image processing--e.g., cropping, adjusting size, skewing, brightness, and colors, and the like”). Regarding claim 12, Rober, Sachdev, and Rakshit teach all the features with respect to claim 1 as outlined above. Further, Rober, Sachdev, and Rakshit teach that a method for controlling a wearable electronic device, the method (See Rober: Figs. 6-7, and [0051], “FIG. 6 illustrates a VR system in a vehicle, according to some embodiments. FIG. 6 shows a VR system as shown in FIG. 1 that includes HMDs 692; however, note that a similar configuration may implement a VR system as shown in FIG. 2. As shown in FIG. 6, a vehicle 600 (which may be, but is not necessarily, an autonomous vehicle) may include a VR controller 610, vehicle systems 626 (e.g., vehicle control systems such as throttle, braking, steering, and active suspension systems, as well as navigation, HVAC and AV systems), internal and external sensors (e.g., LiDAR for depth mapping, video cameras for internal or external views, IMUs, localization systems, etc.). The vehicle 600 may include one or more seats for passengers 690. In this example, a forward-facing and rear-facing seat are used for illustrative purposes. Passenger 690A may be sitting in a rear-facing seat, while passenger 690B may be sitting in a forward-facing seat. Passengers 690A and 690B are wearing respective HMDs 692A and 692B”) comprising: identifying a displayable area corresponding to an external display included in an image of a space acquired through a camera of the wearable electronic (See Rober: Figs. 6-7, and [0051], “FIG. 6 illustrates a VR system in a vehicle, according to some embodiments. FIG. 
6 shows a VR system as shown in FIG. 1 that includes HMDs 692; however, note that a similar configuration may implement a VR system as shown in FIG. 2. As shown in FIG. 6, a vehicle 600 (which may be, but is not necessarily, an autonomous vehicle) may include a VR controller 610, vehicle systems 626 (e.g., vehicle control systems such as throttle, braking, steering, and active suspension systems, as well as navigation, HVAC and AV systems), internal and external sensors (e.g., LiDAR for depth mapping, video cameras for internal or external views, IMUs, localization systems, etc.). The vehicle 600 may include one or more seats for passengers 690. In this example, a forward-facing and rear-facing seat are used for illustrative purposes. Passenger 690A may be sitting in a rear-facing seat, while passenger 690B may be sitting in a forward-facing seat. Passengers 690A and 690B are wearing respective HMDs 692A and 692B”; and Fig. 2, and [0045], “FIG. 2 illustrates a VR system that projects VR content to a window of a vehicle for viewing by passengers, according to some embodiments. In these embodiments, a VR system 200 in a vehicle includes a VR controller 210 (e.g., mounted under the dash) and a projector 220 system configured to project virtual content onto a window 208 (e.g., the windshield of the vehicle)”. Note that the display of the computers and the vehicle windows on with image projected are mapped to the displayable area, and the windshield may be first area of the displayable area.); identifying, based on at least a portion of the external display being excluded from the displayable area due to movement of the external display (See Rakshit: Figs. 2-5, and [0072], “Depending on the interior and exterior noise, driver 308 may not notice the presence of ambulance 310. As depicted, automotive computer 312 displays ambulance icon 314 on driver window 306 using heads a display 300 to provide an augmented reality display to driver 308. Ambulance icon 314 is an example of an implementation for visual indicator 234 shown in block form in FIG. 2”; [0079, “In this illustrative example, police officer icon 402 is displayed by heads up display 300 on windshield 304 of automobile 302. Police officer icon 402 is an example of an implementation of visual indicator 234 shown in block form in FIG. 2. This display of police officer icon 402 on the live view seen through windshield 304 provides an augmented reality display that allows driver to focus on the live view without turning focus to a display device located within automobile 302”; and [0084], “In this illustrative example, young people icon 506 to is displayed in association with young people 500 on windshield 304 by a heads up display 300. Young people icon 506 is an example of an implementation of visual indicator 234 shown in block form in FIG. 2. The location of young people icon 506 is selected to indicate the location of young people 500”. Note that as the vehicle is moving, and the external display is moving as well, the display area on which the guidance information is projected is changed from the driver side window, to the driver side windshield or the passenger side windshield, i.e., the driver window (the second display area)is excluded (separated by a pillar) from the windshield (the first display area)), a first area of the displayable area, in which a portion of the external display is disposed (See Rober: Fig. 2, and [0045], “FIG. 
2 illustrates a VR system that projects VR content to a window of a vehicle for viewing by passengers, according to some embodiments. In these embodiments, a VR system 200 in a vehicle includes a VR controller 210 (e.g., mounted under the dash) and a projector 220 system configured to project virtual content onto a window 208 (e.g., the windshield of the vehicle)”. Note that the vehicle windows on with image projected are mapped to the displayable area, and the windshield may be first area of the displayable area), and a second area of the displayable area which corresponds to a remaining area excluding the first area ; (See Sachdev: Figs. 4-7, and [0040], “The image 412 can cover most any portion of the vehicle 300. As shown in FIG. 4, the image 412 can cover any or all of the ceiling 304, a second, or B pillar (2), a third, or C pillar (3), and the rear panel 306. The image 412 can also be generated and projected to cover any trim or other intermediate material, such as material present between windows and pillars and between windows and the ceiling, thereby rendering the image and window views, together, contiguous, as shown in FIG. 4”. Note that the ceiling, pillars, rear panel, etc. are mapped to the second display area of the displayable area while the windshield is mapped to the first display area of the displayable area) and displaying a virtual screen on the second area (See Sachdev: Figs. 4-7, and [0051], “FIG. 7 shows another view of the interior of the vehicle 300. In this embodiment, the image 412 forms a virtual sunroof”; and [0052], “As shown in FIG. 7, the image 412 can include various details of an actual sunroof, including simulated sunroof trim 702 and, when the virtual sunroof is "closed," simulation of glass 704--e.g., streak marks and reflection marks at the simulated glass material that would be due to sun light, building lights, street lights, etc.”. Note that the virtual sunroof is mapped to the virtual screen, and the image details 412 of the virtual sunroof is mapped to control the display to display a virtual screen on the second area). Regarding claim 13, Rober, Sachdev, and Rakshit teach all the features with respect to claim 12 as outlined above. Further, Rober and Rakshit teach that the method of claim 12, wherein the identifying of the first area and the second area corresponding to the remaining area excluding the first area among the displayable area comprising: identifying the movement of the external display based on at least one of sensor data acquired through at least one sensor of the wearable electronic device or the image acquired through the camera (See Rober: Fig. 7, and [0055], “”In some embodiments, a VR projection device 720 may also include an IMU 728 for detecting motion and orientation of the VR projection device 720 (e.g., HMD). In some embodiments, a VR projection device 720 may include or couple to personal audio output devices 725 such as headphones or earbuds. If the VR projection device 720 is a HMD, audio output 725 may be integrated in the HMD”; and [0058], “As shown in FIG. 7, VR controller 710 may receive various inputs (e.g., localization, acceleration, braking, steering, motion, orientation direction, video, depth maps, etc.) from vehicle internal and external sensors and control systems 702. Vehicle internal and external sensors may include, but are not limited to, depth cameras (e.g., LiDAR), video cameras, inertial-measurement units (IMUs). 
Vehicle control systems may include, but are not limited to throttle control, braking, steering, navigation, and active suspension systems. VR controller 710 may also obtain inputs (e.g., video and/or audio) from one or more vehicle AV systems 706”. Note that various sensors are used to detect position, motion, etc., and the motion of the vehicles Is mapped to the movement of the external displays, i.e., the vehicle windows, sunroof, rear panels, etc.); and identifying the first area and the second area based on the movement of the external display (See Rakshit: Figs. 2-5, and [0072], “Depending on the interior and exterior noise, driver 308 may not notice the presence of ambulance 310. As depicted, automotive computer 312 displays ambulance icon 314 on driver window 306 using heads a display 300 to provide an augmented reality display to driver 308. Ambulance icon 314 is an example of an implementation for visual indicator 234 shown in block form in FIG. 2”; and [0079], “In this illustrative example, police officer icon 402 is displayed by heads up display 300 on windshield 304 of automobile 302. Police officer icon 402 is an example of an implementation of visual indicator 234 shown in block form in FIG. 2. This display of police officer icon 402 on the live view seen through windshield 304 provides an augmented reality display that allows driver to focus on the live view without turning focus to a display device located within automobile 302”. Note that as the vehicle is moving, the display area on which the guidance information is projected is changed from the driver side window, to the driver side windshield the driver window, the windshield is mapped to the first display area, and the driver side window is mapped to the second display area). Regarding claim 14, Rober, Sachdev, and Rakshit teach all the features with respect to claim 12 as outlined above. Further, Rober and Rakshit teach that the method of claim 12, wherein the identifying of the first area and the second area corresponding to the remaining area excluding the first area among the displayable area comprising: receiving information related to the movement of the external display from an external electronic device which is configured to control the external display (See Rober: Figs. 1 and 7, and [0044], “The passenger 190 may wear the HMD 112 while, for example, working on a user device 192 (e.g., a notebook or laptop computer). Controller 110 and HMD 112 may be communicatively coupled via a wired (e.g., the user may plug the HMD 112 into a port (e.g., USB port) on the seat or console) or wireless (e.g., Bluetooth) connection. Controller 110 and user device 192 may also be communicatively coupled via a wired (e.g., the user may plug the user device 192 into a port (e.g., USB port) on the seat or console) or wireless (e.g., Bluetooth) connection”; and [0055], “In some embodiments, a VR projection device 720 may also include memory 723 that may, for example, store program instructions executable by processor(s) 722 to perform the functionalities of the VR projection device 720 to connect to, communicate with, and process inputs from VR controller 710, as well as data that may be used by the program instructions. In some embodiments, a VR projection device 720 may also include an IMU 728 for detecting motion and orientation of the VR projection device 720 (e.g., HMD). In some embodiments, a VR projection device 720 may include or couple to personal audio output devices 725 such as headphones or earbuds. 
If the VR projection device 720 is a HMD, audio output 725 may be integrated in the HMD”. Note that the HMD and controller is connected to the laptop, and the IMU senses the motion of the vehicle and HMD, and this is mapped to the current cited limitation of receiving movement information of the external display, the vehicle windows); and identifying the first area and the second area based on the information related to the movement of the external display (See Rakshit: Figs. 2-5, and [0072], “Depending on the interior and exterior noise, driver 308 may not notice the presence of ambulance 310. As depicted, automotive computer 312 displays ambulance icon 314 on driver window 306 using heads a display 300 to provide an augmented reality display to driver 308. Ambulance icon 314 is an example of an implementation for visual indicator 234 shown in block form in FIG. 2”; and [0079], “In this illustrative example, police officer icon 402 is displayed by heads up display 300 on windshield 304 of automobile 302. Police officer icon 402 is an example of an implementation of visual indicator 234 shown in block form in FIG. 2. This display of police officer icon 402 on the live view seen through windshield 304 provides an augmented reality display that allows driver to focus on the live view without turning focus to a display device located within automobile 302”. Note that as the vehicle is moving, the display area on which the guidance information is projected is changed from the driver side window, to the driver side windshield the driver window, the windshield is mapped to the first display area, and the driver side window is mapped to the second display area). Regarding claim 18, Rober, Sachdev, and Rakshit teach all the features with respect to claim 12 as outlined above. Further, Sachdev teaches that the method of claim 12, wherein the displaying of the virtual screen on the second area comprising: displaying information related to an external environment on the second area, based on movement of the external display which is displaying the first content (See Sachdev: Figs. 5-6, and [0049], “As examples of actual objects that can be viewed by way of the image 412 displayed on the ceiling 304 [please show ceiling like in FIG. 4] and A pillar (1), FIG. 6 shows a person 4 and a traffic light 5, on the respective surfaces of the vehicle 300, whom and which were not readily discernable without the virtual features of the present technology, as can be seen by previous FIG. 5”. Note that the person and traffic light are mapped to the information related to the external environment, and the appearance of the persona and the traffic light is related to the vehicle position which is also related to the movement of the vehicle and the windows of the vehicle, the external objects such as person or traffic light are dynamically changed as the vehicle is driving down the road). Regarding claim 19, Rober, Sachdev, and Rakshit teach all the features with respect to claim 18 as outlined above. Further, Rakshit teaches that the method of claim 18, further comprising displaying virtual information related to the external environment on the second area, based on detection of a user's gaze with respect to the second area through at least one sensor of the wearable electronic device (See Rakshit: Figs. 3-4, and 11, and [0107], “Step 1100 can be implemented using currently available eye tracking processes. The point of gaze can be correlated to a location on the windshield. 
That location can be compared to the location of the visual indicator to determine whether the driver is focused in the visual indicator. This process can be implemented using optical eye tracking processes that measure eye motion with a camera or some other optical sensor”; [0108], “If the driver has focused on the visual indicator for more than threshold amount of time, sound sources controlled by the automotive computer in the cabin of the automobile are reduced (step 1102). Next, a determination is made as to whether the sound is from an emergency response vehicle (step 1104). If the sound is from an emergency response vehicle, the process plays the sound for the emergency response vehicle within the cabin of the vehicle (step 1106). The process terminates thereafter”; and [0072], “Depending on the interior and exterior noise, driver 308 may not notice the presence of ambulance 310. As depicted, automotive computer 312 displays ambulance icon 314 on driver window 306 using heads a display 300 to provide an augmented reality display to driver 308. Ambulance icon 314 is an example of an implementation for visual indicator 234 shown in block form in FIG. 2”. Note that gaze time is detected, and the ambulance icon is displayed on the driver side window, the second display area, and this is mapped to this claim cited limitation). Regarding claim 20, Rober, Sachdev, and Rakshit teach all the features with respect to claim 1 as outlined above. Further, Rober, Sachdev, and Rakshit teach that a non-transitory computer readable recording medium storing at least one program, wherein the at least one program stores instructions in the medium configured to cause a wearable electronic device (See Rober: Figs. 6-7, and [0051], “FIG. 6 illustrates a VR system in a vehicle, according to some embodiments. FIG. 6 shows a VR system as shown in FIG. 1 that includes HMDs 692; however, note that a similar configuration may implement a VR system as shown in FIG. 2. As shown in FIG. 6, a vehicle 600 (which may be, but is not necessarily, an autonomous vehicle) may include a VR controller 610, vehicle systems 626 (e.g., vehicle control systems such as throttle, braking, steering, and active suspension systems, as well as navigation, HVAC and AV systems), internal and external sensors (e.g., LiDAR for depth mapping, video cameras for internal or external views, IMUs, localization systems, etc.). The vehicle 600 may include one or more seats for passengers 690. In this example, a forward-facing and rear-facing seat are used for illustrative purposes. Passenger 690A may be sitting in a rear-facing seat, while passenger 690B may be sitting in a forward-facing seat. Passengers 690A and 690B are wearing respective HMDs 692A and 692B”; and [0054], “As shown in FIG. 7, a vehicle 700 (which may be, but is not necessarily, an autonomous vehicle) may include a VR controller 710. VR controller 710 may include one or more processors 712. Processor(s) 712 may include one or more of various types of processors, CPUs, image signal processors (ISPs), graphics processing units (GPUs), coder/decoders (codecs), memory, and/or other components for processing inputs from various sources to generate VR content and other output signals. 
VR controller 710 may also include memory 713 that may, for example, store program instructions executable by processor(s) 712 to perform the functionalities of the VR controller 710 to process inputs from various sources and generate VR content and other output signals, as well as data that may be used by the program instructions. VR controller 710 may also include interfaces 714 to various vehicle systems, external sources 790, VR projection device(s) 720, and passenger's user device(s) 792. The interfaces 714 may include wired and/or wireless connections to the various components”) to: identify a displayable area corresponding to an external display included in an image of a space through a camera of the wearable electronic (See Rober: Figs. 6-7, and [0051], “FIG. 6 illustrates a VR system in a vehicle, according to some embodiments. FIG. 6 shows a VR system as shown in FIG. 1 that includes HMDs 692; however, note that a similar configuration may implement a VR system as shown in FIG. 2. As shown in FIG. 6, a vehicle 600 (which may be, but is not necessarily, an autonomous vehicle) may include a VR controller 610, vehicle systems 626 (e.g., vehicle control systems such as throttle, braking, steering, and active suspension systems, as well as navigation, HVAC and AV systems), internal and external sensors (e.g., LiDAR for depth mapping, video cameras for internal or external views, IMUs, localization systems, etc.). The vehicle 600 may include one or more seats for passengers 690. In this example, a forward-facing and rear-facing seat are used for illustrative purposes. Passenger 690A may be sitting in a rear-facing seat, while passenger 690B may be sitting in a forward-facing seat. Passengers 690A and 690B are wearing respective HMDs 692A and 692B”; and Fig. 2, and [0045], “FIG. 2 illustrates a VR system that projects VR content to a window of a vehicle for viewing by passengers, according to some embodiments. In these embodiments, a VR system 200 in a vehicle includes a VR controller 210 (e.g., mounted under the dash) and a projector 220 system configured to project virtual content onto a window 208 (e.g., the windshield of the vehicle)”. Note that the display of the computers and the vehicle windows on with image projected are mapped to the displayable area, and the windshield may be first area of the displayable area.); identify, based on at least a portion of the external display being excluded from the displayable area due to movement of the external display (See Rakshit: Figs. 2-5, and [0072], “Depending on the interior and exterior noise, driver 308 may not notice the presence of ambulance 310. As depicted, automotive computer 312 displays ambulance icon 314 on driver window 306 using heads a display 300 to provide an augmented reality display to driver 308. Ambulance icon 314 is an example of an implementation for visual indicator 234 shown in block form in FIG. 2”; [0079, “In this illustrative example, police officer icon 402 is displayed by heads up display 300 on windshield 304 of automobile 302. Police officer icon 402 is an example of an implementation of visual indicator 234 shown in block form in FIG. 2. 
This display of police officer icon 402 on the live view seen through windshield 304 provides an augmented reality display that allows driver to focus on the live view without turning focus to a display device located within automobile 302”; and [0084], “In this illustrative example, young people icon 506 to is displayed in association with young people 500 on windshield 304 by a heads up display 300. Young people icon 506 is an example of an implementation of visual indicator 234 shown in block form in FIG. 2. The location of young people icon 506 is selected to indicate the location of young people 500”. Note that as the vehicle is moving, and the external display is moving as well, the display area on which the guidance information is projected is changed from the driver side window, to the driver side windshield or the passenger side windshield, i.e., the driver window (the second display area)is excluded (separated by a pillar) from the windshield (the first display area)), a first area of the displayable area, in which a portion of the external display is disposed (See Rober: Fig. 2, and [0045], “FIG. 2 illustrates a VR system that projects VR content to a window of a vehicle for viewing by passengers, according to some embodiments. In these embodiments, a VR system 200 in a vehicle includes a VR controller 210 (e.g., mounted under the dash) and a projector 220 system configured to project virtual content onto a window 208 (e.g., the windshield of the vehicle)”. Note that the vehicle windows on with image projected are mapped to the displayable area, and the windshield may be first area of the displayable area) and a second area of the displayable area which corresponds to a remaining area excluding the first area (See Sachdev: Figs. 4-7, and [0040], “The image 412 can cover most any portion of the vehicle 300. As shown in FIG. 4, the image 412 can cover any or all of the ceiling 304, a second, or B pillar (2), a third, or C pillar (3), and the rear panel 306. The image 412 can also be generated and projected to cover any trim or other intermediate material, such as material present between windows and pillars and between windows and the ceiling, thereby rendering the image and window views, together, contiguous, as shown in FIG. 4”. Note that the ceiling, pillars, rear panel, etc. are mapped to the second display area of the displayable area while the windshield is mapped to the first display area of the displayable area); and control a display of the electronic device to display a virtual screen on the second area (See Sachdev: Figs. 4-7, and [0051], “FIG. 7 shows another view of the interior of the vehicle 300. In this embodiment, the image 412 forms a virtual sunroof”; and [0052], “As shown in FIG. 7, the image 412 can include various details of an actual sunroof, including simulated sunroof trim 702 and, when the virtual sunroof is "closed," simulation of glass 704--e.g., streak marks and reflection marks at the simulated glass material that would be due to sun light, building lights, street lights, etc.”. Note that the virtual sunroof is mapped to the virtual screen, and the image details 412 of the virtual sunroof is mapped to control the display to display a virtual screen on the second area). Claims 4-7, 11, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Rober, etc. (US 20180089900 A1) in view of Sachdev, etc. (US 20130235351 A1), further in view of Rakshit, etc. (US 20200126276 A1), and Lee, etc. (US 20230228588 A1). 
Regarding claim 4, Rober, Sachdev, and Rakshit teach all the features with respect to claim 3 as outlined above. However, Rober, modified by Sachdev and Rakshit, fails to explicitly disclose that the wearable electronic device of claim 3, wherein the external display corresponds to at least one of a window of a vehicle, a sunroof, a window, or a rollable display, and the external electronic device is at least one of a control device of the vehicle, comprising processing circuitry, configured to control the window of the vehicle or the sunroof, an external electronic device comprising processing circuitry configured to control the window, or an external electronic device comprising processing circuitry configured to control the rollable display. However, Lee teaches that the wearable electronic device of claim 3, wherein the external display corresponds to at least one of a window of a vehicle, a sunroof, a window, or a rollable display, and the external electronic device is at least one of a control device of the vehicle, comprising processing circuitry, configured to control the window of the vehicle or the sunroof, an external electronic device comprising processing circuitry configured to control the window, or an external electronic device comprising processing circuitry configured to control the rollable display (See Lee: Fig. 15, and [0616], “For example, as illustrated in FIG. 15, when a screen size (display region size) of the rollable display 2510 is changed by external manipulation, the processor 830 may receive screen size change information from the rollable display 2510”; and [0167], “The communication apparatus 400 is an apparatus for performing communication with an external device. Here, the external device may be another vehicle, a mobile terminal or a server”. Note that the external device may be another vehicle, a server or a mobile terminal, and it is used to control the vehicle, including the control of the rollable display, and that is mapped to this claimed limitation of “control the rollable display”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filling date of the claimed invention was effectively filed to modify Rober to have the wearable electronic device of claim 3, wherein the external display corresponds to at least one of a window of a vehicle, a sunroof, a window, or a rollable display, and the external electronic device is at least one of a control device of the vehicle, comprising processing circuitry, configured to control the window of the vehicle or the sunroof, an external electronic device comprising processing circuitry configured to control the window, or an external electronic device comprising processing circuitry configured to control the rollable display as taught by Lee in order to provide improved image quality to the driver by a flexible display (See Lee: Fig.29, and [0789], “The flexible display 2400 may be easily bent or curved, and according to an embodiment, the back plate 2530 in close contact with the flexible display 2400 may be provided on a rear surface of the flexible display 2400, thereby securing a plane of the flexible display 2400 that is not bent or curved the back plate 2530 due to the back plate 2530 supporting the flexible display 2400. Accordingly, the flexible display 2400 may provide an image of improved quality to a driver or the like”). 
Rober teaches a method and system that may generate virtual and mixed reality content, and project the VR content to the windshield of the vehicle for the user in the vehicle; while Lee teaches a system and method that may provide a rollable display for the vehicle with various control mode including using an external device to control various components of the vehicle. Therefore, it is obvious to one of ordinary skill in the art to modify Rober by Lee to control the vehicle components using an external control device. The motivation to modify Rober by Lee is “Use of known technique to improve similar devices (methods, or products) in the same way”. Regarding claim 5, Rober, Sachdev, Rakshit, and Lee teach all the features with respect to claim 4 as outlined above. Further, Rober teaches that the wearable electronic device of claim 4, wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to: identify that the wearable electronic device is located in the vehicle through the communication module (See Rober: Fig. 1, and [0038], “In some embodiments, virtual views of real or fictional people may be integrated into the virtual experience provided by the VR system. For example, a virtual representation of an author or talk show host may appear to be sitting in the seat next to the passenger; the virtual author may be reading one of their books to the passenger, or the virtual talk show host may be hosting their show from the seat next to the passenger, with their voices provided through the audio system. As another example, the passenger may experience riding on a flatbed truck with a band playing a gig on the flatbed, with the band's music provide through the audio system”); receive 3D data of the vehicle from the control device; and identify the displayable area by further using the 3D data (See Rober: Fig. 1, and [0043], “FIG. 1 illustrates a virtual reality (VR) system including a head mounted device (HMD) that may be used by passengers in vehicles, according to some embodiments. In these embodiments, a VR system 100 in a vehicle includes a VR controller 110 (e.g., mounted under the dash) and a VR headset (HMD 112). HMD 112 may implement any of various types of virtual reality projection technologies. For example, HMD 112 may be a near-eye VR system that projects left and right images on screens in front of the user 190's eyes that are viewed by the passenger 190, such as DLP (digital light processing), LCD (liquid crystal display) and LCoS (liquid crystal on silicon) technology VR systems. As another example, HMD 112 may be a direct retinal projector system that scans left and right images, pixel by pixel, to the passenger 190's eyes. To scan the images, left and right projectors generate beams that are directed to left and right reflective components (e.g., ellipsoid mirrors, or holographic combiners) located in front of the user 190's eyes; the reflective components direct the beams to the user's eyes. To create a three-dimensional (3D) effect, virtual content 116 at different depths or distances in the 3D virtual view 114 are shifted left or right in the two images as a function of the triangulation of distance, with nearer objects shifted more than more distant object”). Regarding claim 6, Rober, Sachdev, and Rakshit teach all the features with respect to claim 1 as outlined above. 
Further, Lee teaches that the wearable electronic device of claim 1, further comprising a communication module comprising communication circuitry, wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to: receive a first content displayed on the external display from the external electronic device which is configured to control the external display and connected to the wearable electronic device through at least the communication module (See Lee: Figs. 23-27, and [0030], “In an embodiment, while the rollable display displays map information in a display region having a first size, the processor may control the rollable display to be enlarged to a second size larger than the first size when an event occurs at a point out of the map information displayed in the display region having the first size”; and [0674], “As illustrated in (a) of FIG. 23, while the rollable display displays map information in a display region having a first size, an event may occur at a point out of the map information displayed in the display region having the first size”. Note that the primary art Rober has the wearable device for displaying content on the window/windshield/screen, and the event that occurred in the map displayed in the first area triggers the screen size enlarging control operation to change the display to a second size; the enlarged portion of the display area is mapped to the second display area); and display, on the second area, a portion corresponding to the second area among the received first content (See Lee: Fig. 23, and [0682], “Under the control of the processor 830 of the route provision apparatus, the display region of the rollable display 2510 may be extended as illustrated in (b) of FIG. 23”; and [0683], “The processor 830 may receive map information corresponding to the extended region from the server (or memory) to display the received map information, and display information 2300 corresponding to the event in the extended region”. Note that the map 1710 and the event 2300 are displayed on the extended portion of the display, and this is mapped to display, on the second area, a portion corresponding to the second area among the received first content).
Regarding claim 7, Rober, Sachdev, and Rakshit teach all the features with respect to claim 1 as outlined above. Further, Lee teaches that the wearable electronic device of claim 1, further comprising a communication module comprising communication circuitry, wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to: receive a first content displayed on the external display from the external electronic device which is configured to control the external display and connected to the wearable electronic device through at least the communication module (See Lee: Figs. 23-27, and [0030], “In an embodiment, while the rollable display displays map information in a display region having a first size, the processor may control the rollable display to be enlarged to a second size larger than the first size when an event occurs at a point out of the map information displayed in the display region having the first size”; and [0685], “In this case, since an event has occurred at the rear of the vehicle, the processor 830 may extend the rollable display in a downward direction, display map information corresponding to the rear of the vehicle in the extended region 1700 so as to display event information occurred in the rear of the vehicle in the extended region, and reflect and display the event information on map information corresponding to the rear of the vehicle as illustrated in (c) of FIG. 24”. Note that the primary art Rober has the wearable device for displaying content on the window/windshield/screen, and the event that occurred in the map displayed in the first area triggers the screen size enlarging control operation to change the display to a second size; the enlarged portion of the display area is mapped to the second display area); and display the received first content on the displayable area (See Lee: Fig. 24, and [0686], “For another example, even when an event occurs from the rear of the vehicle, the processor 830 may enlarge the rollable display in a preset direction (upper end), move the map information that has been previously displayed to be included in the extended region, and display map information and event information corresponding to the rear of the vehicle in a lower portion that becomes empty as a result of the movement”. Note that the shifting of the map previously displayed in the first area is mapped to the first content displayed in the second displayable area).
Regarding claim 11, Rober, Sachdev, and Rakshit teach all the features with respect to claim 1 as outlined above. Further, Sachdev and Lee teach that the wearable electronic device of claim 1, wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to display, as a virtual screen (See Sachdev: Fig. 4, and [0035], “A virtual convertible effect is formed by presenting to vehicle occupants an image 412 in, on, and/or in front of one or more parts of the interior 302 of the vehicle 300, including the ceiling 304. An example image 412 is shown in FIG. 4. The image 412 may be formed in a variety of ways without departing from the scope of the present technology. As will be described below in further detail, ways of creating the image 412 include projection onto a surface, displaying from a screen, and holography--e.g., displaying a holographic image in front of the surface”. Note that the virtual sunroof on which the image 412 is projected is mapped to the virtual screen), another content different from a content previously displayed on the external display on at least a partial area including the first area among the displayable area, based on a size of the first area being less than the configured size (See Lee: Fig. 23, and [0700], “For another example, as illustrated in (b) of FIG. 27, when the vehicle is performing platooning (when the vehicle is platooning regardless of whether it is a preceding vehicle or a following vehicle of the platooning), the processor 830 may control the rollable display 2510 to be enlarged to have a second size larger than the first size”; and [0702], “Furthermore, as illustrated in (b) of FIG. 27, the processor 830 may further display a graphic object (notification information) 2600 indicating that platooning is in progress on the rollable display 2510”. Note that the platooning indicator displayed on the enlarged display area and overlaid on the first content, i.e., the original map displayed in the first display area, is mapped to the claimed limitation of “another content different from a content previously displayed on the external display on at least a partial area including the first area among the displayable area”).
Regarding claim 15, Rober, Sachdev, and Rakshit teach all the features with respect to claim 14 as outlined above. Further, Lee teaches that the method of claim 14, wherein the external display corresponds to at least one of a window of a vehicle, a sunroof of a vehicle, a window, or a rollable display, and the external electronic device is at least one of a control device of the vehicle configured to control the window of a vehicle and/or the sunroof of the vehicle, an external electronic device configured to control the window, or an external electronic device configured to control the rollable display (See Lee: Fig. 15, and [0616], “For example, as illustrated in FIG. 15, when a screen size (display region size) of the rollable display 2510 is changed by external manipulation, the processor 830 may receive screen size change information from the rollable display 2510”; and [0167], “The communication apparatus 400 is an apparatus for performing communication with an external device. Here, the external device may be another vehicle, a mobile terminal or a server”. Note that the external device may be another vehicle, a server or a mobile terminal, and it is used to control the vehicle, including the control of the rollable display, and that is mapped to this claimed limitation of “control the rollable display”).
Regarding claim 16, Rober, Sachdev, Rakshit, and Lee teach all the features with respect to claim 15 as outlined above. Further, Rober teaches that the method of claim 15, wherein the identifying of the displayable area comprising: identifying that the wearable electronic device is located in the vehicle through a communication module, comprising communication circuitry, of the wearable electronic device (See Rober: Fig. 1, and [0038], “In some embodiments, virtual views of real or fictional people may be integrated into the virtual experience provided by the VR system. For example, a virtual representation of an author or talk show host may appear to be sitting in the seat next to the passenger; the virtual author may be reading one of their books to the passenger, or the virtual talk show host may be hosting their show from the seat next to the passenger, with their voices provided through the audio system. As another example, the passenger may experience riding on a flatbed truck with a band playing a gig on the flatbed, with the band's music provide through the audio system”); receiving 3D data of the vehicle from the control device; and identifying the displayable area by further using the 3D data (See Rober: Fig. 1, and [0043], “FIG. 1 illustrates a virtual reality (VR) system including a head mounted device (HMD) that may be used by passengers in vehicles, according to some embodiments. In these embodiments, a VR system 100 in a vehicle includes a VR controller 110 (e.g., mounted under the dash) and a VR headset (HMD 112). HMD 112 may implement any of various types of virtual reality projection technologies. For example, HMD 112 may be a near-eye VR system that projects left and right images on screens in front of the user 190's eyes that are viewed by the passenger 190, such as DLP (digital light processing), LCD (liquid crystal display) and LCoS (liquid crystal on silicon) technology VR systems. As another example, HMD 112 may be a direct retinal projector system that scans left and right images, pixel by pixel, to the passenger 190's eyes. To scan the images, left and right projectors generate beams that are directed to left and right reflective components (e.g., ellipsoid mirrors, or holographic combiners) located in front of the user 190's eyes; the reflective components direct the beams to the user's eyes. To create a three-dimensional (3D) effect, virtual content 116 at different depths or distances in the 3D virtual view 114 are shifted left or right in the two images as a function of the triangulation of distance, with nearer objects shifted more than more distant object”).
Regarding claim 17, Rober, Sachdev, and Rakshit teach all the features with respect to claim 12 as outlined above. Further, Lee teaches that the method of claim 12, wherein the displaying of the virtual screen on the second area comprising: receiving a first content being displayed on the external display from the external electronic device which is configured to control the external display (See Lee: Figs. 23-27, and [0030], “In an embodiment, while the rollable display displays map information in a display region having a first size, the processor may control the rollable display to be enlarged to a second size larger than the first size when an event occurs at a point out of the map information displayed in the display region having the first size”; and [0674], “As illustrated in (a) of FIG. 23, while the rollable display displays map information in a display region having a first size, an event may occur at a point out of the map information displayed in the display region having the first size”. Note that the primary art Rober has the wearable device for displaying content on the window/windshield/screen, and the event that occurred in the map displayed in the first area triggers the screen size enlarging control operation to change the display to a second size; the enlarged portion of the display area is mapped to the second display area); and displaying, on the second area, a portion corresponding to the second area among the received first content (See Lee: Fig. 23, and [0682], “Under the control of the processor 830 of the route provision apparatus, the display region of the rollable display 2510 may be extended as illustrated in (b) of FIG. 23”; and [0683], “The processor 830 may receive map information corresponding to the extended region from the server (or memory) to display the received map information, and display information 2300 corresponding to the event in the extended region”. Note that the map 1710 and the event 2300 are displayed on the extended portion of the display, and this is mapped to display, on the second area, a portion corresponding to the second area among the received first content).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GORDON G LIU whose telephone number is (571)270-0382. The examiner can normally be reached Monday - Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona E Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GORDON G LIU/
Primary Examiner, Art Unit 2618

Prosecution Timeline

Jul 26, 2024
Application Filed
Feb 06, 2026
Non-Final Rejection — §103
Apr 13, 2026
Examiner Interview Summary
Apr 13, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602846
GENERATING REALISTIC MACHINE LEARNING-BASED PRODUCT IMAGES FOR ONLINE CATALOGS
2y 5m to grant • Granted Apr 14, 2026
Patent 12602840
IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant • Granted Apr 14, 2026
Patent 12602871
MESH TOPOLOGY GENERATION USING PARALLEL PROCESSING
2y 5m to grant • Granted Apr 14, 2026
Patent 12592022
INTEGRATION CACHE FOR THREE-DIMENSIONAL (3D) RECONSTRUCTION
2y 5m to grant • Granted Mar 31, 2026
Patent 12586330
DISPLAYING A VIRTUAL OBJECT IN A REAL-LIFE SCENE
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
98%
With Interview (+15.1%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 673 resolved cases by this examiner. Grant probability derived from career allow rate.
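
For readers who want to sanity-check the headline projections, the short sketch below reproduces them under the simplest reading of the note above: grant probability is taken to be the examiner's career allow rate (556 grants out of 673 resolved cases), and the with-interview figure simply adds the observed +15.1 percentage-point interview lift, capped at 100%. The function names and the additive-lift assumption are illustrative, not a published methodology.

# Hedged sketch: reproduce the dashboard's headline projections from the
# examiner's career statistics. Assumes grant probability equals the career
# allow rate and that the interview lift is simply additive (capped at 100%).

def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate, expressed as a percentage."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float) -> float:
    """Base grant probability plus interview lift, capped at 100%."""
    return min(base_pct + lift_pct, 100.0)

base = grant_probability(granted=556, resolved=673)   # ~82.6%, shown as 83%
boosted = with_interview(base, lift_pct=15.1)         # ~97.7%, shown as 98%

print(f"Grant probability: {base:.0f}%")     # prints 83%
print(f"With interview:    {boosted:.0f}%")  # prints 98%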
