Prosecution Insights
Last updated: April 19, 2026
Application No. 18/607,240

METHOD FOR ASCERTAINING A CONFIGURATION OF A USER-STATE-DEPENDENT OUTPUT OF INFORMATION FOR A USER OF AN AR DEVICE, AND AR DEVICE

Final Rejection • §101, §103
Filed: Mar 15, 2024
Examiner: AMIN, JWALANT B
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Robert Bosch GmbH
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 79% (above average) • 500 granted / 631 resolved • +17.2% vs TC avg
Interview Lift: +15.3% (strong) • allowance in resolved cases with interview vs. without
Typical Timeline: 2y 9m avg prosecution • 14 currently pending
Career History: 645 total applications across all art units
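The headline figures above are simple ratios over the examiner's resolved docket. A minimal sketch of the arithmetic, using only the numbers shown on this page; the with/without-interview split behind the +15.3% lift is not shown here, so that formula and the sample rates are assumptions about how the tool derives it:

```python
# Career allowance rate and pending count, from the figures shown above.
granted = 500
resolved = 631          # resolved = granted + abandoned
total_applications = 645

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")        # -> 79.2%, shown as 79%

pending = total_applications - resolved
print(f"Currently pending: {pending}")               # -> 14, matching the card above

# Assumed definition of "interview lift": allowance rate among resolved cases
# that had an examiner interview minus the rate among those that did not.
def interview_lift(rate_with: float, rate_without: float) -> float:
    return rate_with - rate_without

# Hypothetical with/without split consistent with the +15.3% figure shown above.
print(f"Lift: {interview_lift(0.87, 0.717):+.1%}")   # -> +15.3%
```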

Statute-Specific Performance

§101: 13.4% (-26.6% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Tech Center average is an estimate • Based on career data from 631 resolved cases
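The "vs TC avg" deltas read as simple percentage-point differences. A short check under that assumption: solving for the baseline in each row gives the same implied Tech Center figure (40.0%), consistent with the note that the Tech Center average is an estimate rather than a per-statute measurement.

```python
# Statute rates for this examiner and their deltas vs. the Tech Center average,
# as listed above. Assumes delta = examiner_rate - tc_average (percentage points).
rows = {
    "§101": (13.4, -26.6),
    "§103": (56.8, +16.8),
    "§102": (7.5, -32.5),
    "§112": (10.8, -29.2),
}

for statute, (rate, delta) in rows.items():
    implied_tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}%  implied TC avg {implied_tc_avg:.1f}%")
# Every row implies a baseline of 40.0%, i.e. a single estimated TC-wide average.
```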

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 15 is objected to because of the following informalities: On lines 1-3, the limitation “the device is a pair of AR glasses or as a head-up display or has a pair of AR glasses or has a head-up display” appears to be incorrect. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3, 8, 11-18, and 24-25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doken (US 2023/0237797), in view of Aoki (US 2023/0204961), and further in view of Momeyer et al. (US 2012/0158520, hereinafter Momeyer).

Regarding claim 1, Doken teaches a method for generating (abstract: an augmented reality scene associated with the potentially hazardous condition associated with the object may be generated for presentation, at a user device), in an augmented reality (AR) device ([0029]: AR head-mounted display (HMD)), state-dependent AR display of a combination of safety-relevant (notification 302, fig. 3A) and non-safety-relevant (buy option 308, fig. 3A) information for a user of the AR device (abstract: In response to determining that the hazardous condition may occur, an augmented reality scene associated with the potentially hazardous condition associated with the object may be generated for presentation, at a user device), the method comprising the following steps: obtaining, with the sensor system ([0029]: an image sensor, ultrasonic sensor, radar sensor, LED sensor, LIDAR sensor, or any other suitable sensor, or any combination thereof, to detect and classify objects in environment), first data (locations of one or more object in environment 100, [0028]) about at least one object in an indirect and/or direct environment of the user (fig.
13 step 1302; [0028]: a smart home management system (SHMS) may be configured to identify locations of one or more objects in environment 100 and determine respective classifications of the identified one or more objects; [0029]: one or more sensors of user device 104 may be used to ascertain a location of an object by outputting a light or radio wave signal, and measuring a time for a return signal to be detected and/or measuring an intensity of the returned signal, and/or performing image processing on images captured by the image sensor of environment 100. In some embodiments, the SHMS may be configured to receive input from user 102 identifying a location and/or classification of a particular object; [0110]: At 1302, the SHMS may identify, using a sensor, a location of an object in an environment. For example, user 102 of FIG. 1 may utilize one or more of user device 104, 105, 106 to scan his or her surroundings in environment 100, e.g., the home of user 102, and capture images of one or more objects in environment 100, which may be used to determine object locations within the environment. In some embodiments, wireless signal characteristics may be used to identify locations of objects in environment 100. The SHMS may generate a 3D map of environment 100 specifying locations of objects and/or locations of users in environment 100. In some embodiments, the user may be requested by the SHMS to scan his or her surroundings in environment 100, e.g., during a home inspection or at any other suitable time); obtaining, with the sensor system ([0036]: gyroscopes, accelerometers, cameras), second data (location/movement/speed/biometric/heart rate data associated with a user of the user device), wherein the second data about the user ([0036]: the SHMS may determine a current location of user 102 within environment 100 … the SHMS may track the movements of the user with, e.g., gyroscopes, accelerometers, cameras, etc., in combination with control circuitry; [0045]: biometric data and/or any suitable user information indicative of a current state of user 102, and/or identify characteristics of the user based on real-time observations via one or more sensors; [0094]: At 1102, the SHMS may collect location data associated with a user device (e.g., a wearable camera or any other suitable device) of a vulnerable human (e.g., child 204 of FIG. 2) in a particular environment (e.g., environment 200 of FIG. 2). 
In some embodiments, the SHMS may continuously track and store locations of the users within the particular environment, and generate a map (e.g., an AR home cloud 3D Map) indicating locations of objects and users within the environment; [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device or the SHMS detects an elevated heart rate (e.g., detected by smart watch or other user device)); ascertaining a safety relevance (determining occurrence of a hazardous condition) of the at least one object to the user based on the first data and/or the second data ([0006]: determining that the hazardous condition may occur, based on the combination, comprises determining that a database stores an indication that the combination of the identified characteristic and the object is indicative that the hazardous condition may occur; [0035]: biometric data or facial recognition techniques may be employed; [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device or the SHMS detects an elevated heart rate (e.g., detected by smart watch or other user device), which may indicate that the user is busy and thus the user's cognitive state may not be sufficiently alert to pay attention to hazards and unsecured conditions. In some embodiments, such changed status may trigger the hazard AR warning system to turn active for the user associated with the changed status; [0096]: At 1104, the SHMS may compare the location of the user device determined at 1102 to location data of a hazardous object. For example, the SHMS may determine whether a user device (e.g., a wearable camera or any other suitable device) of user 204 is proximate to object 208 (e.g., within a predefined threshold, such as one foot, or any other suitable distance) or whether object 208 of FIG. 2 is within a field of view of a user device of user 204; [0112]: the object may be an IoT device that communicates with the SHMS to indicate that a user is in proximity to the object, e.g., based on wireless communication with a nearby user device that is typically carried or worn by a user; [0114]: At 1310, the SHMS may determine whether a hazardous condition may occur, based on a combination of the location of the object, the classification of the object, and the identity of the human in proximity with the object); ascertaining a state of the user (level of distraction) based on the second data ([0006]: the characteristic corresponds to one or more of an age of the human, and a level of distraction of the human; [0045]: the SHMS may reference the user profile of a particular, user which may store demographic data and/or biometric data and/or any suitable user information indicative of a current state of user 102, and/or identify characteristics of the user based on real-time observations via one or more sensors. 
For example, if the SHMS determines that a first user is elderly (or is child) and/or physically unfit (e.g., having mobility issues, hearing or vision impairments, etc.), and/or cognitively in decline, and/or in an angry or stressed-out state, the SHMS may be more likely to determine that a particular scenario poses a larger risk to the first user as compared to a second user who is relatively young and/or physically fit and/or in a good mood, since the first user may be more distracted or disoriented or otherwise less able to avoid the potentially hazardous condition as compared to the second user; [0046]: a distraction level of a user may be based on any suitable factor, e.g., a level of physical exertion for a current activity being performed by the user may be taken into account by the SHMS; [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device or the SHMS detects an elevated heart rate (e.g., detected by smart watch or other user device), which may indicate that the user is busy and thus the user's cognitive state may not be sufficiently alert to pay attention to hazards and unsecured conditions); determining the safety-relevant information based on the least the first data (based on the user’s location in proximity to a potentially hazardous object, the system determines a potential risk and provides an indication or warning related to the potential risk; [0039]: the SHMS may, in response to determining that the potentially hazardous condition may occur, generate for presentation, at a user device (e.g., at least one of user device 104, 105, 106 or any other suitable device or any combination thereof), an indication associated with the potentially hazardous condition that may occur; [0043]: the SHMS may determine that, based on the proximity of television 116 to puddle 120, there is a potential risk of an electrical accident or electrical shock to user 102 if puddle 120 were to expand or otherwise contact television 116. In response to such a determination, the SHMS may provide indication 111 at user device 104 (and/or any other suitable user device) indicating the risk of an electrical accident or shock to user 102; [0045]: the SHMS may assign certain weights to certain vulnerability states, which may be used in determining whether to present an augmented reality scene associated with the potentially hazardous condition, and/or a manner in which to present the scene. For example, the first user in the example above, determined by the SHMS to be more vulnerable, may be presented a more urgent alert than the second user, determined by the SHMS to less vulnerable. 
In some embodiments, a manner of outputting an alert may depend on cognitive and/or physical capabilities/limitations indicated in a user profile, e.g., providing the alert in a language that matches a language understood by a particular user as indicated in the user's profile, providing audio-based or image-based alerts in favor of text if a user has a low literacy level as indicated in the user's profile, etc.; [0046]: if user 102 is determined by the SHMS to be on a telephone call while near an oven that is in operation, or user 102 is determined by the SHMS to be climbing the stairs, the SHMS may determine that user 102 is distracted, and may provide a warning regarding a potentially hazardous condition based on this determination; [0049]: notification 206 may correspond to a camera feed of user 204 in proximity to object 208, and/or an augmented reality scene providing a warning or indication of how a hazardous condition might occur; [0051]: As shown in FIG. 3A, the SHMS may cause user device 105 to generate notification 302 for presentation. Notification 302 may comprise a recommended corrective action, e.g., “Secure loose rugs with double-faced tape or slip-resistant backing,” associated with object 304. Object 304 may be a rug present in an environment of a user, and notification 302 may be provided by the SHMS in response to a user performing a walkthrough of an environment containing object 304 and/or capturing an image or otherwise determining the presence of object 304 and/or monitoring actions of users in the environment, e.g., determining that a user almost slipped on rug 304. In some embodiments, object 304 depicted in notification 302 may match the appearance of the object within the environment and may be augmented with object 306 (e.g., a slip-resistant backing for object 304) that is not currently present in the environment. In some embodiments, notification 302 may be provided in response to determining that the potentially hazardous condition is not proximate to the user or in a field of view of the user (e.g., object 304 is no longer detected in a camera feed). In some embodiments, notification 302 may comprise a buy option 308, which when selected may enable the user to navigate to a website or application that enables the user to purchase object 306 associated with the corrective action; [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device or the SHMS detects an elevated heart rate (e.g., detected by smart watch or other user device), which may indicate that the user is busy and thus the user's cognitive state may not be sufficiently alert to pay attention to hazards and unsecured conditions. In some embodiments, such changed status may trigger the hazard AR warning system to turn active for the user associated with the changed status; [0100]: At 1112, the SHMS may issue a household hazard AR warning to the user device (e.g., a user device of the vulnerable user) in any suitable form, e.g., image, audio, haptic, or any combination thereof. The warning may comprise an augmented reality scene that demonstrates what an accident may look like if it occurred, and/or instructions to avoid interacting with the potentially hazardous object. In some embodiments, if the SHMS, e.g., via computer vision, detects a vulnerable user close to, and a human body part (e.g., hand, arm, leg, etc.) 
next to, the hazardous object within the field-of-view of the user, the SHMS may immediately jump to a hazard AR warning level of a high urgency); and generating (warning regarding a potentially hazardous condition), in the AR device, a display of the safety-relevant and non-safety-relevant information depending on the ascertained safety relevance and the ascertained state of the user ([0045]: the SHMS may assign certain weights to certain vulnerability states, which may be used in determining whether to present an augmented reality scene associated with the potentially hazardous condition, and/or a manner in which to present the scene. For example, the first user in the example above, determined by the SHMS to be more vulnerable, may be presented a more urgent alert than the second user, determined by the SHMS to less vulnerable. In some embodiments, a manner of outputting an alert may depend on cognitive and/or physical capabilities/limitations indicated in a user profile, e.g., providing the alert in a language that matches a language understood by a particular user as indicated in the user's profile, providing audio-based or image-based alerts in favor of text if a user has a low literacy level as indicated in the user's profile, etc.; [0046]: if user 102 is determined by the SHMS to be on a telephone call while near an oven that is in operation, or user 102 is determined by the SHMS to be climbing the stairs, the SHMS may determine that user 102 is distracted, and may provide a warning regarding a potentially hazardous condition based on this determination; [0051]: As shown in FIG. 3A, the SHMS may cause user device 105 to generate notification 302 for presentation. Notification 302 may comprise a recommended corrective action, e.g., “Secure loose rugs with double-faced tape or slip-resistant backing,” associated with object 304. Object 304 may be a rug present in an environment of a user, and notification 302 may be provided by the SHMS in response to a user performing a walkthrough of an environment containing object 304 and/or capturing an image or otherwise determining the presence of object 304 and/or monitoring actions of users in the environment, e.g., determining that a user almost slipped on rug 304. In some embodiments, object 304 depicted in notification 302 may match the appearance of the object within the environment and may be augmented with object 306 (e.g., a slip-resistant backing for object 304) that is not currently present in the environment. In some embodiments, notification 302 may be provided in response to determining that the potentially hazardous condition is not proximate to the user or in a field of view of the user (e.g., object 304 is no longer detected in a camera feed). In some embodiments, notification 302 may comprise a buy option 308, which when selected may enable the user to navigate to a website or application that enables the user to purchase object 306 associated with the corrective action; [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device or the SHMS detects an elevated heart rate (e.g., detected by smart watch or other user device), which may indicate that the user is busy and thus the user's cognitive state may not be sufficiently alert to pay attention to hazards and unsecured conditions. 
In some embodiments, such changed status may trigger the hazard AR warning system to turn active for the user associated with the changed status; [0100]: At 1112, the SHMS may issue a household hazard AR warning to the user device (e.g., a user device of the vulnerable user) in any suitable form, e.g., image, audio, haptic, or any combination thereof. The warning may comprise an augmented reality scene that demonstrates what an accident may look like if it occurred, and/or instructions to avoid interacting with the potentially hazardous object. In some embodiments, if the SHMS, e.g., via computer vision, detects a vulnerable user close to, and a human body part (e.g., hand, arm, leg, etc.) next to, the hazardous object within the field-of-view of the user, the SHMS may immediately jump to a hazard AR warning level of a high urgency). Doken does not explicitly teach the generating includes: (I) selecting a particular balance between the display of the safety-relevant information and the display of the non-safety-relevant information based on a classification of a role of the user as determined from the second data and/or (II) selecting a degree to which non-safety-relevant information is displayed together with safety-relevant information based on (i) a metric of urgency of the safety- relevant information as determined from the first data and/or the second data, and/or (ii) an alertness of the user as determined from the second data. Aoki, in a similar field of endeavor, teaches the generating includes: (I) selecting a particular balance between the display of the safety-relevant information (for a driver, no information is displayed in the display restriction area where the object such as a road, another vehicle, a pedestrian, etc. is present, and generated content specific for the driver (such as messages “watch out for pedestrians”, vehicle speed display content, etc.) and other shared information is displayed outside the display distraction area; [0044]: when a distance between the own vehicle and the pedestrian or the other vehicle detected by the peripheral monitoring sensor included in the in-vehicle sensor group 52 is equal to or less than a predetermined value, the content providing device 58 may generate content such as “watch out for pedestrians!” as content for the driver, for example; [0045]: for example, when the own vehicle is approaching a point for turning at an intersection in a route to a destination on which the own vehicle is traveling, the content providing device 58 may generate content such as “right turn ahead!” as content for the driver, for example. Further, for example, when the own vehicle is to arrive at the destination soon, the content providing device 58 may generate content such as “arriving at destination soon!” as content for the driver, for example; [0052]: In step 108, with respect to the content providing device 58 of the vehicle-side system 50, the control unit 46 requests the content for the driver and the content to be shared to be displayed on the AR glasses 12 of the driver and acquires the information of the content from the content providing device 58; [0054]: As shown in FIG. 7, the display restriction area 74 is set to be an area in which an object (for example, a road, another vehicle, a pedestrian, etc.) that the driver frequently sees during driving is present, within the range of the driver's actual field of view. 
Thus, by setting the display position of the content outside the display restriction area 74, vehicle speed display content 76, building information display content 78, and other vehicle information display content 80 shown in FIG. 7 as an example can be suppressed from being a hinderance of driving by the driver; [0059]: As a result, as shown in FIG. 7, the AR glasses 12 of the driver is in a display state in which the vehicle speed display content 76, the building information display content 78, the other vehicle information display content 80, and the like are displayed as the virtual images) and the display of the non-safety-relevant information (building information display content and other vehicle information display content is functionally analogous to non-safety-relevant information; for a passenger, other information can be displayed in any area along with displaying the objects such as a road, another vehicle, a pedestrian, etc.; [0060]: In step 118, with respect to the content providing device 58 of the vehicle-side system 50, the control unit 46 requests the content for the fellow passenger and the content to be shared to be displayed on the AR glasses 12 of the fellow passenger and acquires the information of the content from the content providing device 58; [0061]: In step 120, the control unit 46 extracts one piece of content information from the content information acquired in step 118. In step 122, the control unit 46 causes the display unit 26 to display the content from which the information has been extracted in step 110 on the AR glasses 12 as the virtual image. In the AR glasses 12 for the fellow passenger, since the wearer (fellow passenger) does not drive the vehicle, the display restriction area 74 is not set in the present embodiment, and the display position of the content is set independently of the display restriction area 74 and the display of the content is not simplified) based on a classification of a role of the user as determined from the second data (the wearer of the AR glasses seated in a vehicle is determined to be a driver or the passenger by determining whether an image region corresponding to a steering wheel, a meter, or the like exists at a predetermined position in the peripheral captured image acquired in step 100; fig. 5 step 106; [0040]: The content providing device 58 generates content to be displayed as a virtual image on the AR glasses 12 based on the information collected from the in-vehicle sensor group 52 and the like. In addition, the content providing device 58 sets the purpose (whether it is for the driver, for the fellow passenger, or for sharing) for the generated content. The content for the driver is the content to be displayed only on the AR glasses 12 worn by the driver. The content for the fellow passenger is the content to be displayed only on the AR glasses 12 worn by the fellow passenger. The content to be shared is content to be displayed on the AR glasses 12 worn by the driver and the AR glasses 12 worn by the fellow passenger; [0041]: To give an example of the content generated by the content providing device 58, the content providing device 58 acquires the vehicle speed from the vehicle speed sensor included in the in-vehicle sensor group 52, and generates the vehicle speed content that displays the acquired vehicle speed as the content for the driver. 
Further, since the vehicle speed is highly important information for the driver, the content providing device 58 sets a relatively high priority as the display priority for the vehicle speed display content; [0042]: Further, for example, the content providing device 58 acquires the current position and orientation of the own vehicle from the GNSS sensor included in the in-vehicle sensor group 52, and collates the acquired current position and orientation with map information to identify information on buildings present in front of the own vehicle. Then, the content providing device 58 generates the building information display content that displays the information of the identified building as, for example, content to be shared. Since the information on the building present in front of the own vehicle is less important to the driver than the vehicle speed, the content providing device 58 sets lower priority of the vehicle speed display content than the building information display content as the display priority; [0043]: Further, for example, the content providing device 58 acquires information on another vehicle existing around the own vehicle from the peripheral monitoring sensor included in the in-vehicle sensor group 52, and performs pattern matching for an image of the other vehicle acquired from the camera included in the in-vehicle sensor group 52 so as to identify a vehicle name of the other vehicle existing around the own vehicle. Then, the content providing device 58 generates the other vehicle information display content that displays the vehicle name of the identified other vehicle as content to be shared, for example. Since the information on the other vehicle present around the own vehicle is less important to the driver than the vehicle speed, the content providing device 58 sets lower priority of the vehicle speed display content than the other vehicle information display content as the display priority; [0051]: In the next step 106, the control unit 46 determines whether the wearer of the AR glasses 12 is a driver seated in the driver's seat. The determination in step 106 can be performed, for example, by determining whether an image region corresponding to a steering wheel, a meter, or the like exists at a predetermined position in the peripheral captured image acquired in step 100. When the determination in step 106 is affirmative, the process proceeds to step 108; [0060]: On the other hand, in step 106, when the wearer of the AR glasses 12 is the fellow passenger, the determination in step 106 is denied and the process proceeds to step 118; [0063]: As a result, the AR glasses 12 of the fellow passenger is in a state in which building information display content 82, other vehicle information display content 84, and the like are displayed as the virtual images, as shown in FIG. 8 as an example. As is clear from comparing FIG. 8 with FIG. 7, in the present embodiment, the position, the number, and the type of the virtual image (content) to be displayed are set by the AR glasses 12 of the driver and the AR glasses 12 of the fellow passenger are made to be different). 
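To make the Aoki mapping above concrete before the combination rationale that follows: a minimal sketch of role-dependent content selection as the examiner characterizes it (driver vs. fellow passenger, with a display restriction area applied only to the driver, and content repositioned rather than shown inside that area). All identifiers, values, and the rectangle test are assumptions for illustration, not Aoki's implementation:

```python
from dataclasses import dataclass

@dataclass
class Content:
    text: str
    priority: int              # display priority set by the content provider
    purpose: str               # "driver", "passenger", or "shared"
    position: tuple[int, int]  # requested anchor point in the wearer's view

def inside(pos: tuple[int, int], area: tuple[int, int, int, int]) -> bool:
    x0, y0, x1, y1 = area
    return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1

def place_content(role: str, items: list[Content],
                  restriction_area: tuple[int, int, int, int],
                  fallback: tuple[int, int]) -> list[tuple[str, tuple[int, int]]]:
    """Choose and position content for one pair of AR glasses by wearer role.

    Drivers receive driver/shared content, moved to a fallback anchor if the
    requested position falls inside the display restriction area (roads, other
    vehicles, pedestrians). Passengers receive passenger/shared content with no
    restriction area applied.
    """
    allowed = {"shared", role}
    chosen = sorted((c for c in items if c.purpose in allowed),
                    key=lambda c: c.priority, reverse=True)
    placed = []
    for c in chosen:
        pos = c.position
        if role == "driver" and inside(pos, restriction_area):
            pos = fallback      # a real system would also stack relocated items
        placed.append((c.text, pos))
    return placed

items = [
    Content("Vehicle speed: 42 km/h", 3, "driver", (100, 650)),
    Content("Building info", 1, "shared", (400, 300)),
    Content("Watch out for pedestrians!", 4, "driver", (450, 350)),
]
area = (300, 200, 700, 500)        # region the driver frequently watches
print(place_content("driver", items, area, fallback=(80, 60)))
print(place_content("passenger", items, area, fallback=(80, 60)))
```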
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Aoki’s knowledge of displaying specific content for a driver and a passenger, respectively, and modify the process of Doken because such a process enhances the user’s experience by making the display suitable for the case where the wearer of the AR glasses is a driver and the case where the wearer is the fellow passenger ([0066]). Momeyer teaches selecting a degree to which non-safety-relevant information (additional data, fig. 2B; [0005]: FIG. 2B illustrates an advertisement similar to that shown in FIG. 2A, but with additional information that may be provided if the user's attention level is high or if there is a low level of motion of the mobile platform) is displayed together with safety-relevant information (advertisement, fig. 2A; [0004]: FIG. 2A illustrates an advertisement that may be provided if the user's attention level is low or if there is a high level of motion of the mobile platform) based on (attention level, [0004] and [0005]) as determined from the second data ([0010]: the content of advertisements appearing on the mobile platform may be altered to be suitable for the motion of the mobile platform and/or attention level of the user. For example, a low level of motion or a higher attention level of the user suggests that the user is more likely to pay attention to an advertisement and can absorb additional information in the advertisement. Conversely, a high level of motion or a lower attention level of the user indicates that the user is less likely to pay attention to an advertisement and will absorb less information; [0018]: Further data from biorhythm sensors and galvanic skin response sensors may be collected and used to determine the users emotional state and mood, as advertisement content is better absorbed and retained when the subject in an emotional state (positive or negative); [0020]: The activity of the user may be determined using the collected sensor data. For example, using GPS data, it can be determined if the mobile platform 100 is relatively stationary or moving in a car. Similarly, motion sensors, such as accelerometers, gyroscopes, and compass, can indicate whether the user is walking or stationary, standing or seated, as well whether the user is reading the device (in which case the mobile platform 100 would be held relatively stationary) or taking quick glances at the device, indicated by quick movement of the device to a reading position and then return to the original position. Additionally, proximity sensors, which may be a light sensor, capacitive sensors, resistive sensors, etc. may be used to determine if the user is talking into the mobile platform 100 with the mobile platform 100 held up to the user's head, where the user cannot see the screen; or to determine if the mobile platform 100 is in a purse or pocket. An ambient light detector may be used, e.g., to determine whether it is daytime/nighttime or if the mobile platform is inside/outside, which may be used adjust the brightness or colors of the advertisement. A higher state level indicates that the device context is one in which the user will be more likely to pay attention to an advertisement. Of course, other or additional activities may be associated with different states (S); [0022]: The mobile platform 100 may provide the sensor data to the server 150 via network 108 and the server 150 may then determine the attention level A. 
Alternatively, the mobile platform 100 may process the sensor data to determine the attention level A, and simply provide the attention level A to the server 150. Based on the attention level A, the server 150 may provide an advertisement to the mobile platform 100). The combination of Doken and Aoki contains a “base” process of displaying safety and non-safety relevant information to the user which the claimed invention can be seen as an “improvement’ in that the degree to which the non-safety-relevant information is displayed together with safety-relevant information is based on the attention level of the user. Momeyer contains a known technique of using a user’s attention level to determine the degree to which the additional information (non-safety-relevant information) is displayed together with advertisement (safety-relevant information; fig. 2A, fig. 2B, abstract, [0004], [0005], [0010], [0018], [0020] and [0022]) that is applicable to the “base” process. Momeyer’s known technique of using a user’s attention level to determine the degree to which the additional information (non-safety-relevant information) is displayed together with advertisement (safety-relevant information, fig. 2A, fig. 2B, abstract, [0004], [0005], [0010], [0018], [0020] and [0022]) would have been recognized by one skilled in the art as applicable to the “base” process of the combination of Doken and Aoki, and the results would have been predictable and resulted in displaying advertisement and related additional information based on the attention level of a user to provide an appropriate amount of information to the user which results in an improved process. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention. Regarding claim 2, the combination of Doken, Aoki and Momeyer teaches the method according to claim 1, wherein the AR device is a pair of AR glasses (Doken - smart glasses 105/AR glasses, fig. 1 and [0029]; Doken - AR device (e.g., smart glasses), [0034]; Aoki – AR glasses 12, fig. 2, [0027]). Regarding claim 3, the combination of Doken, Aoki and Momeyer teaches wherein the second data are specific to a use of a vehicle (Aoki - [0037]: The in-vehicle sensor group 52 includes a vehicle speed sensor that detects a vehicle speed of the vehicle, an acceleration sensor that detects an acceleration of the vehicle, and a global navigation satellite system (GNSS) sensor that detects a current position of the vehicle. Further, the in-vehicle sensor group 52 includes a peripheral monitoring sensor that detects and monitors an object such as a pedestrian or another vehicle existing around the vehicle by a radar or the like, a camera that photographs the surroundings of the own vehicle, and the like). Regarding claim 8, the combination of Doken, Aoki and Momeyer teaches the method according to claim 1, wherein the safety-relevant information is output in the generating in response to a determination that a safety relevance of the at least one object exceeds a determined or determinable value (object posing relatively higher risk to the user is associated with a greater threshold distance; Doken - [0037]: the threshold for proximity may vary based on a type of identified object. 
For example, an object posing a relatively higher risk to user 102 may be associated with a greater threshold distance and an object posing a relatively lower risk to user 102 may be associated with a lower threshold distance, to give user 102 more time to react to a warning associated with the object posing a relatively higher risk to user 102. As another example, characteristics of the user may impact a proximity threshold set by the SHMS. For example, the SHMS detecting a toddler being within the same room as an identified object (e.g., a knife) may trigger a warning to parents of the toddler and/or the toddler, whereas the SHMS detecting an adult, such as user 102, may set a lower proximity threshold (e.g., do not provide a warning until the user is within five feet of the identified object); Doken - [0038]: The SHMS may determine that a hazardous condition may occur, based on a location of object 120, a classification of the identified object as a puddle; Doken - [0099]: At 1110, the SHMS may compare an image of the potentially hazardous object (e.g., captured by a user device of user 204 of FIG. 2) with related images of the object from the object library (e.g., stored in the data structure of FIG. 4). For example, object 208, which is a stove, may be compared to images of the stove previously captured and stored by the SHMS or generic images of a stove, to determine whether there is a match. Any suitable image processing algorithms and/or machine learning techniques may be used in classifying the object depicted in the image. In some embodiments, the potentially hazardous object and the hazardous image situation may be captured simultaneously, e.g., by the (wearable) camera/smart glasses, and the SHMS may perform a search and comparison of the image(s) to the trigger kept on the SHMS). Regarding claim 11, the combination of Doken, Aoki and Momeyer teaches the method according to claim 1, further comprising continuously transforming the at least one safety-relevant object in the environment of the user into a coordinate system of the user and/or of the AR device (Doken - [0032]: In some embodiments, a Cartesian coordinate plane is used to identify a position of an object in environment 100, with the position recorded as (X, Y) coordinates on the plane. In some embodiments, the coordinates may include a coordinate in the Z-axis, to identify the position of each identified object in 3D space, based on images captured using 3D sensors and any other suitable depth-sensing technology. In some embodiments, coordinates may be normalized to allow for comparison to coordinates stored at the database in association with corresponding objects. As an example, the SHMS may specify that an origin of the coordinate system is considered to be a corner of a room within or corresponding to environment 100, and the position of the object may correspond to the coordinates of the center of the object or one or more other portions of the object; Doken - [0034]: the SHMS may generate a data structure for a current field of view of the user, including object identifiers associated with objects in environment 100, and such data structure may include coordinates representing the position of the field of view and objects in environment 100. 
A field of view may be understood as a portion of environment 100 that is presented to user 102 at a given time via a display (e.g., an angle in a 360-degree sphere environment) when the user is at a particular location in environment 100 and has oriented a user device in a particular direction in environment 100. The field of view may comprise a pair of 2D images to create a stereoscopic view in the case of a VR device; in the case of an AR device (e.g., smart glasses), the field of view may comprise 3D or 2D images, which may include a mix of real objects and virtual objects overlaid on top of the real objects using the AR device (e.g., for smart glasses, a picture captured with a camera and content added by the smart glasses)). Regarding claim 12, the combination of Doken, Aoki and Momeyer teaches the method according to claim 1, wherein the first data specify: (i) a type and/or nature of the at least one object (Doken - [0030]: For example, lamp 118, or any other suitable object, may be an Internet of Things (IoT) device equipped with sensors (e.g., a camera or image sensor, a microphone, or any other suitable sensors or any combination thereof) or other circuitry (e.g., wireless communication circuitry) to indicate to the SHMS a location of object 118 within environment 100 and/or an indication that object 118 is of a particular type (e.g., a lamp or any other suitable household appliance). For example, such IoT devices may communicate with the SHMS via the Internet or directly, e.g., via short-range wireless communication or a wired connection, such as, for example, by transmitting identifiers indicative of a type of the object (e.g., whether the device is a chair, table, robot vacuum, exercise equipment, thermostat, security camera, lighting system, dishwasher, or any other suitable device, or any combination thereof) and/or an orientation and location of the object; Doken - [0032]: the SHMS may determine that object 118 corresponds to a lamp (and/or a particular type of lamp) based on a similarity between the extracted information and stored information; Doken - [0033]: the SHMS may utilize one or more machine learning models to localize and/or classify objects in environment 100. For example, the machine learning model may output a value, a vector, a range of values, any suitable numeric representation of classifications of objects, or any combination thereof. The machine learning model may output one or more classifications and associated confidence values, where the classifications may be any categories into which objects may be classified or characterized), and/or (ii) an instantaneous distance between the user and the at least one object (Doken - [0036]: the SHMS may determine that a human (e.g., user 102) is in proximity to one or more of objects 112, 114, 118, 120 based on one or more of such objects being in a field of view of a camera of one or more of user device 104, user device 105 and user device 106 … user 102 may be considered in proximity to an object if the comparison indicates that the location of user 102 and one of the objects is the same or is within a threshold distance (e.g., five feet, or any other suitable distance, or any combination thereof)), and/or (iii) an instantaneous velocity of the at least one object, and/or a predicted trajectory of the object in the environment of the user. 
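Claims 12 and 13 enumerate what the first and second data specify. A minimal sketch of the two records as the claims describe them, plus a proximity check in the spirit of the threshold comparisons cited for claim 8; the field names, types, and threshold value are assumptions for illustration, not taken from the application or the cited references:

```python
from dataclasses import dataclass

@dataclass
class ObjectData:
    """'First data' about an object in the user's environment (claim 12)."""
    object_type: str                          # type and/or nature of the object
    distance_m: float                         # instantaneous distance to the user
    velocity_mps: tuple[float, float, float]  # instantaneous velocity of the object
    predicted_trajectory: list[tuple[float, float, float]]  # predicted path

@dataclass
class UserData:
    """'Second data' about the user (claim 13)."""
    position: tuple[float, float, float]      # instantaneous position in the environment
    velocity_mps: tuple[float, float, float]  # instantaneous velocity of the user
    predicted_trajectory: list[tuple[float, float, float]]  # predicted path

def safety_relevant(obj: ObjectData, threshold_m: float = 1.5) -> bool:
    """Simple proximity test echoing the object-dependent thresholds cited above."""
    return obj.distance_m <= threshold_m

obj = ObjectData("stove", 1.2, (0.0, 0.0, 0.0), [(2.0, 1.0, 0.0)])
user = UserData((0.0, 0.0, 0.0), (0.8, 0.0, 0.0), [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
print(safety_relevant(obj))  # True: within the assumed 1.5 m threshold
```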
Regarding claim 13, the combination of Doken, Aoki and Momeyer teaches the method according to claim 1, wherein the second data specify: (i) an instantaneous position of the user in the user’s environment (Doken - [0036]: the SHMS may determine a current location of user 102 within environment 100 … the SHMS may track the movements of the user with, e.g., gyroscopes, accelerometers, cameras, etc., in combination with control circuitry; Doken - [0094]: At 1102, the SHMS may collect location data associated with a user device (e.g., a wearable camera or any other suitable device) of a vulnerable human (e.g., child 204 of FIG. 2) in a particular environment (e.g., environment 200 of FIG. 2). In some embodiments, the SHMS may continuously track and store locations of the users within the particular environment), and/or (ii) an instantaneous velocity of the user in the user’s environment (Doken - [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device), and/or (iii) a predicted trajectory of the user in the user’s environment.

Regarding claim 14, Doken teaches a device (a smart home management system, [0004]) comprising: a sensor system ([0029]: The SHMS may utilize any suitable number and types of sensors to determine information related to the objects in environment 100, e.g., an image sensor, ultrasonic sensor, radar sensor, LED sensor, LIDAR sensor, or any other suitable sensor, or any combination thereof, to detect and classify objects in environment 100 … one or more sensors of user device 104 may be used to ascertain a location of an object by outputting a light or radio wave signal, and measuring a time for a return signal to be detected and/or measuring an intensity of the returned signal, and/or performing image processing on images captured by the image sensor of environment 100; [0036]: gyroscopes, accelerometers, cameras); an augmented reality (AR) display ([0029]: AR glasses, AR head-mounted display (HMD), virtual reality (VR) HMD); a processing system including at least one processor ([0076] and [0092]), wherein, for generating state-dependent AR display of a combination of safety-relevant (notification 302, fig. 3A) and non-safety-relevant (buy option 308, fig. 3A) information for a user of the AR device (abstract: In response to determining that the hazardous condition may occur, an augmented reality scene associated with the potentially hazardous condition associated with the object may be generated for presentation, at a user device), the processing system is configured to: obtain, with the sensor system ([0029]: an image sensor, ultrasonic sensor, radar sensor, LED sensor, LIDAR sensor, or any other suitable sensor, or any combination thereof, to detect and classify objects in environment), first data (locations of one or more object in environment 100, [0028]) about at least one object in an indirect and/or direct environment of the user (fig.
13 step 1302; [0028]: a smart home management system (SHMS) may be configured to identify locations of one or more objects in environment 100 and determine respective classifications of the identified one or more objects; [0029]: one or more sensors of user device 104 may be used to ascertain a location of an object by outputting a light or radio wave signal, and measuring a time for a return signal to be detected and/or measuring an intensity of the returned signal, and/or performing image processing on images captured by the image sensor of environment 100. In some embodiments, the SHMS may be configured to receive input from user 102 identifying a location and/or classification of a particular object; [0110]: At 1302, the SHMS may identify, using a sensor, a location of an object in an environment. For example, user 102 of FIG. 1 may utilize one or more of user device 104, 105, 106 to scan his or her surroundings in environment 100, e.g., the home of user 102, and capture images of one or more objects in environment 100, which may be used to determine object locations within the environment. In some embodiments, wireless signal characteristics may be used to identify locations of objects in environment 100. The SHMS may generate a 3D map of environment 100 specifying locations of objects and/or locations of users in environment 100. In some embodiments, the user may be requested by the SHMS to scan his or her surroundings in environment 100, e.g., during a home inspection or at any other suitable time); obtain, with the sensor system ([0036]: gyroscopes, accelerometers, cameras), second data (location/movement/speed/biometric/heart rate data associated with a user of the user device), about the user ([0036]: the SHMS may determine a current location of user 102 within environment 100 … the SHMS may track the movements of the user with, e.g., gyroscopes, accelerometers, cameras, etc., in combination with control circuitry; [0045]: biometric data and/or any suitable user information indicative of a current state of user 102, and/or identify characteristics of the user based on real-time observations via one or more sensors; [0094]: At 1102, the SHMS may collect location data associated with a user device (e.g., a wearable camera or any other suitable device) of a vulnerable human (e.g., child 204 of FIG. 2) in a particular environment (e.g., environment 200 of FIG. 2). 
In some embodiments, the SHMS may continuously track and store locations of the users within the particular environment, and generate a map (e.g., an AR home cloud 3D Map) indicating locations of objects and users within the environment; [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device or the SHMS detects an elevated heart rate (e.g., detected by smart watch or other user device)); ascertain a safety relevance (determining occurrence of a hazardous condition) of the at least one object to the user based on the first data and/or the second data ([0006]: determining that the hazardous condition may occur, based on the combination, comprises determining that a database stores an indication that the combination of the identified characteristic and the object is indicative that the hazardous condition may occur; [0035]: biometric data or facial recognition techniques may be employed; [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device or the SHMS detects an elevated heart rate (e.g., detected by smart watch or other user device), which may indicate that the user is busy and thus the user's cognitive state may not be sufficiently alert to pay attention to hazards and unsecured conditions. In some embodiments, such changed status may trigger the hazard AR warning system to turn active for the user associated with the changed status; [0096]: At 1104, the SHMS may compare the location of the user device determined at 1102 to location data of a hazardous object. For example, the SHMS may determine whether a user device (e.g., a wearable camera or any other suitable device) of user 204 is proximate to object 208 (e.g., within a predefined threshold, such as one foot, or any other suitable distance) or whether object 208 of FIG. 2 is within a field of view of a user device of user 204; [0112]: the object may be an IoT device that communicates with the SHMS to indicate that a user is in proximity to the object, e.g., based on wireless communication with a nearby user device that is typically carried or worn by a user; [0114]: At 1310, the SHMS may determine whether a hazardous condition may occur, based on a combination of the location of the object, the classification of the object, and the identity of the human in proximity with the object); ascertain a state of the user (level of distraction) based on the second data ([0006]: the characteristic corresponds to one or more of an age of the human, and a level of distraction of the human; [0045]: the SHMS may reference the user profile of a particular, user which may store demographic data and/or biometric data and/or any suitable user information indicative of a current state of user 102, and/or identify characteristics of the user based on real-time observations via one or more sensors. 
For example, if the SHMS determines that a first user is elderly (or is child) and/or physically unfit (e.g., having mobility issues, hearing or vision impairments, etc.), and/or cognitively in decline, and/or in an angry or stressed-out state, the SHMS may be more likely to determine that a particular scenario poses a larger risk to the first user as compared to a second user who is relatively young and/or physically fit and/or in a good mood, since the first user may be more distracted or disoriented or otherwise less able to avoid the potentially hazardous condition as compared to the second user; [0046]: a distraction level of a user may be based on any suitable factor, e.g., a level of physical exertion for a current activity being performed by the user may be taken into account by the SHMS; [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device or the SHMS detects an elevated heart rate (e.g., detected by smart watch or other user device), which may indicate that the user is busy and thus the user's cognitive state may not be sufficiently alert to pay attention to hazards and unsecured conditions); determine the safety-relevant information based on the least the first data (based on the user’s location in proximity to a potentially hazardous object, the system determines a potential risk and provides an indication or warning related to the potential risk; [0039]: the SHMS may, in response to determining that the potentially hazardous condition may occur, generate for presentation, at a user device (e.g., at least one of user device 104, 105, 106 or any other suitable device or any combination thereof), an indication associated with the potentially hazardous condition that may occur; [0043]: the SHMS may determine that, based on the proximity of television 116 to puddle 120, there is a potential risk of an electrical accident or electrical shock to user 102 if puddle 120 were to expand or otherwise contact television 116. In response to such a determination, the SHMS may provide indication 111 at user device 104 (and/or any other suitable user device) indicating the risk of an electrical accident or shock to user 102; [0045]: the SHMS may assign certain weights to certain vulnerability states, which may be used in determining whether to present an augmented reality scene associated with the potentially hazardous condition, and/or a manner in which to present the scene. For example, the first user in the example above, determined by the SHMS to be more vulnerable, may be presented a more urgent alert than the second user, determined by the SHMS to less vulnerable. 
In some embodiments, a manner of outputting an alert may depend on cognitive and/or physical capabilities/limitations indicated in a user profile, e.g., providing the alert in a language that matches a language understood by a particular user as indicated in the user's profile, providing audio-based or image-based alerts in favor of text if a user has a low literacy level as indicated in the user's profile, etc.; [0046]: if user 102 is determined by the SHMS to be on a telephone call while near an oven that is in operation, or user 102 is determined by the SHMS to be climbing the stairs, the SHMS may determine that user 102 is distracted, and may provide a warning regarding a potentially hazardous condition based on this determination; [0049]: notification 206 may correspond to a camera feed of user 204 in proximity to object 208, and/or an augmented reality scene providing a warning or indication of how a hazardous condition might occur; [0051]: As shown in FIG. 3A, the SHMS may cause user device 105 to generate notification 302 for presentation. Notification 302 may comprise a recommended corrective action, e.g., “Secure loose rugs with double-faced tape or slip-resistant backing,” associated with object 304. Object 304 may be a rug present in an environment of a user, and notification 302 may be provided by the SHMS in response to a user performing a walkthrough of an environment containing object 304 and/or capturing an image or otherwise determining the presence of object 304 and/or monitoring actions of users in the environment, e.g., determining that a user almost slipped on rug 304. In some embodiments, object 304 depicted in notification 302 may match the appearance of the object within the environment and may be augmented with object 306 (e.g., a slip-resistant backing for object 304) that is not currently present in the environment. In some embodiments, notification 302 may be provided in response to determining that the potentially hazardous condition is not proximate to the user or in a field of view of the user (e.g., object 304 is no longer detected in a camera feed). In some embodiments, notification 302 may comprise a buy option 308, which when selected may enable the user to navigate to a website or application that enables the user to purchase object 306 associated with the corrective action; [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device or the SHMS detects an elevated heart rate (e.g., detected by smart watch or other user device), which may indicate that the user is busy and thus the user's cognitive state may not be sufficiently alert to pay attention to hazards and unsecured conditions. In some embodiments, such changed status may trigger the hazard AR warning system to turn active for the user associated with the changed status; [0100]: At 1112, the SHMS may issue a household hazard AR warning to the user device (e.g., a user device of the vulnerable user) in any suitable form, e.g., image, audio, haptic, or any combination thereof. The warning may comprise an augmented reality scene that demonstrates what an accident may look like if it occurred, and/or instructions to avoid interacting with the potentially hazardous object. In some embodiments, if the SHMS, e.g., via computer vision, detects a vulnerable user close to, and a human body part (e.g., hand, arm, leg, etc.) 
next to, the hazardous object within the field-of-view of the user, the SHMS may immediately jump to a hazard AR warning level of a high urgency); and render, in the AR device, a display of the safety-relevant and non-safety-relevant information depending on the ascertained safety relevance and the ascertained state of the user ([0045]: the SHMS may assign certain weights to certain vulnerability states, which may be used in determining whether to present an augmented reality scene associated with the potentially hazardous condition, and/or a manner in which to present the scene. For example, the first user in the example above, determined by the SHMS to be more vulnerable, may be presented a more urgent alert than the second user, determined by the SHMS to less vulnerable. In some embodiments, a manner of outputting an alert may depend on cognitive and/or physical capabilities/limitations indicated in a user profile, e.g., providing the alert in a language that matches a language understood by a particular user as indicated in the user's profile, providing audio-based or image-based alerts in favor of text if a user has a low literacy level as indicated in the user's profile, etc.; [0046]: if user 102 is determined by the SHMS to be on a telephone call while near an oven that is in operation, or user 102 is determined by the SHMS to be climbing the stairs, the SHMS may determine that user 102 is distracted, and may provide a warning regarding a potentially hazardous condition based on this determination; [0051]: As shown in FIG. 3A, the SHMS may cause user device 105 to generate notification 302 for presentation. Notification 302 may comprise a recommended corrective action, e.g., “Secure loose rugs with double-faced tape or slip-resistant backing,” associated with object 304. Object 304 may be a rug present in an environment of a user, and notification 302 may be provided by the SHMS in response to a user performing a walkthrough of an environment containing object 304 and/or capturing an image or otherwise determining the presence of object 304 and/or monitoring actions of users in the environment, e.g., determining that a user almost slipped on rug 304. In some embodiments, object 304 depicted in notification 302 may match the appearance of the object within the environment and may be augmented with object 306 (e.g., a slip-resistant backing for object 304) that is not currently present in the environment. In some embodiments, notification 302 may be provided in response to determining that the potentially hazardous condition is not proximate to the user or in a field of view of the user (e.g., object 304 is no longer detected in a camera feed). In some embodiments, notification 302 may comprise a buy option 308, which when selected may enable the user to navigate to a website or application that enables the user to purchase object 306 associated with the corrective action; [0095]: if the user is moving at an accelerated speed (such as running, walking up/down steps), which may be detected by an accelerometer or gyrometer or other sensor by way of a user device or the SHMS detects an elevated heart rate (e.g., detected by smart watch or other user device), which may indicate that the user is busy and thus the user's cognitive state may not be sufficiently alert to pay attention to hazards and unsecured conditions. 
In some embodiments, such changed status may trigger the hazard AR warning system to turn active for the user associated with the changed status; [0100]: At 1112, the SHMS may issue a household hazard AR warning to the user device (e.g., a user device of the vulnerable user) in any suitable form, e.g., image, audio, haptic, or any combination thereof. The warning may comprise an augmented reality scene that demonstrates what an accident may look like if it occurred, and/or instructions to avoid interacting with the potentially hazardous object. In some embodiments, if the SHMS, e.g., via computer vision, detects a vulnerable user close to, and a human body part (e.g., hand, arm, leg, etc.) next to, the hazardous object within the field-of-view of the user, the SHMS may immediately jump to a hazard AR warning level of a high urgency). Doken does not explicitly teach the rendering includes: (I) selecting a particular balance between the display of the safety-relevant information and the display of the non-safety-relevant information based on a classification of a role of the user as determined from the second data and/or (II) selecting a degree to which non-safety-relevant information is displayed together with safety-relevant information based on (i) a metric of urgency of the safety- relevant information as determined from the first data and/or the second data, and/or (ii) an alertness of the user as determined from the second data. Aoki, in a similar field of endeavor, teaches the rendering includes: (I) selecting a particular balance between the display of the safety-relevant information (for a driver, no information is displayed in the display restriction area where the object such as a road, another vehicle, a pedestrian, etc. is present, and generated content specific for the driver (such as messages “watch out for pedestrians”, vehicle speed display content, etc.) and other shared information is displayed outside the display distraction area; [0044]: when a distance between the own vehicle and the pedestrian or the other vehicle detected by the peripheral monitoring sensor included in the in-vehicle sensor group 52 is equal to or less than a predetermined value, the content providing device 58 may generate content such as “watch out for pedestrians!” as content for the driver, for example; [0045]: for example, when the own vehicle is approaching a point for turning at an intersection in a route to a destination on which the own vehicle is traveling, the content providing device 58 may generate content such as “right turn ahead!” as content for the driver, for example. Further, for example, when the own vehicle is to arrive at the destination soon, the content providing device 58 may generate content such as “arriving at destination soon!” as content for the driver, for example; [0052]: In step 108, with respect to the content providing device 58 of the vehicle-side system 50, the control unit 46 requests the content for the driver and the content to be shared to be displayed on the AR glasses 12 of the driver and acquires the information of the content from the content providing device 58; [0054]: As shown in FIG. 7, the display restriction area 74 is set to be an area in which an object (for example, a road, another vehicle, a pedestrian, etc.) that the driver frequently sees during driving is present, within the range of the driver's actual field of view. 
Thus, by setting the display position of the content outside the display restriction area 74, vehicle speed display content 76, building information display content 78, and other vehicle information display content 80 shown in FIG. 7 as an example can be suppressed from being a hinderance of driving by the driver; [0059]: As a result, as shown in FIG. 7, the AR glasses 12 of the driver is in a display state in which the vehicle speed display content 76, the building information display content 78, the other vehicle information display content 80, and the like are displayed as the virtual images) and the display of the non-safety-relevant information (building information display content and other vehicle information display content is functionally analogous to non-safety-relevant information; for a passenger, other information can be displayed in any area along with displaying the objects such as a road, another vehicle, a pedestrian, etc.; [0060]: In step 118, with respect to the content providing device 58 of the vehicle-side system 50, the control unit 46 requests the content for the fellow passenger and the content to be shared to be displayed on the AR glasses 12 of the fellow passenger and acquires the information of the content from the content providing device 58; [0061]: In step 120, the control unit 46 extracts one piece of content information from the content information acquired in step 118. In step 122, the control unit 46 causes the display unit 26 to display the content from which the information has been extracted in step 110 on the AR glasses 12 as the virtual image. In the AR glasses 12 for the fellow passenger, since the wearer (fellow passenger) does not drive the vehicle, the display restriction area 74 is not set in the present embodiment, and the display position of the content is set independently of the display restriction area 74 and the display of the content is not simplified) based on a classification of a role of the user as determined from the second data (the wearer of the AR glasses seated in a vehicle is determined to be a driver or the passenger by determining whether an image region corresponding to a steering wheel, a meter, or the like exists at a predetermined position in the peripheral captured image acquired in step 100; fig. 5 step 106; [0040]: The content providing device 58 generates content to be displayed as a virtual image on the AR glasses 12 based on the information collected from the in-vehicle sensor group 52 and the like. In addition, the content providing device 58 sets the purpose (whether it is for the driver, for the fellow passenger, or for sharing) for the generated content. The content for the driver is the content to be displayed only on the AR glasses 12 worn by the driver. The content for the fellow passenger is the content to be displayed only on the AR glasses 12 worn by the fellow passenger. The content to be shared is content to be displayed on the AR glasses 12 worn by the driver and the AR glasses 12 worn by the fellow passenger; [0041]: To give an example of the content generated by the content providing device 58, the content providing device 58 acquires the vehicle speed from the vehicle speed sensor included in the in-vehicle sensor group 52, and generates the vehicle speed content that displays the acquired vehicle speed as the content for the driver. 
Further, since the vehicle speed is highly important information for the driver, the content providing device 58 sets a relatively high priority as the display priority for the vehicle speed display content; [0042]: Further, for example, the content providing device 58 acquires the current position and orientation of the own vehicle from the GNSS sensor included in the in-vehicle sensor group 52, and collates the acquired current position and orientation with map information to identify information on buildings present in front of the own vehicle. Then, the content providing device 58 generates the building information display content that displays the information of the identified building as, for example, content to be shared. Since the information on the building present in front of the own vehicle is less important to the driver than the vehicle speed, the content providing device 58 sets lower priority of the vehicle speed display content than the building information display content as the display priority; [0043]: Further, for example, the content providing device 58 acquires information on another vehicle existing around the own vehicle from the peripheral monitoring sensor included in the in-vehicle sensor group 52, and performs pattern matching for an image of the other vehicle acquired from the camera included in the in-vehicle sensor group 52 so as to identify a vehicle name of the other vehicle existing around the own vehicle. Then, the content providing device 58 generates the other vehicle information display content that displays the vehicle name of the identified other vehicle as content to be shared, for example. Since the information on the other vehicle present around the own vehicle is less important to the driver than the vehicle speed, the content providing device 58 sets lower priority of the vehicle speed display content than the other vehicle information display content as the display priority; [0051]: In the next step 106, the control unit 46 determines whether the wearer of the AR glasses 12 is a driver seated in the driver's seat. The determination in step 106 can be performed, for example, by determining whether an image region corresponding to a steering wheel, a meter, or the like exists at a predetermined position in the peripheral captured image acquired in step 100. When the determination in step 106 is affirmative, the process proceeds to step 108; [0060]: On the other hand, in step 106, when the wearer of the AR glasses 12 is the fellow passenger, the determination in step 106 is denied and the process proceeds to step 118; [0063]: As a result, the AR glasses 12 of the fellow passenger is in a state in which building information display content 82, other vehicle information display content 84, and the like are displayed as the virtual images, as shown in FIG. 8 as an example. As is clear from comparing FIG. 8 with FIG. 7, in the present embodiment, the position, the number, and the type of the virtual image (content) to be displayed are set by the AR glasses 12 of the driver and the AR glasses 12 of the fellow passenger are made to be different). 
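For illustration only, the role-dependent balancing that the rejection reads onto Aoki (the driver receives a restricted, safety-weighted set of AR content, while a fellow passenger receives the full set) can be sketched in Python. Everything in the sketch below is a hypothetical added for this report — the class and function names, the steering-wheel heuristic for role classification, and the item cap are assumptions, not code from Aoki, Doken, or the application:

```python
# Hypothetical sketch of role-based content balancing (editor's illustration;
# not code from Aoki, Doken, or the application under examination).
from dataclasses import dataclass
from enum import Enum, auto


class Role(Enum):
    DRIVER = auto()
    PASSENGER = auto()


@dataclass
class ContentItem:
    text: str
    safety_relevant: bool
    priority: int  # higher value = more important to show


def classify_role(steering_wheel_in_view: bool) -> Role:
    """Rough analogue of Aoki's step 106: a wearer who sees the steering wheel
    or meter cluster at the expected position is treated as the driver."""
    return Role.DRIVER if steering_wheel_in_view else Role.PASSENGER


def select_content(items: list[ContentItem], role: Role,
                   max_driver_items: int = 3) -> list[ContentItem]:
    """For a driver, keep safety-relevant content plus only the highest-priority
    non-safety content, capped overall; for a passenger, show everything."""
    if role is Role.PASSENGER:
        return items
    safety = [i for i in items if i.safety_relevant]
    other = sorted((i for i in items if not i.safety_relevant),
                   key=lambda i: i.priority, reverse=True)
    return (safety + other)[:max_driver_items]


if __name__ == "__main__":
    feed = [
        ContentItem("Watch out for pedestrians!", True, 10),
        ContentItem("Vehicle speed: 47 km/h", True, 8),
        ContentItem("Cafe Milano, 200 m ahead", False, 3),
        ContentItem("Blue sedan ahead: Model X", False, 2),
    ]
    print([i.text for i in select_content(feed, classify_role(True))])   # driver view
    print([i.text for i in select_content(feed, classify_role(False))])  # passenger view
```

The point of the sketch is only that the same content feed yields a reduced, safety-weighted display for the wearer classified as the driver and an unrestricted display for the passenger, which is the balance the Office Action attributes to Aoki's driver/passenger distinction.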
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Aoki’s knowledge of displaying specific content for a driver and a passenger, respectively, and modify the system of Doken because such a process enhances the user’s experience by making the display suitable for the case where the wearer of the AR glasses is a driver and the case where the wearer is the fellow passenger ([0066]). Momeyer teaches selecting a degree to which non-safety-relevant information (additional data, fig. 2B; [0005]: FIG. 2B illustrates an advertisement similar to that shown in FIG. 2A, but with additional information that may be provided if the user's attention level is high or if there is a low level of motion of the mobile platform) is displayed together with safety-relevant information (advertisement, fig. 2A; [0004]: FIG. 2A illustrates an advertisement that may be provided if the user's attention level is low or if there is a high level of motion of the mobile platform) based on (attention level, [0004] and [0005]) as determined from the second data ([0010]: the content of advertisements appearing on the mobile platform may be altered to be suitable for the motion of the mobile platform and/or attention level of the user. For example, a low level of motion or a higher attention level of the user suggests that the user is more likely to pay attention to an advertisement and can absorb additional information in the advertisement. Conversely, a high level of motion or a lower attention level of the user indicates that the user is less likely to pay attention to an advertisement and will absorb less information; [0018]: Further data from biorhythm sensors and galvanic skin response sensors may be collected and used to determine the users emotional state and mood, as advertisement content is better absorbed and retained when the subject in an emotional state (positive or negative); [0020]: The activity of the user may be determined using the collected sensor data. For example, using GPS data, it can be determined if the mobile platform 100 is relatively stationary or moving in a car. Similarly, motion sensors, such as accelerometers, gyroscopes, and compass, can indicate whether the user is walking or stationary, standing or seated, as well whether the user is reading the device (in which case the mobile platform 100 would be held relatively stationary) or taking quick glances at the device, indicated by quick movement of the device to a reading position and then return to the original position. Additionally, proximity sensors, which may be a light sensor, capacitive sensors, resistive sensors, etc. may be used to determine if the user is talking into the mobile platform 100 with the mobile platform 100 held up to the user's head, where the user cannot see the screen; or to determine if the mobile platform 100 is in a purse or pocket. An ambient light detector may be used, e.g., to determine whether it is daytime/nighttime or if the mobile platform is inside/outside, which may be used adjust the brightness or colors of the advertisement. A higher state level indicates that the device context is one in which the user will be more likely to pay attention to an advertisement. Of course, other or additional activities may be associated with different states (S); [0022]: The mobile platform 100 may provide the sensor data to the server 150 via network 108 and the server 150 may then determine the attention level A. 
Alternatively, the mobile platform 100 may process the sensor data to determine the attention level A, and simply provide the attention level A to the server 150. Based on the attention level A, the server 150 may provide an advertisement to the mobile platform 100). The combination of Doken and Aoki contains a “base” system of displaying safety-relevant and non-safety-relevant information to the user, of which the claimed invention can be seen as an “improvement” in that the degree to which the non-safety-relevant information is displayed together with safety-relevant information is based on the attention level of the user. Momeyer contains a known technique of using a user’s attention level to determine the degree to which the additional information (non-safety-relevant information) is displayed together with an advertisement (safety-relevant information; fig. 2A, fig. 2B, abstract, [0004], [0005], [0010], [0018], [0020] and [0022]) that is applicable to the “base” system. Momeyer’s known technique of using a user’s attention level to determine the degree to which the additional information (non-safety-relevant information) is displayed together with an advertisement (safety-relevant information, fig. 2A, fig. 2B, abstract, [0004], [0005], [0010], [0018], [0020] and [0022]) would have been recognized by one skilled in the art as applicable to the “base” system of the combination of Doken and Aoki, and the results would have been predictable and resulted in displaying an advertisement and related additional information based on the attention level of a user to provide an appropriate amount of information to the user, which results in an improved system. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention. Regarding claim 15, the combination of Doken, Aoki and Momeyer teaches the device according to claim 14, wherein the device is a pair of AR glasses (Doken - smart glasses 105/AR glasses, fig. 1 and [0029]; Doken - AR device (e.g., smart glasses), [0034]; Aoki – AR glasses 12, fig. 2) or as a head-up display or has a pair of AR glasses (Doken - smart glasses 105/AR glasses, fig. 1 and [0029]; Doken - AR device (e.g., smart glasses), [0034]) or has a head-up display (Aoki – [0036]: a head up display). Claim 16 is similar in scope to claim 14 and therefore the examiner provides similar rationale to reject this claim. Moreover, Doken teaches a non-transitory computer-readable medium ([0084]) on which is stored a computer program including instructions for ascertaining a configuration of a user-state-dependent output of safety-relevant (notification 302, fig. 3A) and non-safety-relevant information (buy option 308, fig. 3A) for a user of an AR device (abstract: In response to determining that the hazardous condition may occur, an augmented reality scene associated with the potentially hazardous condition associated with the object may be generated for presentation, at a user device), the instructions, when executed by a computer ([0084]). Regarding claim 17, the combination of Doken, Aoki and Momeyer teaches the method according to claim 1, wherein the generating includes the selecting of the particular balance between the display of the safety-relevant information (Aoki - for a driver, no information is displayed in the display restriction area where the object such as a road, another vehicle, a pedestrian, etc.
is present, and generated content specific for the driver (such as messages “watch out for pedestrians”, vehicle speed display content, etc.) and other shared information is displayed outside the display distraction area; Aoki - [0044]: when a distance between the own vehicle and the pedestrian or the other vehicle detected by the peripheral monitoring sensor included in the in-vehicle sensor group 52 is equal to or less than a predetermined value, the content providing device 58 may generate content such as “watch out for pedestrians!” as content for the driver, for example; Aoki - [0045]: for example, when the own vehicle is approaching a point for turning at an intersection in a route to a destination on which the own vehicle is traveling, the content providing device 58 may generate content such as “right turn ahead!” as content for the driver, for example. Further, for example, when the own vehicle is to arrive at the destination soon, the content providing device 58 may generate content such as “arriving at destination soon!” as content for the driver, for example; Aoki - [0052]: In step 108, with respect to the content providing device 58 of the vehicle-side system 50, the control unit 46 requests the content for the driver and the content to be shared to be displayed on the AR glasses 12 of the driver and acquires the information of the content from the content providing device 58; Aoki - [0054]: As shown in FIG. 7, the display restriction area 74 is set to be an area in which an object (for example, a road, another vehicle, a pedestrian, etc.) that the driver frequently sees during driving is present, within the range of the driver's actual field of view. Thus, by setting the display position of the content outside the display restriction area 74, vehicle speed display content 76, building information display content 78, and other vehicle information display content 80 shown in FIG. 7 as an example can be suppressed from being a hinderance of driving by the driver; Aoki - [0059]: As a result, as shown in FIG. 7, the AR glasses 12 of the driver is in a display state in which the vehicle speed display content 76, the building information display content 78, the other vehicle information display content 80, and the like are displayed as the virtual images) and the display of the non-safety-relevant information (Aoki - building information display content and other vehicle information display content is functionally analogous to non-safety-relevant information; for a passenger, other information can be displayed in any area along with displaying the objects such as a road, another vehicle, a pedestrian, etc.; Aoki - [0060]: In step 118, with respect to the content providing device 58 of the vehicle-side system 50, the control unit 46 requests the content for the fellow passenger and the content to be shared to be displayed on the AR glasses 12 of the fellow passenger and acquires the information of the content from the content providing device 58; Aoki - [0061]: In step 120, the control unit 46 extracts one piece of content information from the content information acquired in step 118. In step 122, the control unit 46 causes the display unit 26 to display the content from which the information has been extracted in step 110 on the AR glasses 12 as the virtual image. 
In the AR glasses 12 for the fellow passenger, since the wearer (fellow passenger) does not drive the vehicle, the display restriction area 74 is not set in the present embodiment, and the display position of the content is set independently of the display restriction area 74 and the display of the content is not simplified) based on the classification of the role of the user as determined from the second data (Aoki - the wearer of the AR glasses seated in a vehicle is determined to be a driver or the passenger by determining whether an image region corresponding to a steering wheel, a meter, or the like exists at a predetermined position in the peripheral captured image acquired in step 100; Aoki - fig. 5 step 106; Aoki - [0040]: The content providing device 58 generates content to be displayed as a virtual image on the AR glasses 12 based on the information collected from the in-vehicle sensor group 52 and the like. In addition, the content providing device 58 sets the purpose (whether it is for the driver, for the fellow passenger, or for sharing) for the generated content. The content for the driver is the content to be displayed only on the AR glasses 12 worn by the driver. The content for the fellow passenger is the content to be displayed only on the AR glasses 12 worn by the fellow passenger. The content to be shared is content to be displayed on the AR glasses 12 worn by the driver and the AR glasses 12 worn by the fellow passenger; Aoki - [0041]: To give an example of the content generated by the content providing device 58, the content providing device 58 acquires the vehicle speed from the vehicle speed sensor included in the in-vehicle sensor group 52, and generates the vehicle speed content that displays the acquired vehicle speed as the content for the driver. Further, since the vehicle speed is highly important information for the driver, the content providing device 58 sets a relatively high priority as the display priority for the vehicle speed display content; Aoki - [0042]: Further, for example, the content providing device 58 acquires the current position and orientation of the own vehicle from the GNSS sensor included in the in-vehicle sensor group 52, and collates the acquired current position and orientation with map information to identify information on buildings present in front of the own vehicle. Then, the content providing device 58 generates the building information display content that displays the information of the identified building as, for example, content to be shared. Since the information on the building present in front of the own vehicle is less important to the driver than the vehicle speed, the content providing device 58 sets lower priority of the vehicle speed display content than the building information display content as the display priority; Aoki - [0043]: Further, for example, the content providing device 58 acquires information on another vehicle existing around the own vehicle from the peripheral monitoring sensor included in the in-vehicle sensor group 52, and performs pattern matching for an image of the other vehicle acquired from the camera included in the in-vehicle sensor group 52 so as to identify a vehicle name of the other vehicle existing around the own vehicle. Then, the content providing device 58 generates the other vehicle information display content that displays the vehicle name of the identified other vehicle as content to be shared, for example. 
Since the information on the other vehicle present around the own vehicle is less important to the driver than the vehicle speed, the content providing device 58 sets lower priority of the vehicle speed display content than the other vehicle information display content as the display priority; Aoki - [0051]: In the next step 106, the control unit 46 determines whether the wearer of the AR glasses 12 is a driver seated in the driver's seat. The determination in step 106 can be performed, for example, by determining whether an image region corresponding to a steering wheel, a meter, or the like exists at a predetermined position in the peripheral captured image acquired in step 100. When the determination in step 106 is affirmative, the process proceeds to step 108; [0060]: On the other hand, in step 106, when the wearer of the AR glasses 12 is the fellow passenger, the determination in step 106 is denied and the process proceeds to step 118; Aoki - [0063]: As a result, the AR glasses 12 of the fellow passenger is in a state in which building information display content 82, other vehicle information display content 84, and the like are displayed as the virtual images, as shown in FIG. 8 as an example. As is clear from comparing FIG. 8 with FIG. 7, in the present embodiment, the position, the number, and the type of the virtual image (content) to be displayed are set by the AR glasses 12 of the driver and the AR glasses 12 of the fellow passenger are made to be different). Regarding claim 18, the combination of Doken, Aoki and Momeyer teaches the method according to claim 17, wherein the classifying of the role of the user is between a vehicle driver classification and a passenger classification (Aoki - the wearer of the AR glasses seated in a vehicle is determined to be a driver or the passenger by determining whether an image region corresponding to a steering wheel, a meter, or the like exists at a predetermined position in the peripheral captured image acquired in step 100; Aoki - fig. 5 step 106; Aoki - [0040]: The content providing device 58 generates content to be displayed as a virtual image on the AR glasses 12 based on the information collected from the in-vehicle sensor group 52 and the like. In addition, the content providing device 58 sets the purpose (whether it is for the driver, for the fellow passenger, or for sharing) for the generated content. The content for the driver is the content to be displayed only on the AR glasses 12 worn by the driver. The content for the fellow passenger is the content to be displayed only on the AR glasses 12 worn by the fellow passenger. The content to be shared is content to be displayed on the AR glasses 12 worn by the driver and the AR glasses 12 worn by the fellow passenger; Aoki - [0041]: To give an example of the content generated by the content providing device 58, the content providing device 58 acquires the vehicle speed from the vehicle speed sensor included in the in-vehicle sensor group 52, and generates the vehicle speed content that displays the acquired vehicle speed as the content for the driver. 
Further, since the vehicle speed is highly important information for the driver, the content providing device 58 sets a relatively high priority as the display priority for the vehicle speed display content; Aoki - [0042]: Further, for example, the content providing device 58 acquires the current position and orientation of the own vehicle from the GNSS sensor included in the in-vehicle sensor group 52, and collates the acquired current position and orientation with map information to identify information on buildings present in front of the own vehicle. Then, the content providing device 58 generates the building information display content that displays the information of the identified building as, for example, content to be shared. Since the information on the building present in front of the own vehicle is less important to the driver than the vehicle speed, the content providing device 58 sets lower priority of the vehicle speed display content than the building information display content as the display priority; Aoki - [0043]: Further, for example, the content providing device 58 acquires information on another vehicle existing around the own vehicle from the peripheral monitoring sensor included in the in-vehicle sensor group 52, and performs pattern matching for an image of the other vehicle acquired from the camera included in the in-vehicle sensor group 52 so as to identify a vehicle name of the other vehicle existing around the own vehicle. Then, the content providing device 58 generates the other vehicle information display content that displays the vehicle name of the identified other vehicle as content to be shared, for example. Since the information on the other vehicle present around the own vehicle is less important to the driver than the vehicle speed, the content providing device 58 sets lower priority of the vehicle speed display content than the other vehicle information display content as the display priority; Aoki - [0051]: In the next step 106, the control unit 46 determines whether the wearer of the AR glasses 12 is a driver seated in the driver's seat. The determination in step 106 can be performed, for example, by determining whether an image region corresponding to a steering wheel, a meter, or the like exists at a predetermined position in the peripheral captured image acquired in step 100. When the determination in step 106 is affirmative, the process proceeds to step 108; [0060]: On the other hand, in step 106, when the wearer of the AR glasses 12 is the fellow passenger, the determination in step 106 is denied and the process proceeds to step 118; Aoki - [0063]: As a result, the AR glasses 12 of the fellow passenger is in a state in which building information display content 82, other vehicle information display content 84, and the like are displayed as the virtual images, as shown in FIG. 8 as an example. As is clear from comparing FIG. 8 with FIG. 7, in the present embodiment, the position, the number, and the type of the virtual image (content) to be displayed are set by the AR glasses 12 of the driver and the AR glasses 12 of the fellow passenger are made to be different), and wherein, when the user is classified as the vehicle driver, an amount or prominence of the non-safety-relevant information is reduced (Aoki - for a driver, no information is displayed in the display restriction area where the object such as a road, another vehicle, a pedestrian, etc. 
is present, and generated content specific for the driver (such as messages “watch out for pedestrians”, vehicle speed display content, etc.) and other shared information is displayed outside the display distraction area and thereby reducing their prominence as shown in fig. 7; Aoki - [0044]: when a distance between the own vehicle and the pedestrian or the other vehicle detected by the peripheral monitoring sensor included in the in-vehicle sensor group 52 is equal to or less than a predetermined value, the content providing device 58 may generate content such as “watch out for pedestrians!” as content for the driver, for example; Aoki - [0045]: for example, when the own vehicle is approaching a point for turning at an intersection in a route to a destination on which the own vehicle is traveling, the content providing device 58 may generate content such as “right turn ahead!” as content for the driver, for example. Further, for example, when the own vehicle is to arrive at the destination soon, the content providing device 58 may generate content such as “arriving at destination soon!” as content for the driver, for example; Aoki - [0052]: In step 108, with respect to the content providing device 58 of the vehicle-side system 50, the control unit 46 requests the content for the driver and the content to be shared to be displayed on the AR glasses 12 of the driver and acquires the information of the content from the content providing device 58; Aoki - [0054]: As shown in FIG. 7, the display restriction area 74 is set to be an area in which an object (for example, a road, another vehicle, a pedestrian, etc.) that the driver frequently sees during driving is present, within the range of the driver's actual field of view. Thus, by setting the display position of the content outside the display restriction area 74, vehicle speed display content 76, building information display content 78, and other vehicle information display content 80 shown in FIG. 7 as an example can be suppressed from being a hinderance of driving by the driver; Aoki - [0059]: As a result, as shown in FIG. 7, the AR glasses 12 of the driver is in a display state in which the vehicle speed display content 76, the building information display content 78, the other vehicle information display content 80, and the like are displayed as the virtual images), and when the user is classified as the passenger, the amount or prominence of the non-safety-relevant information is rendered without the reduction (Aoki - building information display content and other vehicle information display content is functionally analogous to non-safety-relevant information; for a passenger, other information can be displayed in any area along with displaying the objects such as a road, another vehicle, a pedestrian, etc. without reducing their prominence as shown in fig. 8; Aoki - [0060]: In step 118, with respect to the content providing device 58 of the vehicle-side system 50, the control unit 46 requests the content for the fellow passenger and the content to be shared to be displayed on the AR glasses 12 of the fellow passenger and acquires the information of the content from the content providing device 58; Aoki - [0061]: In step 120, the control unit 46 extracts one piece of content information from the content information acquired in step 118. In step 122, the control unit 46 causes the display unit 26 to display the content from which the information has been extracted in step 110 on the AR glasses 12 as the virtual image. 
In the AR glasses 12 for the fellow passenger, since the wearer (fellow passenger) does not drive the vehicle, the display restriction area 74 is not set in the present embodiment, and the display position of the content is set independently of the display restriction area 74 and the display of the content is not simplified). Regarding claim 24, the combination of Doken, Aoki and Momeyer teaches the method according to claim 1, wherein the generating includes the selecting of the degree to which non-safety-relevant information (Momeyer - additional data, fig. 2B; Momeyer - [0005]: FIG. 2B illustrates an advertisement similar to that shown in FIG. 2A, but with additional information that may be provided if the user's attention level is high or if there is a low level of motion of the mobile platform) is displayed together with safety-relevant information (Momeyer - advertisement, fig. 2A; Momeyer - [0004]: FIG. 2A illustrates an advertisement that may be provided if the user's attention level is low or if there is a high level of motion of the mobile platform) based on the alertness of the user (Momeyer - attention level, [0004] and [0005]) as determined from the second data (Momeyer - [0010]: the content of advertisements appearing on the mobile platform may be altered to be suitable for the motion of the mobile platform and/or attention level of the user. For example, a low level of motion or a higher attention level of the user suggests that the user is more likely to pay attention to an advertisement and can absorb additional information in the advertisement. Conversely, a high level of motion or a lower attention level of the user indicates that the user is less likely to pay attention to an advertisement and will absorb less information; Momeyer - [0018]: Further data from biorhythm sensors and galvanic skin response sensors may be collected and used to determine the users emotional state and mood, as advertisement content is better absorbed and retained when the subject in an emotional state (positive or negative); Momeyer - [0020]: The activity of the user may be determined using the collected sensor data. For example, using GPS data, it can be determined if the mobile platform 100 is relatively stationary or moving in a car. Similarly, motion sensors, such as accelerometers, gyroscopes, and compass, can indicate whether the user is walking or stationary, standing or seated, as well whether the user is reading the device (in which case the mobile platform 100 would be held relatively stationary) or taking quick glances at the device, indicated by quick movement of the device to a reading position and then return to the original position. Additionally, proximity sensors, which may be a light sensor, capacitive sensors, resistive sensors, etc. may be used to determine if the user is talking into the mobile platform 100 with the mobile platform 100 held up to the user's head, where the user cannot see the screen; or to determine if the mobile platform 100 is in a purse or pocket. An ambient light detector may be used, e.g., to determine whether it is daytime/nighttime or if the mobile platform is inside/outside, which may be used adjust the brightness or colors of the advertisement. A higher state level indicates that the device context is one in which the user will be more likely to pay attention to an advertisement.
Of course, other or additional activities may be associated with different states (S); Momeyer - [0022]: The mobile platform 100 may provide the sensor data to the server 150 via network 108 and the server 150 may then determine the attention level A. Alternatively, the mobile platform 100 may process the sensor data to determine the attention level A, and simply provide the attention level A to the server 150. Based on the attention level A, the server 150 may provide an advertisement to the mobile platform 100). Regarding claim 25, the combination of Doken, Aoki and Momeyer teaches the method of claim 24, wherein the alertness of the user includes at least one of a degree of attention and a degree of fatigue, and wherein reduced attention or increased fatigue causes the non-safety-relevant information to be limited or deemphasized (Momeyer - advertisement, fig. 2A; Momeyer - [0004]: FIG. 2A illustrates an advertisement (without additional information) that may be provided if the user's attention level is low or if there is a high level of motion of the mobile platform) and the safety-relevant information to be prominently highlighted (Momeyer - additional data, fig. 2B; Momeyer - [0005]: FIG. 2B illustrates an advertisement similar to that shown in FIG. 2A, but with additional information that may be provided if the user's attention level is high or if there is a low level of motion of the mobile platform; as shown in fig. 2B, the advertisement is prominently highlighted compared to the additional information). Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doken, in view of Aoki, in view of Momeyer, and further in view of Cui et al. (US 2010/0253688, hereinafter Cui). Regarding claim 26, the combination of Doken, Aoki and Momeyer does not explicitly teach the method of claim 24, wherein determining the alertness of the user comprises detecting an instantaneous gaze direction, and, when the gaze direction indicates that the attention is not directed to the object, the generating includes increasing a prominence of the safety-relevant information.
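For orientation before the Cui mapping that follows, the gaze-dependent escalation recited in claim 26 can be sketched minimally. The sketch below is hypothetical Python added for this report; the function names, the attention-cone threshold, and the prominence values are assumptions, not drawn from Cui, the other references, or the application:

```python
# Hypothetical sketch of claim 26's gaze-dependent prominence escalation
# (editor's illustration only; not code from Cui or the application).
import math


def angle_between(gaze_dir: tuple[float, float], to_object: tuple[float, float]) -> float:
    """Angle in degrees between the instantaneous gaze direction and the
    direction from the user toward the hazardous object."""
    dot = gaze_dir[0] * to_object[0] + gaze_dir[1] * to_object[1]
    norm = math.hypot(*gaze_dir) * math.hypot(*to_object)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


def prominence_for_warning(gaze_dir, to_object, base_prominence: float = 0.5,
                           attention_cone_deg: float = 20.0) -> float:
    """If the gaze is not directed toward the object (outside the attention
    cone), boost the prominence of the safety-relevant warning, e.g., a larger
    highlight box or a flashing arrow near the gaze location."""
    if angle_between(gaze_dir, to_object) > attention_cone_deg:
        return min(1.0, base_prominence * 2.0)
    return base_prominence


print(prominence_for_warning((1.0, 0.0), (0.0, 1.0)))  # looking away -> boosted to 1.0
print(prominence_for_warning((1.0, 0.0), (1.0, 0.1)))  # looking at the object -> 0.5
```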
Cui teaches determining the alertness of the user comprises detecting an instantaneous gaze direction ([0172]: when the operator's gaze is elsewhere; [0183]: An operator's gaze location 252 is depicted, describing a point where the operator's eyes are apparently focused, for example, as a result of focusing upon distraction sign 254), and, when the gaze direction indicates that the attention is not directed to the object (user is looking at the distraction sign 254 and not at the vehicle 208; [0183]: In order to bring the operator's attention from the area of distraction sign 254 to the critical information of vehicle 208, a textual alert and accompanying arrow are displayed proximate to the operator's gaze location), the generating includes increasing a prominence of the safety-relevant information ([0172]: If a vehicle is backing out of a space to the left side of the visible field of view and is determined to be on a potential collision course with the host vehicle, and the operator's gaze is determined to be toward the right side of the visible field of view, a box can be placed around the offending vehicle, and a flashing arrow can be placed at the point of the operator's gaze, prompting the operator's attention to the box; [0183]: Indicating the identification of vehicle 208 as a threat, a vehicle indicator box 256 is displayed around vehicle 208 including a directional arrow indicating a relevant piece of information, such as direction of travel of the vehicle. Additionally, text 259 is displayed describing the threat condition). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Cui’s knowledge of increasing prominence of a critical information when the user’s attention is elsewhere as taught and modify the process of Doken, Aoki and Momeyer because such as process can quickly draw the user’s attention to the critical information ([0172] and [0183]). Claim(s) 28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doken, in view of Aoki, in view of Momeyer, and further in view of Brannstrom et al. (EP 3575175, hereinafter Brannstrom). Regarding claim 28, the combination of Doken, Aoki and Momeyer does not explicitly teach the method of claim 24, wherein, when the object is obscured and thus not perceptible to the user while the user's alertness is below a threshold, the AR device outputs an acoustic indication to the user to direct attention to the object. 
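Similarly, before the Brannstrom mapping below, the behavior recited in claim 28 (an acoustic indication when the object is obscured and the user's alertness is below a threshold) can be sketched. The sketch is hypothetical Python written for this report; the data structure, threshold value, and modality labels are assumptions, not code from Brannstrom or the application:

```python
# Hypothetical sketch of the claim 28 behavior (editor's illustration only;
# not code from Brannstrom or the application): fall back to an acoustic cue
# when the object is occluded and the user's alertness is below a threshold.
from dataclasses import dataclass


@dataclass
class HazardObject:
    label: str
    occluded: bool  # e.g., hidden behind another object, outside the line of sight


def choose_modality(obj: HazardObject, alertness: float,
                    alertness_threshold: float = 0.4) -> str:
    """Pick the output modality for a safety warning about `obj`."""
    if obj.occluded and alertness < alertness_threshold:
        return "acoustic"  # the user cannot see the object and is not alert enough
    return "visual"


print(choose_modality(HazardObject("forklift behind shelving", True), 0.2))  # acoustic
print(choose_modality(HazardObject("puddle in view", False), 0.2))           # visual
```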
Brannstrom teaches when the object is obscured (threat is occluded in an obscured area) and thus not perceptible to the user while the user's alertness is below a threshold (current level of attention of driver is below the set level of attention of the host-vehicle driver), the AR device outputs an acoustic indication to the user to direct attention to the object ([0018]: apply at least one hypothesis based on the determined other road users and the at least one particular feature, the at least one hypothesis related to at least one hypothetical threat that is occluded in an obscured area; set or estimate, based on the applied at least one hypothesis, a host-vehicle driver level of attention required to handle the at least one hypothetical threat and estimate a time until that host-vehicle driver level of attention will be required; derive, from one or more driver-monitoring sensors of the host-vehicle, a current host-vehicle driver level of attention; determine if the set or estimated required host-vehicle driver level of attention exceeds the current host-vehicle driver level of attention and if the estimated time until the estimated host-vehicle driver level of attention will be required is less than a threshold-time, and if so determined produce at least one of visual, acoustic and haptic information to a host-vehicle driver environment). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Brannstrom’s knowledge of providing acoustic information to the host-vehicle driver environment as taught and modify the process of Doken, Aoki and Momeyer because such a process uses acoustic information to notify the driver of a hypothetical threat and enhancing safety ([0018]). Claim(s) 19, 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doken, in view of Aoki, in view of Momeyer, and further in view of Menig et al. (2001/0012976, hereinafter Menig). Regarding claim 19, the combination of Doken, Aoki and Momeyer does not explicitly teach the method according to claim 1, wherein the generating includes the selecting of the degree to which non-safety-relevant information is displayed together with safety-relevant information based on the metric of urgency of the safety-relevant information as determined from the first data and/or the second data. Menig teaches wherein the generating includes the selecting of the degree to which non-safety-relevant information (fig. 9 default screen 910; the default screen displays the short term average fuel economy 912, a bar graph representing changes in fuel economy 914, and the odometer reading 916, fig. 9 and [0081]) is displayed together with safety-relevant information (collision detection warning messages displayed by object detection indicator 906 (triangle 906), fig. 9) based on the metric of urgency of the safety-relevant information (priority of collision detection warning messages is based on the distance between the object and the user’s truck compared to predetermined values for each stage of distance alerts) as determined from the first data and/or the second data (as shown in fig. 
9, the collision warning system detects an object within a predetermined distance (e.g., 350 feet) from the front of the user’s truck and determines that the object does not represent a significant threat of collision, then a default screen 910 is displayed showing the object detection indicator triangle 906 combined with the displaying of average fuel economy 912, a bar graph representing changes in fuel economy 914, and the odometer reading 916; when the collision warning system detects the distance between the object and the user’s truck decreases lower than a certain predetermined value associated with each stage, the collision warning system determines a risk of collision and displays the object detection indicator triangle 906 without displaying the other information as shown in screens 900, 902 and 904 of fig. 9; fig. 9, where the size of triangle indicator 906 becomes larger with decreasing distance between the object and the user’s truck; [0033]: A programmed CPU on the CWS ECU 108 receives information about nearby objects from the front sensor 140 and side sensor 142, computes collision warning conditions, and communicates warnings to the ICU 100. Based on information from the front sensor 140, the CWS ECU 108 measures the range, distance, closing speed, and relative speed to vehicles and other objects in its field of view; [0080]: As the closing distance between the truck and the vehicle in front of it decreases, the message center displays progressively stronger visual warnings and generates corresponding auditory warnings. For example, the top three visual indicators 900-904 shown in FIG. 9 illustrate the display screen of the message center for first, second and third stage distance alerts from the collision warning system. As the closing time between the truck and the obstacle reaches predetermined values associated with each stage, the message center displays a progressively larger triangle and the words, "DANGER AHEAD." The message center also displays the large triangle alert 904 in response to warning messages associated with the detection of a stationary or slow moving object; [0081]: When the collision warning system detects an object within a predetermined distance (e.g., 350 feet), but this object does not represent a significant threat of collision, the message center displays a small triangle 906 in the default screen 910 of the message center. In other words, the visual indicator of the detection does not overwrite the current default screen, but instead is combined with it. In the example shown in FIG. 9, the default screen displays the short term average fuel economy 912, a bar graph representing changes in fuel economy 914, and the odometer reading 916. This default screen is merely one example of the type of normal operating condition data that may be displayed with the object detection indicator 906; [0087]: the message center integrates messages from a variety of different vehicle systems using a prioritization scheme. It also uses a prioritization scheme to integrate the messages from the collision warning system. In the current implementation, the priority rules for integrating collision warning messages are as follows. The warning messages for a stationary object, slow moving object, and the shortest monitored following distance (one second) are assigned the highest priority and override level 1 danger alerts. As such, the immediate external threat takes precedence over in-vehicle dangers. 
The level 1 danger alerts have the next highest priority and override collision warning alerts for following distances of two and three seconds and the creep alarm. The rational for ranking level 1 danger alerts ahead of these collision alerts is that severe in-vehicle dangers take precedence over less immediate external threats. Level two warning messages and level three caution messages may override collision warnings for two and three second following distances if those collision warnings have been displayed for at least fifteen seconds. The rationale is that the driver has most probably chosen a particular distance to the vehicle ahead and intends not to change the following distance. In this case, the driver is aware of the situation and a level two or level three message override the collision warning conditions; [0146]: The message center displays the set speed along with the "danger ahead" message as shown in screen 1212. However, as the urgency of the "danger ahead" message increases, the ICU removes the set speed and displays a larger triangle to emphasize the increase in danger as shown in screen 1214; [0147]: the message center displays progressively more intense warning messages such as the ones shown in screens 1220 and 1222 in FIG. 12 when the collision warning system detects that the following distance has fallen below predetermined thresholds such as headway values of one and two seconds). The combination of Doken, Aoki and Momeyer contains a “base” process of displaying safety-relevant and non-safety-relevant information to the user based on the attention level of the user and a role of the user, of which the claimed invention can be seen as an “improvement” in that the degree to which the non-safety-relevant information is displayed together with the safety-relevant information is based on the metric of urgency of the safety-relevant information as determined from the first data and/or second data. Menig contains a known technique of displaying the non-safety-relevant information together with the safety-relevant information based on the priority of the safety-relevant information as determined from the first data and/or second data (fig. 9, [0033], [0080], [0081], [0087], [0146], [0147]) that is applicable to the “base” process. Menig’s known technique of displaying the non-safety-relevant information together with the safety-relevant information based on the priority of the safety-relevant information as determined from the first data and/or second data (fig. 9, [0033], [0080], [0081], [0087], [0146], [0147]) would have been recognized by one skilled in the art as applicable to the “base” process of the combination of Doken, Aoki and Momeyer, and the results would have been predictable and resulted in quickly conveying information about collision threats to the driver to enhance his/her safety, which results in an improved process. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention. Regarding claim 21, the combination of Doken, Aoki, Momeyer and Menig teaches the method of claim 19, wherein selecting the degree includes spatially restricting non-safety-relevant information to subareas of the field of vision outside an object visibility area in which the user can visually detect the object (Doken - fig. 3A shows the buy option 308 is displayed in the field of view of the user where the rug is not present; Doken - [0051]: FIGS.
3A-3C show illustrative notifications comprising a recommended corrective action, in accordance with some embodiments of this disclosure. As shown in FIG. 3A, the SHMS may cause user device 105 to generate notification 302 for presentation. Notification 302 may comprise a recommended corrective action, e.g., “Secure loose rugs with double-faced tape or slip-resistant backing,” associated with object 304. Object 304 may be a rug present in an environment of a user, and notification 302 may be provided by the SHMS in response to a user performing a walkthrough of an environment containing object 304 and/or capturing an image or otherwise determining the presence of object 304 and/or monitoring actions of users in the environment, e.g., determining that a user almost slipped on rug 304. In some embodiments, object 304 depicted in notification 302 may match the appearance of the object within the environment and may be augmented with object 306 (e.g., a slip-resistant backing for object 304) that is not currently present in the environment. In some embodiments, notification 302 may be provided in response to determining that the potentially hazardous condition is not proximate to the user or in a field of view of the user (e.g., object 304 is no longer detected in a camera feed). In some embodiments, notification 302 may comprise a buy option 308, which when selected may enable the user to navigate to a website or application that enables the user to purchase object 306 associated with the corrective action) and/or in which no safety-relevant information is represented (Doken - fig. 3A shows the safety information notification 302 is displayed in a different region of the display than the buy option 308 is displayed). Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doken, in view of Aoki, in view of Momeyer, in view of Menig, and further in view of Grante (US 2025/0362658). Regarding claim 20, the combination of Doken, Aoki, Momeyer and Menig teaches the method of claim 19, wherein the metric of urgency includes at least one of a probability of a collision (Menig - [0007]: The collision warning alerts use a combination of visual and auditory warnings that grow progressively more intense as the degree of danger of a collision danger increases; Menig - threat of collision, [0081]), a time-to-collision (warning description based on 1, 2 or 3 second following distance between the vehicle in front and the user’s truck), a distance to the object (Menig - the CWS ECU 108 measures the range, distance, closing speed, and relative speed to vehicles and other objects in its field of view, [0033]; Menig - detecting an object within or less than a predetermined distance, [0080], [0081]), and a relative velocity between the user and the object (Menig - the CWS ECU 108 measures the range, distance, closing speed, and relative speed to vehicles and other objects in its field of view, [0033]). The combination of Doken, Aoki, Momeyer and Menig does not explicitly teach the metric of urgency includes a probability of a particular degree of severity of a consequence of an accident. 
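Before the Grante mapping that follows, the kind of composite urgency metric recited in claim 20 (collision probability, time-to-collision, distance, relative velocity, and a probability of a severe consequence) can be sketched. The sketch is hypothetical Python added for this report; the weights, scaling constants, and function name are assumptions, not drawn from Menig, Grante, or the application:

```python
# Hypothetical sketch of a composite urgency metric of the kind recited in
# claim 20 (editor's illustration; weights and names are assumptions, not
# drawn from Menig, Grante, or the application).
def urgency(p_collision: float, time_to_collision_s: float, distance_m: float,
            rel_velocity_mps: float, p_severe_outcome: float) -> float:
    """Combine the claim 20 factors into a single 0..1 urgency score: shorter
    time-to-collision and distance, higher closing speed, and a higher
    probability of a severe consequence all increase urgency."""
    time_term = 1.0 / (1.0 + time_to_collision_s)
    dist_term = 1.0 / (1.0 + distance_m / 10.0)
    speed_term = min(1.0, rel_velocity_mps / 30.0)
    raw = 0.35 * p_collision + 0.25 * time_term + 0.15 * dist_term \
        + 0.10 * speed_term + 0.15 * p_severe_outcome
    return min(1.0, raw)


print(round(urgency(0.8, 1.5, 12.0, 15.0, 0.6), 2))  # imminent, severe -> high score
print(round(urgency(0.1, 8.0, 90.0, 3.0, 0.1), 2))   # distant, mild -> low score
```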
Grante teaches the metric of urgency includes a probability of a particular degree of severity of a consequence of an accident ([0042]: the electronic device 200 further is associated to and/or includes audio circuitry and one or more speakers, a display unit and/or one or more tactile feedback generating device (not shown) to enable generating user feedback (e.g., warning signals to alert a user about a dangerous condition, such as for example alerting the first actor 10 (e.g., worker) illustrated in FIG. 1B about that the second actor 20 (e.g., vehicle) is in relative close proximity of the first actor); [0048]: the device 300 could trigger one or more safety action causing instructions to be sent to the electronic devices being associated to the actors between which a relative close proximity has been determined and wherein the instructions corresponding to the one or more safety action, when received can cause performing of an action (e.g., an action such as a warning signal and/or an emergency stop action) at the devices having received the instructions. In case instructions corresponding to an emergency action is sent from the device 300 and received at a respective electronic device (e.g., an electronic device 22 associated to a machine/vehicle 20, wherein the electronic device is configured as described for the device 200) the respective electronic device could send instructions to an electronic control unit of a machine/vehicle represented by the actor (e.g., actor 20 as illustrated in any of FIGS. 1B-1C) by means of the electronic device being configured communicatively coupled to the electronic control unit; [0137]: actor movement envelopes associated to different movement safety performance levels can be selected for use depending on predetermined assessment of a degree of severity for a consequence resulting from a particular conflict situation (e.g., impact of collision between mobile machines and/or impact of collision between a worker and a mobile machine)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Grante’s knowledge of using degree of severity of a consequence of an accident to select safety performance level as taught and modify the process of the combination of Doken, Aoki, Momeyer and Menig because such a process manages safety of the one or more actors present within an environment, achieving a high safety integrity level that can be proven to comply with applicable safety integrity level standards and enabling to automatically generate one or more safety actions with reduced risk of nuisance actions being generated while maintaining high safety standards ([0005], [0007]). Claim(s) 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doken, in view of Aoki, in view of Momeyer, in view of Menig, and further in view of Yamada (US 2008/0211647). Regarding claim 22, the combination of Doken, Aoki, Momeyer and Menig teaches the method of claim 19, wherein, for medium danger levels an amount or a prominence of the non-safety-relevant information that is rendered in the display is reduced (Menig - before the detection of the object within a predetermined distance of the user’s truck, the default screen displays the short term average fuel economy 912, a bar graph representing changes in fuel economy 914, and the odometer reading 916; further, as shown in fig. 
9, when the collision warning system detects an object within a predetermined distance (e.g., 350 feet) from the front of the user’s truck and determines that the object does not represent a significant threat of collision, then a default screen 910 is displayed showing the object detection indicator triangle 906 combined with the displaying of average fuel economy 912, a bar graph representing changes in fuel economy 914, and the odometer reading 916; when the triangle indicator 906 is displayed along with the other information, the prominence of other information is reduced compared to the case when no triangle indicator 906 was displayed before the detection of the object within a predetermined distance; Menig - [0081]: When the collision warning system detects an object within a predetermined distance (e.g., 350 feet), but this object does not represent a significant threat of collision, the message center displays a small triangle 906 in the default screen 910 of the message center. In other words, the visual indicator of the detection does not overwrite the current default screen, but instead is combined with it. In the example shown in FIG. 9, the default screen displays the short term average fuel economy 912, a bar graph representing changes in fuel economy 914, and the odometer reading 916. This default screen is merely one example of the type of normal operating condition data that may be displayed with the object detection indicator 906.), and for high danger levels, the non-safety-relevant information is hidden from the display (Menig - when the collision warning system detects the distance between the object and the user’s truck decreases lower than a certain predetermined value associated with each stage, the collision warning system determines a risk of collision and displays the object detection indicator triangle 906 without displaying the other information as shown in screens 900, 902 and 904 of fig. 9; Menig – [0080]: FIG. 9 is diagram illustrating an implementation of the visual indicators for the collision warning system integrated into the message center. The current implementation of the message center displays five different visual indicators 900-908. As the closing distance between the truck and the vehicle in front of it decreases, the message center displays progressively stronger visual warnings and generates corresponding auditory warnings. For example, the top three visual indicators 900-904 shown in FIG. 9 illustrate the display screen of the message center for first, second and third stage distance alerts from the collision warning system. As the closing time between the truck and the obstacle reaches predetermined values associated with each stage, the message center displays a progressively larger triangle and the words, "DANGER AHEAD." The message center also displays the large triangle alert 904 in response to warning messages associated with the detection of a stationary or slow moving object.). The combination of Doken, Aoki, Momeyer and Menig does not explicitly teach for low danger levels, non-safety-relevant information and safety-relevant information are output equally. Yamada teaches for low danger levels, non-safety-relevant information and safety-relevant information are output equally (fig. 
3 shows that the warning message displayed when a low-risk situation is determined is presented in approximately the same area as the other information that is presented; [0041]: Similarly, in the case of low oil or oil deterioration, the monitoring control unit 14 first determines the situation to be a low-risk situation, starts monitoring the travel distance or the engine status; [0042]: In the case of a display mode for a low-risk situation, a warning pops up over other information being displayed on the display screen 21. However, it is possible for a passenger to cancel the warning display from the display screen 21. The warning is not redisplayed thereafter; [0045]: FIG. 3 is an exemplary diagram for explaining a warning that pops up over other information being displayed on a display screen. The display screen in FIG. 3 is originally shown to be displaying a music playback screen as selected by the passengers. However, when, e.g., the vehicle fuel falls below a specific amount (a low fuel situation), a warning pops up over the music playback screen. That is, the display of the music playback screen is interrupted by the warning display; [0046]: The warning informs the passengers that the vehicle is on low fuel as well as projects a possible travel distance in the current low fuel situation. Moreover, if the situation is determined to be a low-risk situation or a moderate-risk situation, an OK button is provided on the warning such that the passengers can touch the OK button to at least temporarily cancel the display after reading the warning). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Yamada’s knowledge of displaying warning messages in a low-risk situation in an area equal to that of the other displayed information as taught and modify the process of the combination of Doken, Aoki, Momeyer and Menig because such a system informs the driver of an appropriate warning according to a risk level of a problematic situation of the vehicle, thereby contributing to driving safety ([0002] and [0060]). Claim(s) 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doken, in view of Aoki, in view of Momeyer, in view of Menig, in view of Yamaguchi (US 2024/0001843), and further in view of Perdigon Rodriguez et al. (US 2017/0106277, hereinafter Rodriguez). 
Regarding claim 23, the combination of Doken, Aoki, Momeyer and Menig teaches the method of claim 19, wherein increasing urgency increases at least one of: (ii) a visual marking (highlighting) including bordering or shading in color of the object (Doken – [0003]: computing techniques may be used to equip emergency personnel (i.e., firefighters) with a thermal camera and augmented reality device that highlights objects in the field of view of the firefighter, such as in a smoke-filled building that the firefighter enters in a rescue effort), (iii) an acoustic indication including an indicative tone or a spoken cue (Menig - [0007]: The collision warning alerts use a combination of visual and auditory warnings that grow progressively more intense as the degree of danger of a collision danger increases), and (iv) a haptic indication (Doken – [0100], [0102], [0108]: in a high severity hazard scenario, the SHMS may additionally or alternatively take control of other connected devices in the environment having display and/or audio and/or haptic capabilities to enhance the warning, e.g., transmitting the audio warning via wireless or wired speakers, sound bars, smart assistant devices or any other suitable device or combination thereof, or pausing video sessions on display (connected TV, phone, tablet or any other suitable device or combination thereof) to broadcast the warning to a broader reach of users). The combination of Doken, Aoki, Momeyer and Menig does not explicitly teach increasing urgency includes an area of the field of vision reserved for safety-relevant information, and a haptic indication including a vibration pulse. Yamaguchi teaches increasing urgency includes an area (location in the display area 8) of the field of vision (forward field of view of the driver) reserved for safety-relevant (warning) information ([0050]: Specifically, as shown in FIG. 5, the display controller 30 determines, as the warning position, a location in the display area 8 that substantially overlaps the position of the warning object 50 in the forward field of view of the driver 2 in the left-right direction, and displays the warning indicator 51 at the warning position). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Yamaguchi’s knowledge of using a specific location in the field of view of the driver to display the warning indicator as taught and modify the process of the combination of Doken, Aoki, Momeyer and Menig because such a process encourages a viewer such as a vehicle driver to look ahead when an object to be notified to the viewer is detected in front of the vehicle ([0002]). Rodriguez teaches a haptic indication including a vibration pulse ([0036]: The communication control unit 16 may also allow the central control unit 20 to be connected to at least one impulse generator unit or feedback unit 19. The feedback unit 19 may be, for example, haptic sensors or motors that allow generation of a vibrating pulse that produces feedback from the system to the operator). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Rodriguez’ knowledge of using a haptic sensor that generates a vibrating pulse as taught and modify the process of the combination of Doken, Aoki, Momeyer and Menig because such a process gives the operator a sense of immersion ([0036]). 
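Taken together, the claim 22 and claim 23 mappings above describe a sliding scale from a low danger level (safety and non-safety content output equally) to a high danger level (non-safety content hidden and warning cues intensified). A minimal sketch of that behaviour follows; the thresholds and output field names are invented for illustration and are not drawn from the claims or the references.

```python
def display_policy(urgency: float) -> dict:
    """Map a 0..1 urgency value to display and cue settings (hypothetical mapping)."""
    u = min(max(urgency, 0.0), 1.0)
    if u < 0.3:                       # low danger level
        non_safety_scale = 1.0        # non-safety and safety content output equally
    elif u < 0.7:                     # medium danger level
        non_safety_scale = 0.5        # amount/prominence of non-safety content reduced
    else:                             # high danger level
        non_safety_scale = 0.0        # non-safety content hidden from the display
    return {
        "non_safety_scale": non_safety_scale,
        "border_px": int(1 + 4 * u),                         # thicker bordering/shading of the object
        "audio_gain": 0.2 + 0.8 * u,                         # progressively more intense tone or spoken cue
        "haptic_pulses": 0 if u < 0.3 else int(1 + 3 * u),   # vibration pulses
    }
```

Under this sketch, display_policy(0.2) leaves non-safety content untouched, while display_policy(0.9) removes it and drives the visual, acoustic and haptic channels toward their maxima.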
Allowable Subject Matter Claim 27 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Regarding claim 27, none of the cited prior art references of record teach either individually or in combination “… the selecting of the degree is subject to a tolerance range in which the AR device is operated without prioritization between safety-relevant and non-safety-relevant information, the tolerance range being user-definable.” Response to Arguments In response to the arguments presented on pages 11-13 and the amendment to the claims, the rejection of claims 1-3 and 11-16 under 35 USC 101 has been withdrawn. Applicant’s arguments with respect to claim(s) 1, 14 and 16 have been considered but are moot because the new ground of rejection does not rely on the same combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Response to the argument that Doken fails to disclose a process in which the balance of multiple categories of displayed information is varied as a function of user-role classification. See pages 9-10 of Applicant’s Remarks filed on 12/04/2025. Doken in view of Aoki teaches the above limitation. Especially, Aoki teaches the generating includes: (I) selecting a particular balance between the display of the safety-relevant information (for a driver, no information is displayed in the display restriction area where an object such as a road, another vehicle, a pedestrian, etc. is present, and generated content specific for the driver (such as messages “watch out for pedestrians”, vehicle speed display content, etc.) and other shared information is displayed outside the display restriction area; [0044]: when a distance between the own vehicle and the pedestrian or the other vehicle detected by the peripheral monitoring sensor included in the in-vehicle sensor group 52 is equal to or less than a predetermined value, the content providing device 58 may generate content such as “watch out for pedestrians!” as content for the driver, for example; [0045]: for example, when the own vehicle is approaching a point for turning at an intersection in a route to a destination on which the own vehicle is traveling, the content providing device 58 may generate content such as “right turn ahead!” as content for the driver, for example. Further, for example, when the own vehicle is to arrive at the destination soon, the content providing device 58 may generate content such as “arriving at destination soon!” as content for the driver, for example; [0052]: In step 108, with respect to the content providing device 58 of the vehicle-side system 50, the control unit 46 requests the content for the driver and the content to be shared to be displayed on the AR glasses 12 of the driver and acquires the information of the content from the content providing device 58; [0054]: As shown in FIG. 7, the display restriction area 74 is set to be an area in which an object (for example, a road, another vehicle, a pedestrian, etc.) that the driver frequently sees during driving is present, within the range of the driver's actual field of view. Thus, by setting the display position of the content outside the display restriction area 74, vehicle speed display content 76, building information display content 78, and other vehicle information display content 80 shown in FIG. 
7 as an example can be suppressed from being a hinderance of driving by the driver; [0059]: As a result, as shown in FIG. 7, the AR glasses 12 of the driver is in a display state in which the vehicle speed display content 76, the building information display content 78, the other vehicle information display content 80, and the like are displayed as the virtual images) and the display of the non-safety-relevant information (building information display content and other vehicle information display content is functionally analogous to non-safety-relevant information; for a passenger, other information can be displayed in any area along with displaying the objects such as a road, another vehicle, a pedestrian, etc.; [0060]: In step 118, with respect to the content providing device 58 of the vehicle-side system 50, the control unit 46 requests the content for the fellow passenger and the content to be shared to be displayed on the AR glasses 12 of the fellow passenger and acquires the information of the content from the content providing device 58; [0061]: In step 120, the control unit 46 extracts one piece of content information from the content information acquired in step 118. In step 122, the control unit 46 causes the display unit 26 to display the content from which the information has been extracted in step 110 on the AR glasses 12 as the virtual image. In the AR glasses 12 for the fellow passenger, since the wearer (fellow passenger) does not drive the vehicle, the display restriction area 74 is not set in the present embodiment, and the display position of the content is set independently of the display restriction area 74 and the display of the content is not simplified) based on a classification of a role of the user as determined from the second data (the wearer of the AR glasses seated in a vehicle is determined to be a driver or the passenger by determining whether an image region corresponding to a steering wheel, a meter, or the like exists at a predetermined position in the peripheral captured image acquired in step 100; fig. 5 step 106; [0040]: The content providing device 58 generates content to be displayed as a virtual image on the AR glasses 12 based on the information collected from the in-vehicle sensor group 52 and the like. In addition, the content providing device 58 sets the purpose (whether it is for the driver, for the fellow passenger, or for sharing) for the generated content. The content for the driver is the content to be displayed only on the AR glasses 12 worn by the driver. The content for the fellow passenger is the content to be displayed only on the AR glasses 12 worn by the fellow passenger. The content to be shared is content to be displayed on the AR glasses 12 worn by the driver and the AR glasses 12 worn by the fellow passenger; [0041]: To give an example of the content generated by the content providing device 58, the content providing device 58 acquires the vehicle speed from the vehicle speed sensor included in the in-vehicle sensor group 52, and generates the vehicle speed content that displays the acquired vehicle speed as the content for the driver. 
Further, since the vehicle speed is highly important information for the driver, the content providing device 58 sets a relatively high priority as the display priority for the vehicle speed display content; [0042]: Further, for example, the content providing device 58 acquires the current position and orientation of the own vehicle from the GNSS sensor included in the in-vehicle sensor group 52, and collates the acquired current position and orientation with map information to identify information on buildings present in front of the own vehicle. Then, the content providing device 58 generates the building information display content that displays the information of the identified building as, for example, content to be shared. Since the information on the building present in front of the own vehicle is less important to the driver than the vehicle speed, the content providing device 58 sets lower priority of the vehicle speed display content than the building information display content as the display priority; [0043]: Further, for example, the content providing device 58 acquires information on another vehicle existing around the own vehicle from the peripheral monitoring sensor included in the in-vehicle sensor group 52, and performs pattern matching for an image of the other vehicle acquired from the camera included in the in-vehicle sensor group 52 so as to identify a vehicle name of the other vehicle existing around the own vehicle. Then, the content providing device 58 generates the other vehicle information display content that displays the vehicle name of the identified other vehicle as content to be shared, for example. Since the information on the other vehicle present around the own vehicle is less important to the driver than the vehicle speed, the content providing device 58 sets lower priority of the vehicle speed display content than the other vehicle information display content as the display priority; [0051]: In the next step 106, the control unit 46 determines whether the wearer of the AR glasses 12 is a driver seated in the driver's seat. The determination in step 106 can be performed, for example, by determining whether an image region corresponding to a steering wheel, a meter, or the like exists at a predetermined position in the peripheral captured image acquired in step 100. When the determination in step 106 is affirmative, the process proceeds to step 108; [0060]: On the other hand, in step 106, when the wearer of the AR glasses 12 is the fellow passenger, the determination in step 106 is denied and the process proceeds to step 118; [0063]: As a result, the AR glasses 12 of the fellow passenger is in a state in which building information display content 82, other vehicle information display content 84, and the like are displayed as the virtual images, as shown in FIG. 8 as an example. As is clear from comparing FIG. 8 with FIG. 7, in the present embodiment, the position, the number, and the type of the virtual image (content) to be displayed are set by the AR glasses 12 of the driver and the AR glasses 12 of the fellow passenger are made to be different). Response to the argument that Doken’s disclosure of generating an alert upon detecting danger cannot be considered to disclose the claimed sliding-scale prioritization of content according to a quantified metric of urgency. See page 10 of Applicant’s Remarks. Doken, in view of Aoki, in view of Momeyer and further in view of Menig teaches the claimed limitations. 
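The Aoki role handling cited in the preceding paragraphs reduces to a small decision: classify the wearer from the peripheral image, and for a driver keep content out of the display restriction area. The sketch below makes that concrete under stated assumptions; the boolean input stands in for Aoki's steering-wheel/meter image check, and the placement labels are invented.

```python
def classify_role(steering_wheel_in_view: bool) -> str:
    """Classify the AR-glasses wearer from second data about the cabin view.

    Aoki identifies the driver by checking whether an image region corresponding
    to a steering wheel or meter appears at a predetermined position; that check
    is abstracted here to a boolean.
    """
    return "driver" if steering_wheel_in_view else "fellow_passenger"

def placement_rule(role: str) -> str:
    """Return where non-safety content may be rendered for the given role."""
    if role == "driver":
        return "outside_display_restriction_area"   # cf. Aoki's restriction area 74
    return "unrestricted"                            # no restriction area for passengers
```

Menig's contribution to the degree selection, which decides how much of that non-safety content survives as urgency rises, is taken up next.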
Especially, Menig teaches the generating includes the selecting of the degree to which non-safety-relevant information (fig. 9 default screen 910; the default screen displays the short term average fuel economy 912, a bar graph representing changes in fuel economy 914, and the odometer reading 916, fig. 9 and [0081]) is displayed together with safety-relevant information (collision detection warning messages displayed by object detection indicator 906 (triangle 906), fig. 9) based on the metric of urgency of the safety-relevant information (priority of collision detection warning messages is based on the distance between the object and the user’s truck compared to predetermined values for each stage of distance alerts) as determined from the first data and/or the second data (as shown in fig. 9, the collision warning system detects an object within a predetermined distance (e.g., 350 feet) from the front of the user’s truck and determines that the object does not represent a significant threat of collision, then a default screen 910 is displayed showing the object detection indicator triangle 906 combined with the displaying of average fuel economy 912, a bar graph representing changes in fuel economy 914, and the odometer reading 916; when the collision warning system detects the distance between the object and the user’s truck decreases lower than a certain predetermined value associated with each stage, the collision warning system determines a risk of collision and displays the object detection indicator triangle 906 without displaying the other information as shown in screens 900, 902 and 904 of fig. 9; fig. 9, where the size of triangle indicator 906 becomes larger with decreasing distance between the object and the user’s truck; [0033]: A programmed CPU on the CWS ECU 108 receives information about nearby objects from the front sensor 140 and side sensor 142, computes collision warning conditions, and communicates warnings to the ICU 100. Based on information from the front sensor 140, the CWS ECU 108 measures the range, distance, closing speed, and relative speed to vehicles and other objects in its field of view; [0080]: As the closing distance between the truck and the vehicle in front of it decreases, the message center displays progressively stronger visual warnings and generates corresponding auditory warnings. For example, the top three visual indicators 900-904 shown in FIG. 9 illustrate the display screen of the message center for first, second and third stage distance alerts from the collision warning system. As the closing time between the truck and the obstacle reaches predetermined values associated with each stage, the message center displays a progressively larger triangle and the words, "DANGER AHEAD." The message center also displays the large triangle alert 904 in response to warning messages associated with the detection of a stationary or slow moving object; [0081]: When the collision warning system detects an object within a predetermined distance (e.g., 350 feet), but this object does not represent a significant threat of collision, the message center displays a small triangle 906 in the default screen 910 of the message center. In other words, the visual indicator of the detection does not overwrite the current default screen, but instead is combined with it. In the example shown in FIG. 9, the default screen displays the short term average fuel economy 912, a bar graph representing changes in fuel economy 914, and the odometer reading 916. 
This default screen is merely one example of the type of normal operating condition data that may be displayed with the object detection indicator 906; [0087]: the message center integrates messages from a variety of different vehicle systems using a prioritization scheme. It also uses a prioritization scheme to integrate the messages from the collision warning system. In the current implementation, the priority rules for integrating collision warning messages are as follows. The warning messages for a stationary object, slow moving object, and the shortest monitored following distance (one second) are assigned the highest priority and override level 1 danger alerts. As such, the immediate external threat takes precedence over in-vehicle dangers. The level 1 danger alerts have the next highest priority and override collision warning alerts for following distances of two and three seconds and the creep alarm. The rational for ranking level 1 danger alerts ahead of these collision alerts is that severe in-vehicle dangers take precedence over less immediate external threats. Level two warning messages and level three caution messages may override collision warnings for two and three second following distances if those collision warnings have been displayed for at least fifteen seconds. The rationale is that the driver has most probably chosen a particular distance to the vehicle ahead and intends not to change the following distance. In this case, the driver is aware of the situation and a level two or level three message override the collision warning conditions; [0146]: The message center displays the set speed along with the "danger ahead" message as shown in screen 1212. However, as the urgency of the "danger ahead" message increases, the ICU removes the set speed and displays a larger triangle to emphasize the increase in danger as shown in screen 1214; [0147]: the message center displays progressively more intense warning messages such as the ones shown in screens 1220 and 1222 in FIG. 12 when the collision warning system detects that the following distance has fallen below predetermined thresholds such as headway values of one and two seconds). Response to the arguments that Doken does not teach monitoring user alertness, gaze, or physiological state to adjust display prioritization. See page 10 of Applicant’s Remarks. Doken, in view of Aoki and further in view of Momeyer teaches the claimed limitations. Especially, Momeyer teaches selecting a degree to which non-safety-relevant information (additional data, fig. 2B; [0005]: FIG. 2B illustrates an advertisement similar to that shown in FIG. 2A, but with additional information that may be provided if the user's attention level is high or if there is a low level of motion of the mobile platform) is displayed together with safety-relevant information (advertisement, fig. 2A; [0004]: FIG. 2A illustrates an advertisement that may be provided if the user's attention level is low or if there is a high level of motion of the mobile platform) based on an alertness of the user (attention level, [0004] and [0005]) as determined from the second data ([0010]: the content of advertisements appearing on the mobile platform may be altered to be suitable for the motion of the mobile platform and/or attention level of the user. For example, a low level of motion or a higher attention level of the user suggests that the user is more likely to pay attention to an advertisement and can absorb additional information in the advertisement. 
Conversely, a high level of motion or a lower attention level of the user indicates that the user is less likely to pay attention to an advertisement and will absorb less information; [0018]: Further data from biorhythm sensors and galvanic skin response sensors may be collected and used to determine the users emotional state and mood, as advertisement content is better absorbed and retained when the subject in an emotional state (positive or negative); [0020]: The activity of the user may be determined using the collected sensor data. For example, using GPS data, it can be determined if the mobile platform 100 is relatively stationary or moving in a car. Similarly, motion sensors, such as accelerometers, gyroscopes, and compass, can indicate whether the user is walking or stationary, standing or seated, as well whether the user is reading the device (in which case the mobile platform 100 would be held relatively stationary) or taking quick glances at the device, indicated by quick movement of the device to a reading position and then return to the original position. Additionally, proximity sensors, which may be a light sensor, capacitive sensors, resistive sensors, etc. may be used to determine if the user is talking into the mobile platform 100 with the mobile platform 100 held up to the user's head, where the user cannot see the screen; or to determine if the mobile platform 100 is in a purse or pocket. An ambient light detector may be used, e.g., to determine whether it is daytime/nighttime or if the mobile platform is inside/outside, which may be used adjust the brightness or colors of the advertisement. A higher state level indicates that the device context is one in which the user will be more likely to pay attention to an advertisement. Of course, other or additional activities may be associated with different states (S); [0022]: The mobile platform 100 may provide the sensor data to the server 150 via network 108 and the server 150 may then determine the attention level A. Alternatively, the mobile platform 100 may process the sensor data to determine the attention level A, and simply provide the attention level A to the server 150. Based on the attention level A, the server 150 may provide an advertisement to the mobile platform 100). (A short sketch of this attention-level-based selection follows the Conclusion below.) In response to Applicant’s arguments that the objection to claim 15 should be withdrawn in light of the amendment to claim 15, it should be noted that claim 15 is presented in its original form without any amendment, and therefore the objection to claim 15 is maintained. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
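As flagged above, the attention-level selection attributed to Momeyer can be sketched in the same style: sensor-derived second data yields an attention level, and the amount of non-safety detail scales with it. The sensor inputs, weights and thresholds below are assumptions made for illustration, not Momeyer's actual computation.

```python
def attention_level(motion: float, glance_rate: float) -> float:
    """Estimate a 0..1 attention level from second data about the user.

    Momeyer derives an attention level from motion and other sensor data; this
    particular combination (less motion and fewer quick glances imply more
    attention) is only a placeholder.
    """
    m = min(max(motion, 0.0), 1.0)
    g = min(max(glance_rate, 0.0), 1.0)
    return 1.0 - 0.6 * m - 0.4 * g

def non_safety_detail(attention: float) -> str:
    """More additional (non-safety) detail is shown when attention is high."""
    if attention >= 0.7:
        return "full_detail"      # cf. Momeyer fig. 2B: additional information shown
    if attention >= 0.4:
        return "reduced_detail"
    return "minimal_detail"       # cf. Momeyer fig. 2A: low attention, less content
```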
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JWALANT B AMIN whose telephone number is (571)272-2455. The examiner can normally be reached Monday-Friday 10am - 630pm CST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JWALANT AMIN/ Primary Examiner, Art Unit 2612

Prosecution Timeline

Mar 15, 2024
Application Filed
Aug 30, 2025
Non-Final Rejection — §101, §103
Dec 04, 2025
Response Filed
Feb 23, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597091
COMPUTER-IMPLEMENTED METHOD, APPARATUS, SYSTEM AND COMPUTER PROGRAM FOR CONTROLLING A SIGHTEDNESS IMPAIRMENT OF A SUBJECT
2y 5m to grant • Granted Apr 07, 2026
Patent 12592020
TRACKING SYSTEM, TRACKING METHOD, AND SELF-TRACKING TRACKER
2y 5m to grant • Granted Mar 31, 2026
Patent 12585324
PROCESSOR, IMAGE PROCESSING DEVICE, GLASSES-TYPE INFORMATION DISPLAY DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
2y 5m to grant • Granted Mar 24, 2026
Patent 12585130
LUMINANCE-AWARE UNINTRUSIVE RECTIFICATION OF DEPTH PERCEPTION IN EXTENDED REALITY FOR REDUCING EYE STRAIN
2y 5m to grant • Granted Mar 24, 2026
Patent 12579571
METHOD FOR IMPROVING AESTHETIC APPEARANCE OF RETAILER GRAPHICAL USER INTERFACE
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
94%
With Interview (+15.3%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 631 resolved cases by this examiner. Grant probability derived from career allow rate.
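If the with-interview projection is simply the career allow rate shifted by the observed interview lift, the arithmetic reproduces the figure shown; that additive derivation is an assumption about how the dashboard computes it, not something the page states.

```python
# Assumed derivation of the "With Interview" projection: career allow rate
# plus the examiner's observed interview lift, in percentage points.
career_allow_rate = 79.0    # % grant probability shown above
interview_lift = 15.3       # percentage points, resolved cases with interview

with_interview = career_allow_rate + interview_lift
print(round(with_interview))  # 94 -> matches the 94% shown, if the lift is additive
```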
