DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the amendment filed on 11/18/2025 for Application Number 18/089,243.
Claims 1-10 are currently pending and have been examined. Claims 1, 5, 6 and 10 have been amended.
This action is made FINAL.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4, 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (EP 3495189 A1) in view of Jeong et al. (US 20210197669 A1).
Regarding claim 1, Kim teaches a system for controlling a vehicle display, the system comprising:
one or more transparent displays: (Kim: Abstract: “A vehicle control device (800) for controlling a vehicle (100,200) including first and second display units disposed at different positions therein, can include a communication unit (430,810) configured to communicate with the first and second display units; and a controller (170,820) configured to in response to an occurrence of a preset condition (S910), make a selection of at least one of the first display unit (1010,1020,251) and the second display unit (1010,1020,251), and display a first execution screen (1030,1040,1050,1110,1120,1210,1220,1310,1320,1420,1810,1910) of an application on the first display unit (1010,1020,251) or a second execution screen (1030,1040,1050,1110,1120,1210,1220,1310,1320,1420,1810,1910) of the application on the second display unit (1010,1020,251) according to the selection, or change the first execution screen (1030,1040,1050,1110,1120,1210,1220,1310,1320,1420,1810,1910) displayed on the first display unit (1010,1020,251) or the second execution screen (1030,1040,1050,1110,1120,1210,1220,1310,1320,1420,1810,1910) displayed on the second display unit (1010,1020,251) according to the selection.”; Paragraph 0100: “The display module 251 may include a transparent display. The transparent display may be attached to the windshield or the window. The transparent display may have a predetermined degree of transparency and output a predetermined screen thereon. The transparent display may include at least one of a thin film electroluminescent (TFEL), a transparent OLED, a transparent LCD, a transmissive transparent display and a transparent LED display. The transparent display may have adjustable transparency.”)
… a memory for storing instructions, and at least one processor connected to the memory, wherein the at least one processor, upon execution of the instructions, is configured to: (Kim; Paragraph 0150 – 0151: “According to an embodiment, the object detecting apparatus 300 may include a plurality of processors 370 or may not include any processor 370. For example, each of the camera 310, the radar 320, the LiDAR 330, the ultrasonic sensor 340 and the infrared sensor 350 may include the processor in an individual manner. When the processor 370 is not included in the object detecting apparatus 300, the object detecting apparatus 300 may operate according to the control of a processor of an apparatus within the vehicle 100 or the controller 170.”; Paragraph 0233: “The memory 140 is electrically connected to the controller 170. The memory 140 may store basic data for units, control data for controlling operations of units and input/output data. The memory 140 may be a variety of storage devices, such as ROM, RAM, EPROM, a flash drive, a hard drive and the like in a hardware configuration. The memory 140 may store various data for overall operations of the vehicle 100, such as programs for processing or controlling the controller 170.”)
acquire traveling environmental information of a vehicle; (Kim: Paragraph 0065: “The driving environment information may be generated based on object information provided from an object detecting apparatus 300.”)
… generate control information associated with a displaying position of the additional information within the one or more transparent displays with respect to the external scene according to the traveling environmental information; and
display the additional information on a region within the transparent display according to the control information. (Kim: Paragraph 0100: “The display module 251 may include a transparent display. The transparent display may be attached to the windshield or the window.”; Paragraphs 0285 – 0286: “Referring to FIG. 11, a second execution screen 1120 of a navigation application may be output to the second display unit 1020. Here, a touch input 1100 can be applied to the second display unit 1020. As a result, a first execution screen 1110 of a navigation application may be output to the first display unit 1010. The second execution screen 1120 may be continuously output to the second display unit 1020.”,
Supplemental Note: navigation information is interpreted as control information. Please see Figure A below)
Figure A - (Kim: Fig. 11)
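Supplemental Note (illustration only): the following minimal Python sketch shows one way a controller could map traveling environmental information to a displaying position on a transparent display, consistent with the claim mapping above. The pinhole-projection model and all names and values are hypothetical assumptions and are not drawn from Kim or Jeong.

from dataclasses import dataclass

@dataclass
class EnvironmentInfo:
    turn_x_m: float  # lateral offset of the upcoming turn, meters (vehicle frame)
    turn_y_m: float  # height of the guidance anchor above the road, meters
    turn_z_m: float  # longitudinal distance to the turn, meters

def generate_control_info(env: EnvironmentInfo,
                          focal_px: float = 1000.0,
                          cx: float = 640.0,
                          cy: float = 360.0):
    """Project a 3D guidance anchor to a (u, v) pixel position on the display."""
    u = cx + focal_px * env.turn_x_m / env.turn_z_m
    v = cy - focal_px * env.turn_y_m / env.turn_z_m
    return int(u), int(v)

# Example: a left turn 40 m ahead, 3 m to the left, anchored 1 m above the road.
print(generate_control_info(EnvironmentInfo(-3.0, 1.0, 40.0)))  # -> (565, 335)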
In sum, Kim teaches a system for controlling a vehicle display, the system comprising: one or more transparent displays; a memory for storing instructions, and at least one processor connected to the memory, wherein the at least one processor, upon execution of the instructions, is configured to: acquire traveling environmental information of a vehicle; generate control information associated with a displaying position of the additional information within the one or more transparent displays with respect to the external scene according to the traveling environmental information; and display the additional information on a region within the transparent display according to the control information. Kim, however, does not teach an augmented reality device, or controlling the augmented reality device to present augmented reality images including additional information over the one or more transparent displays combined with an external scene viewed through the one or more transparent displays, wherein the images displayed by the augmented reality device on the one or more transparent displays are configured based on occupant information of one or more occupants in the vehicle, whereas Jeong does.
Jeong teaches an augmented reality device; (Jeong: Abstract: “A vehicular three-dimensional head-up display includes a display device functioning as a light source; and a combiner for simultaneously reflecting light from the light source toward a driver's seat and transmitting light from the outside of a vehicle, and may include an optical configuration in which an image created by the light from the light source is displayed as a virtual image of a three-dimensional perspective laid to correspond to the ground in front of the vehicle.”)
… control the augmented reality device to present augmented reality images including additional information over the one or more transparent displays combined with an external scene viewed through the one or more transparent displays: (Jeong: Paragraph 0066 – 0067: “Referring to FIG. 3, it is important and effective to display the aforementioned information, for example, lane information 31 and information 32 on the distance from a vehicle in front, as a virtual image on an actual road surface at the point of view of the driver. The 3D head-up display according to an example embodiment may represent a virtual screen as a 3D perspective laid to correspond to the ground and thereby implement information desired to transfer to the user as augmented reality on the road surface actually gazed by the user while driving without a need to shift the focus of eyes from the point of view of the user while driving to another location in various driving environments.”,
Supplemental Note: as shown in Figure B, the augmented reality images comprise augmented reality information and additional information shown through the head-up display)
Figure B - (Jeong: Fig. 3)
wherein the images displayed by the augmented reality device on the one or more transparent displays are configured based on occupant information of one or more occupants in the vehicle (Jeong: Paragraph 0111: “Referring to FIG. 10, if the location of the virtual image 24 matches a background 25′ corresponding to an actual location (natural view) of a driver's gaze as illustrated in (A) of FIG. 10, a virtual image with normal sense of distance may be generated and a comfortable FOV may be acquired.”; Paragraph 0115: “Therefore, a corresponding error may be corrected to maintain a difference in distance between the virtual image 24 and the background 25′ within the error tolerance range. Referring to FIG. 13, the 3D head-up display 400 according to an example embodiment may include a processor 1310 configured to correct a difference in distance between an actual location corresponding to a driver's gaze and the virtual image 24 based on surrounding information.”; Paragraph 0140: “In particular, the 3D head-up display according to an example embodiment may naturally acquire information without a need to adjust the focus of eyes while driving by providing an image on the ground in front of the driver's main gaze while driving. The 3D head-up display according to an example embodiment may implement an image at the exactly same location as a driving FOV and thereby acquire a comfortable FOV without a difference (i.e., a vergence accommodation conflict) between an accommodation and a vergence that causes dizziness and motion sickness in VR or AR and may implement the AR optimized for the driver in the vehicle”,
Supplemental Note: the comfortable FOV is based on the actual location of the driver’s gaze, and the display possesses the ability to adjust the augmented image to correct any errors which may cause dizziness or motion sickness for the driver)
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Kim with the teachings of Jeong with a reasonable expectation of success. Both teach the ability to gather the vehicle’s environment and display pertinent information on a transparent display. Jeong differs in that it applies augmented imagery which correlates with the environment shown in front of the windshield, as shown in Fig. 3, whereas Kim teaches in Fig. 11 only showing additional information, such as making a left turn. One with knowledge in the art would find it obvious to try to implement the augmentation method of Jeong with the system of Kim. This combination would improve how accurately the additional information is shown to the user; for example, in Fig. 11 of Kim the left turn sign is shown, but if there are multiple left turn entrances it can cause confusion for the driver. Utilizing the system of Jeong, by contrast, would allow the ability to show the left turn movements following the roadway geometry into the correct left turn entrance. This is shown in the perspective as seen by the driver, and displaying it on a 3D head-up display allows for a more accurate navigational process (Jeong: Paragraph 0006). Furthermore, Jeong teaches the ability to display the augmented images on the 3D head-up display in a comfortable FOV for the driver to accurately view the images without any dizziness or motion sickness (Jeong: Paragraph 0140). One of ordinary skill in the art would find it obvious to try to implement this function of Jeong with the transparent display of Kim so as to mitigate dizziness or motion sickness when the driver views the images. This would ensure the driver is able to clearly see the information on the transparent display without straining themselves, thus further increasing the safety of the vehicle, which may otherwise be compromised by the driver having motion sickness, for example.
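Supplemental Note (illustration only): a minimal Python sketch, under hypothetical names and a hypothetical tolerance value, of the kind of tolerance-based correction described in Jeong (Paragraph 0115), where the virtual image is kept within an error tolerance of the background location corresponding to the driver's gaze.

def correct_virtual_image(virtual_pos_m: float,
                          gaze_target_m: float,
                          tolerance_m: float = 0.5) -> float:
    """Return a corrected virtual-image distance within tolerance of the gaze target."""
    error = virtual_pos_m - gaze_target_m
    if abs(error) <= tolerance_m:
        return virtual_pos_m  # already within the comfortable range; no correction
    # Pull the virtual image back toward the gaze target, stopping at the edge
    # of the tolerance band to avoid over-correcting.
    return gaze_target_m + tolerance_m * (1 if error > 0 else -1)

print(correct_virtual_image(12.0, 10.0))  # -> 10.5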
Regarding claim 2, Kim, as modified, teaches wherein the at least one processor is further configured to: (Kim: Paragraph 0237: “At least one processor and the controller 170 included in the vehicle 100 may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro controllers, microprocessors, and electric units performing other functions.”)
acquire the traveling environmental information associated with a state of a road surface in front of the vehicle. (Kim: Paragraph 0109 – 0113: “In addition, the user interface apparatus 200 may be called as a display apparatus for vehicle. The user interface apparatus 200 may operate according to the control of the controller 170. The object detecting apparatus 300 is an apparatus for detecting an object located at outside of the vehicle 100. The object may be a variety of objects associated with driving (operation) of the vehicle 100. Referring to FIGS. 5 and 6, an object O may include a traffic lane OB10, another vehicle OB11, a pedestrian OB12, a two-wheeled vehicle OB13, traffic signals OB14 and OB15, light, a road, a structure, a speed hump, a geographical feature, an animal and the like.”; Paragraph 0118: “The traffic signals may include a traffic light OB15, a traffic sign OB14 and a pattern or text drawn on a road surface.”; Paragraph 0127: “For example, the camera 310 may be disposed adjacent to a front windshield within the vehicle to acquire a front image of the vehicle.”)
Regarding claim 4, Kim, as modified, teaches wherein the at least one processor is further configured to: (Kim: Paragraph 0237: “At least one processor and the controller 170 included in the vehicle 100 may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro controllers, microprocessors, and electric units performing other functions.”)
receive traveling information of the vehicle; and (Kim: Paragraphs 0021 – 0022: “Preferably, the controller is further configured to in response to receiving information that the vehicle has entered a region within a predetermined distance from a destination while the second execution screen is displayed on the second display unit, display the first execution screen on the first display unit. Preferably, the controller is further configured to in response to receiving information that the vehicle is traveling onto a specific road while the second execution screen is displayed on the second display unit, display the first execution screen on the first display unit.”)
compute the adjustment value, based on the traveling information and the traveling environmental information. (Kim: Paragraph 0015: “Preferably, the application is a navigation application for providing road guidance information to a destination, and wherein the first and second execution screens include map images provided from the navigation application.”; Paragraph 0100: “The display module 251 may include a transparent display. The transparent display may be attached to the windshield or the window.”; Paragraphs 0285 – 0286: “Referring to FIG. 11, a second execution screen 1120 of a navigation application may be output to the second display unit 1020. Here, a touch input 1100 can be applied to the second display unit 1020. As a result, a first execution screen 1110 of a navigation application may be output to the first display unit 1010. The second execution screen 1120 may be continuously output to the second display unit 1020.”,
Supplemental Note: based on where the vehicle is navigating to, control information regarding turn by turn information is displayed to the driver. Please see Figure A above)
Regarding claim 7, Kim teaches a method of controlling a vehicle display, the method comprising: (Kim: Paragraph 0019 – 0020: “Preferably, the controller is further configured to control the first and second display units in different manners, according to whether a preset user input is applied by a driver or a passenger. Preferably, the controller is further configured to in response to the preset unit input being applied by the driver, change both of the first and second execution screens respectively displayed on the first and second display units, and in response to the preset unit input being applied by the passenger, change the second execution screen displayed on the second display unit and not to change the first execution screen displayed on the first display unit.”)
collecting traveling-associated information of a vehicle; (Kim: Paragraphs 0021 – 0022: “Preferably, the controller is further configured to in response to receiving information that the vehicle has entered a region within a predetermined distance from a destination while the second execution screen is displayed on the second display unit, display the first execution screen on the first display unit. Preferably, the controller is further configured to in response to receiving information that the vehicle is traveling onto a specific road while the second execution screen is displayed on the second display unit, display the first execution screen on the first display unit.”)
predicting traveling behavior of the vehicle; (Kim: Paragraph 0015: “Preferably, the application is a navigation application for providing road guidance information to a destination, and wherein the first and second execution screens include map images provided from the navigation application.”,
Supplemental Note: navigating to a location is interpreted as predicting traveling behavior of the vehicle)
… generating an adjustment value associated with a displaying position of the additional information within the transparent display with respect to the external scene according to the traveling-associated information; and
displaying the additional information on a region within the transparent display according to the adjustment value. (Kim: Paragraph 0375 – 0376: “Referring to FIG. 22, TBT information 2210 may be output to the cluster 1010, and road guidance information 2220 may be output to the CID 1020. If an accident occurs on a driving path, a map image 2220 output to the CID 1020 may display a current position of the vehicle 100, an accident occurrence spot, etc.”; Paragraphs 0380 – 0382: “Alternatively, the summary information 2230 related to contents of a corresponding accident may be output as an upper layer of the TBT information 2210. [0381] Referring to FIG. 23, similar to FIG. 22, TBT information 2210 may be output to the cluster 1010, and road guidance information 2220 may be output to the CID 1020. [0382] If an accident occurs on a driving path, a map image 2220 output to the CID 1020 may display a current position of the vehicle 100, an accident occurrence spot, etc.”,
Supplemental Note: Information about an accident can be shown where the turn by turn (TBT) information is updated on the display as shown in Figure C below)
Figure C - (Kim: Fig. 22)
In sum, Kim teaches a method of controlling a vehicle display, the method comprising: collecting traveling-associated information of a vehicle; predicting traveling behavior of the vehicle; generating an adjustment value associated with a displaying position of the additional information within the transparent display with respect to the external scene according to the traveling-associated information; and displaying the additional information on a region within the transparent display according to the adjustment value. Kim, however, does not teach controlling an augmented reality device to present augmented reality images including additional information over a transparent display of the vehicle combined with an external scene viewed through the transparent display, whereas Jeong does.
Jeong teaches controlling an augmented reality device (Jeong: Abstract: “A vehicular three-dimensional head-up display includes a display device functioning as a light source; and a combiner for simultaneously reflecting light from the light source toward a driver's seat and transmitting light from the outside of a vehicle, and may include an optical configuration in which an image created by the light from the light source is displayed as a virtual image of a three-dimensional perspective laid to correspond to the ground in front of the vehicle.”)
to present augmented reality images including additional information over a transparent display of the vehicle combined with an external scene viewed through the transparent display; (Jeong: Paragraph 0066 – 0067: “Referring to FIG. 3, it is important and effective to display the aforementioned information, for example, lane information 31 and information 32 on the distance from a vehicle in front, as a virtual image on an actual road surface at the point of view of the driver. The 3D head-up display according to an example embodiment may represent a virtual screen as a 3D perspective laid to correspond to the ground and thereby implement information desired to transfer to the user as augmented reality on the road surface actually gazed by the user while driving without a need to shift the focus of eyes from the point of view of the user while driving to another location in various driving environments.”,
Supplemental Note: as shown in Figure B above, the augmented reality images comprise of showing augmented reality information and additional information through the heads-up display)
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Kim with the teachings of Jeong with a reasonable expectation of success. Please refer to the rejection of claim 1, as both claims recite the same functional language and are therefore rejected under the same rationale.
Regarding claim 8, Kim, as modified, teaches wherein the collecting of traveling-associated information comprises:
collecting the traveling-associated information associated with a state of a road surface in front of the vehicle. (Kim: Paragraph 0109 – 0113: “In addition, the user interface apparatus 200 may be called as a display apparatus for vehicle. The user interface apparatus 200 may operate according to the control of the controller 170. The object detecting apparatus 300 is an apparatus for detecting an object located at outside of the vehicle 100. The object may be a variety of objects associated with driving (operation) of the vehicle 100. Referring to FIGS. 5 and 6, an object O may include a traffic lane OB10, another vehicle OB11, a pedestrian OB12, a two-wheeled vehicle OB13, traffic signals OB14 and OB15, light, a road, a structure, a speed hump, a geographical feature, an animal and the like.”; Paragraph 0118: “The traffic signals may include a traffic light OB15, a traffic sign OB14 and a pattern or text drawn on a road surface.”; Paragraph 0127: “For example, the camera 310 may be disposed adjacent to a front windshield within the vehicle to acquire a front image of the vehicle.”)
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (EP 3495189 A1) in view of Jeong et al. (US 20210197669 A1), further in view of Suk et al. (KR101835409B1).
Regarding claim 3, Kim, as modified, teaches wherein at least one processor is further configured to: (Kim: Paragraph 0237: “At least one processor and the controller 170 included in the vehicle 100 may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro controllers, microprocessors, and electric units performing other functions.”)
In sum, Kim teaches a vehicle with a processor. Kim, however, does not teach predicting a vehicle traveling behavior according to the traveling environmental information and computing an adjustment value of the displaying position of current information of the additional information being presented through the display, whereas Suk does.
Suk teaches predict a vehicle traveling behavior according to the traveling environmental information; and compute an adjustment value of the displaying position of current information of the additional information being presented through the display (Suk: Paragraph 0014: “According to another aspect of the present invention, there is provided a method for controlling a head-up display, the method comprising: displaying a position of virtual object information extracted from a sensed image measured through a front measurement device for acquiring real- Matching with the position of the real world information and outputting; Receiving a motion value from the motion sensing unit and calculating a motion prediction value; And correcting a position of the virtual object information based on the motion prediction value, wherein the step of correcting the position of the virtual object information comprises: outputting the virtual object information when the motion predicted value exceeds the set value, Wherein the step of restricting the output of the virtual object information further comprises the step of causing the control unit to output a warning via the warning unit.”)
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Kim with the teachings of Suk with a reasonable expectation of success. Both Kim and Suk teach the ability to display environmental data on a head-up display to alert the driver of their environment. Suk furthers this ability by computing a motion prediction value for an object in motion near the host vehicle and adjusting the position of the virtual object information displayed. One with knowledge in the art would find it obvious to try to implement this function of Suk with the vehicle system of Kim. This improves the safety of the passengers in the vehicle by providing the user with data about abrupt vehicle maneuvers (Suk: Paragraph 0008) and adjusting the position of the virtual object information on the display so it is not obscuring the adjacent vehicle (Suk: Paragraph 0019). For example, a navigational direction on the head-up display could be blocking an adjacent vehicle making an abrupt lane change into the host vehicle lane. The ability to adjust the display by monitoring the adjacent vehicle so the navigational information does not obscure the view mitigates a potential opportunity for a collision, as it keeps the adjacent vehicle in view to the driver at all times.
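Supplemental Note (illustration only): a minimal Python sketch, hypothetical in all names and thresholds, of the control flow described in Suk (Paragraph 0014): a motion prediction value is calculated from a sensed motion value, the position of the virtual object information is corrected when the prediction is within a set value, and output is restricted with a warning when it exceeds the set value.

def update_virtual_object(position_px: float,
                          motion_value: float,
                          prediction_gain: float = 0.8,
                          set_value: float = 5.0):
    """Return (new_position, visible, warn) from a sensed motion value."""
    prediction = prediction_gain * motion_value  # simple motion prediction value
    if abs(prediction) > set_value:
        # Predicted motion exceeds the set value: restrict output and warn.
        return position_px, False, True
    # Otherwise shift the virtual object to compensate for the predicted motion.
    return position_px - prediction, True, False

print(update_virtual_object(320.0, 2.5))  # small motion: corrected and visible
print(update_virtual_object(320.0, 9.0))  # large motion: output restricted, warning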
Claims 5, 6 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (EP 3495189 A1) in view of Jeong et al. (US 20210197669 A1), further in view of Ryuichi et al. (JP 2021-170225 A).
Regarding claim 5, Kim, as modified, teaches wherein the at least one processor is further configured to:
receive occupant information; and (Kim: Paragraph 0092: “The internal camera 220 may acquire an internal image of the vehicle. The processor 270 may detect a user’s state based on the internal image of the vehicle. The processor 270 may acquire information related to the user’s gaze from the internal image of the vehicle. The processor 270 may detect a user gesture from the internal image of the vehicle.”)
In sum, Kim teaches wherein the at least one processor is further configured to: receive occupant information of an occupant in the vehicle. Kim, however, does not teach computing the adjustment value, based on the occupant information, whereas Ryuichi does.
Ryuichi teaches compute the adjustment value, based on the occupant information (Ryuichi: Paragraph 0026: “The line-of-sight detection unit 10 functions as a visual object detection means for detecting a visual object (gaze target) visually recognized by an occupant (for example, a driver) in the above-mentioned real space or virtual space. More specifically, in the case of the above mentioned real space, the line-of-sight detection unit 10 extracts the direction of the line of sight from the face image of the occupant (for example, the driver) acquired by using the above-mentioned in-vehicle camera VR 2. A known line-of-sight tracking technique can be applied to the above-mentioned in-vehicle camera VR 2. line-of-sight detection of such an occupant”; Paragraphs 0031: “As the transaction display control means, the transaction execution / display control unit 70 causes the presentation device DS as the display means to display the information of the visual object (gaze event) determined to be of high interest, and also displays the information of the visual object (gaze event) determined to be of high interest. It has a function to execute transaction processing related to the visual object. As such "transaction processing", for example, various settings can be made according to the type of gaze event (visually-viewed object of interest). As an example, if the gaze event is a restaurant, transaction processing includes reservation processing. In addition, if the gaze event is a store of goods, the purchase process of the goods can be mentioned. As described above, the control device CTL of the present embodiment has a function of changing the settlement process related to the gaze event according to the type of the gaze event.”,
Supplemental Note: after analyzing the user’s gaze at an object of interest, the display can be controlled to present information of the visual object)
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Kim with the teachings of Ryuichi with a reasonable expectation of success. Both Kim and Ryuichi teach the ability of obtaining a user’s gaze; for Kim this is done by an internal vehicle camera, and Ryuichi uses a line-of-sight detection unit, also in the form of an in-vehicle camera, thus one with knowledge in the art would find these components to be a simple substitution. Ryuichi furthers this invention by being able to recognize what the user is gazing at through the use of its sensor and then adjusting information on a display pertaining to what the user is gazing at. This would be obvious to try by one with knowledge in the art for the vehicle taught by Kim. Kim teaches the ability of the displays inside the vehicle to adjust depending on environmental factors, driving factors, external object detection and more. The information on the screen is adjusted if any of these preset conditions are met to give the user appropriate information regarding these factors. For example, Kim states: (Kim: Paragraph 0301: “As another example, if the vehicle approaches a point where a driving direction is to be changed, a cross road, a region where there exists an obstacle, etc., a map image showing the vehicle in an enlarged manner may be output to the cluster 1010.”), thus as the system detects an obstacle, the system is able to adjust the display to enlarge the map showing the vehicle so the obstacle can be safely avoided. This function ties into the ability to recognize the user’s gaze focused on an external object, in which case the display can adjust to show more information about it. For example, if the vehicle detects the user’s gaze at an upcoming traffic cone in a work zone, the vehicle can enlarge the map for that section or bring up a camera view of how close the vehicle is to the obstacle. This function improves the safety of the occupants of the vehicle, as the driver is provided with additional information pertaining to the obstacle and is thus able to perform more accurate controls.
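Supplemental Note (illustration only): a minimal Python sketch of gaze-based selection of an external object of interest, in the manner suggested by Ryuichi's line-of-sight detection; the bearing representation, the angular threshold, and the object names are hypothetical assumptions.

def find_gazed_object(gaze_angle_deg: float,
                      object_bearings: dict,
                      max_offset_deg: float = 5.0):
    """Return the object whose bearing best matches the gaze direction, or None."""
    best_name, best_offset = None, max_offset_deg
    for name, bearing_deg in object_bearings.items():
        offset = abs(gaze_angle_deg - bearing_deg)
        if offset <= best_offset:
            best_name, best_offset = name, offset
    return best_name

# Example scene: bearings (degrees) of candidate objects relative to straight ahead.
scene = {"restaurant": 12.0, "traffic cone": -8.0, "store": 30.0}
print(find_gazed_object(-6.5, scene))  # -> "traffic cone"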
Regarding claim 6, Kim, as modified, teaches wherein: the occupant information comprises one or more of a position of the one or more occupants, a sitting height of the one or more occupants, a position at which the one or more occupants gaze (Kim: Paragraph 0092: “The internal camera 220 may acquire an internal image of the vehicle. The processor 270 may detect a user’s state based on the internal image of the vehicle. The processor 270 may acquire information related to the user’s gaze from the internal image of the vehicle. The processor 270 may detect a user gesture from the internal image of the vehicle.”)
In sum, Kim teaches the occupant information comprises one or more of a position of the occupant, a sitting height of the occupant, a position at which the occupant gazes. However, Kim does not teach acquiring information of where the occupant gazes toward an external object of interest from the external scene, and a gaze direction in which the occupant gazes toward the external object of interest, whereas Ryuichi does.
Ryuichi teaches toward an external object of interest from the external scene, and a gaze direction in which the one or more occupants gaze toward the external object of interest. (Ryuichi: Paragraph 0026: “The line-of-sight detection unit 10 functions as a visual object detection means for detecting a visual object (gaze target) visually recognized by an occupant (for example, a driver) in the above-mentioned real space or virtual space. More specifically, in the case of the above mentioned real space, the line-of-sight detection unit 10 extracts the direction of the line of sight from the face image of the occupant (for example, the driver) acquired by using the above-mentioned in-vehicle camera VR 2. A known line-of-sight tracking technique can be applied to the above-mentioned in-vehicle camera VR 2. line-of-sight detection of such an occupant”)
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Kim with the teachings of Ryuichi with a reasonable expectation of success. Please refer to the rejection of claim 5, as both recite the same function and are therefore rejected under the same rationale.
Regarding claim 10, Kim, as modified, teaches wherein the collecting of traveling-associated information comprises collecting occupant information of the one or more occupants in the vehicle comprising one or more of a position of the one or more occupants, a sitting height of the one or more occupants, a position at which the one or more occupants gaze (Kim: Paragraph 0092: “The internal camera 220 may acquire an internal image of the vehicle. The processor 270 may detect a user’s state based on the internal image of the vehicle. The processor 270 may acquire information related to the user’s gaze from the internal image of the vehicle. The processor 270 may detect a user gesture from the internal image of the vehicle.”)
In sum, Kim teaches wherein the collecting of traveling-associated information comprises collecting occupant information of an occupant in the vehicle comprising one or more of a position of the occupant, a sitting height of the occupant, a position at which the occupant gazes. Kim, however, does not teach acquiring information of where the occupant gazes toward an external object of interest of the external scene, and a gaze direction in which the occupant gazes toward the external object of interest, and wherein the generating of the adjustment value further comprises: calculating the adjustment value based on the occupant information, whereas Ryuichi does.
Ryuichi teaches toward an external object of interest of the external scene, and a gaze direction in which the one or more occupants gaze toward the external object of interest, and
wherein the generating of the adjustment value further comprises: calculating the adjustment value based on the occupant information (Ryuichi: Paragraph 0026: “The line-of-sight detection unit 10 functions as a visual object detection means for detecting a visual object (gaze target) visually recognized by an occupant (for example, a driver) in the above-mentioned real space or virtual space. More specifically, in the case of the above mentioned real space, the line-of-sight detection unit 10 extracts the direction of the line of sight from the face image of the occupant (for example, the driver) acquired by using the above-mentioned in-vehicle camera VR 2. A known line-of-sight tracking technique can be applied to the above-mentioned in-vehicle camera VR 2. line-of-sight detection of such an occupant”; Paragraphs 0031: “As the transaction display control means, the transaction execution / display control unit 70 causes the presentation device DS as the display means to display the information of the visual object (gaze event) determined to be of high interest, and also displays the information of the visual object (gaze event) determined to be of high interest. It has a function to execute transaction processing related to the visual object. As such "transaction processing", for example, various settings can be made according to the type of gaze event (visually-viewed object of interest). As an example, if the gaze event is a restaurant, transaction processing includes reservation processing. In addition, if the gaze event is a store of goods, the purchase process of the goods can be mentioned. As described above, the control device CTL of the present embodiment has a function of changing the settlement process related to the gaze event according to the type of the gaze event.”,
Supplemental Note: after analyzing the user’s gaze at an object of interest, the display can be controlled to present information of the visual object)
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Kim with the teachings of Ryuichi with a reasonable expectation of success. Please refer to the rejection of claim 5, as both recite the same function and are therefore rejected under the same rationale.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (EP 3495189 A1) in view of Jeong et al. (US 20210197669 A1), further in view of Jun et al. (KR102042364B1).
Regarding claim 9, Kim, as modified, teaches wherein the collecting of traveling-associated information comprises:
collecting speed information and deceleration information (Kim: Paragraph 0230: “The sensing unit 120 may further include an accelerator sensor, a pressure sensor, an engine speed sensor, an air flow sensor (AFS), an air temperature sensor (ATS), a water temperature sensor (WTS), a throttle position sensor (TPS), a TDC sensor, a crank angle sensor (CAS), and the like.”,
Supplemental Note: the sensing unit is able to detect acceleration and speed; it can also use those sensors to display information about the speed limit for a certain route on the dashboard)
In sum, Kim teaches wherein the collecting of traveling-associated information comprises: collecting speed information and deceleration information. Kim, however, does not teach collecting speed information and deceleration information when passing a bump, and wherein the generating of the adjustment value further comprises: calculating the adjustment value based on the speed information and the deceleration information, whereas Jun does.
Jun teaches when passing a bump, and wherein the generating of the adjustment value further comprises calculating the adjustment value based on the speed information and the deceleration information. (Jun: Paragraph 0012: “when controlling the speed of the vehicle with respect to the speed bump, it is possible to drive the vehicle according to the safety, comfort, or preference of each vehicle and driver by setting the difference between the acceleration and deceleration and final arrival speed of each mode according to the characteristics of the vehicle or driver.”; Paragraph 0014: “In order to achieve the above object, the speed bump detection vehicle control system according to the present invention includes a speed bump detection unit for detecting a speed bump in front of the vehicle while the speed bump is detected by the speed bump detection unit. In this case, the controller calculates a distance between the vehicle and the speed bump based on the traveling speed of the vehicle, calculates a time point when the vehicle reaches the speed bump, and controls the speed of the vehicle, and receives a signal from the controller. And a display unit for displaying a driving speed of the vehicle, whether the speed bump is detected, and a distance between the vehicle and the speed bump.”)
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Kim with the teachings of Jun with a reasonable expectation of success. Both Kim and Jun teach a vehicle having sensors which detect the vehicle’s speed and acceleration/deceleration; thus one with knowledge in the art would find these to be a simple substitution. Jun uses this data to identify when the vehicle approaches and passes over a speed bump. This would be obvious to try to combine with the vehicle of Kim, as Kim already states the ability to identify obstacles on the roadway and provide additional information to the driver pertaining to the obstacle (Kim: Paragraph 0301: “As another example, if the vehicle approaches a point where a driving direction is to be changed, a cross road, a region where there exists an obstacle, etc., a map image showing the vehicle in an enlarged manner may be output to the cluster 1010.”). The ability to recognize a speed bump fits into the existing functionality of Kim’s system of detecting an obstacle; thus, after identifying a speed bump on a current travel path, the system can record the obstacle for the future and adjust the information on the display to show the driver the approaching speed bump when driving on that path again. This increases the awareness of the driver and allows the driver to more efficiently drive the vehicle based on the additional information; thus one with knowledge in the art would find it obvious to try.
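Supplemental Note (illustration only): a minimal Python sketch of the distance/time/speed computation described in Jun (Paragraph 0014) for an approaching speed bump; the target crossing speed and the constant-deceleration kinematic model are hypothetical assumptions.

def speed_bump_plan(distance_m: float,
                    speed_mps: float,
                    target_speed_mps: float = 4.0):
    """Return (time_to_bump_s, required_decel_mps2) for a detected bump ahead."""
    time_to_bump = distance_m / speed_mps  # time point when the vehicle reaches the bump
    # v^2 = v0^2 + 2*a*d  =>  a = (v^2 - v0^2) / (2*d)
    decel = (target_speed_mps ** 2 - speed_mps ** 2) / (2.0 * distance_m)
    return round(time_to_bump, 2), round(decel, 2)

print(speed_bump_plan(50.0, 15.0))  # 50 m ahead at 15 m/s -> (3.33, -2.09)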
Response to Arguments
Applicant’s arguments, see section Rejection under 35 U.S.C. 103 in the REMARKS, filed 11/18/2025, with respect to the 35 U.S.C. 103 prior art rejections of claims 1-10 have been considered but are not persuasive.
Applicant states that modifying Kim in view of Jeong, with regard to Jeong teaching displaying augmented reality images on the transparent display as being obvious to try by one of ordinary skill in the art, is an improper modification. Applicant states that such a modification would interfere with the driver’s view, and thus would be unsatisfactory for Kim’s intended purpose of displaying information on a main display. Examiner respectfully disagrees. Kim teaches the ability of a vehicle system to display information on a first and second execution screen, the first screen including turn-by-turn information and the second execution screen showing points of interest along the path to the destination (Kim: Paragraph 0017). This is further illustrated in Figure A above, in which the first screen 1010 and the second screen 1020 both display the stated features. Combining Jeong’s ability to display augmented reality images on the transparent display with Kim would be obvious to try by one of ordinary skill in the art. As seen in Figure B above, Jeong teaches showing additional lane and distance information on the 3D head-up display. This additional information improves the safety and convenience of the driver, as the driver will be able to view additional roadway information, and the combination would therefore be obvious to try by one of ordinary skill in the art.
Applicant further states that Kim in view of Jeong fails to teach the amended claim limitation of “wherein the images displayed by the augmented reality device on the one or more transparent displays are configured based on occupant information of one or more occupants in the vehicle”. Examiner respectfully disagrees. Jeong teaches the ability to adjust the location of the augmented reality image as displayed on the 3D head-up display so that it is in a comfortable FOV relative to the driver’s gaze. An error regarding the placement of the augmented reality image can also be corrected so the image remains within the comfortable FOV; thus, occupant information is used to configure the images on the transparent display. Dependent claims 2-6 remain rejected per their dependency on claim 1.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVAM SHARMA whose telephone number is (703)756-1726. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Bishop can be reached at 571-270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIVAM SHARMA/Examiner, Art Unit 3665
/Erin D Bishop/Supervisory Patent Examiner, Art Unit 3665