DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3 and 5-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite the steps of “select a target object among a plurality of objects present before the vehicle” and “set a target area on a windshield of the vehicle”, which are an evaluation step and a determination step, respectively, that may be performed in the human mind. This judicial exception is not integrated into a practical application because the steps of sensing driving information and surrounding information are data collection, and the step of projecting an indicator is presenting information on a display, both of which represent extra-solution activity in an attempt to link the steps to a particular technological environment, i.e. vehicles. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claims do not recite a specific mathematical formula or detailed algorithm; their core is to sense data, select a target, decide where to display an indicator, and display the indicator, which is equivalent to the steps of collecting, analyzing, and presenting data. Therefore, the steps fall within an abstract idea grouping.
As per claim 2, “position information” and “size information” amount to further data collection as extra-solution activity.
As per claim 3, the recited auxiliary display unit amounts to further presenting of the gathered data, and the recited internal sensor amounts to additional data gathering; both are additional extra-solution activity.
As per claim 5, projecting an arrow or point is further presenting data, which represents additional extra-solution activity, and the steps of determining that the target object is located out of a preset boundary and is approaching the vehicle are additional mental observational steps.
As per claim 6, projecting the indicator in a color is further presenting data, which represents additional extra-solution activity.
As per claim 7, displaying a warning mark image is further presenting data, which represents additional extra-solution activity.
As per claim 8, a non-transitory computer-readable medium defines nothing more than a generic medium for storing a program.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 6-7, 17, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Murai (U.S. 2019/0204598 A1).
Claim 1, Murai teaches:
A method of driver assistance for controlling a vehicle (Murai, Fig. 1, Paragraph [0020]), wherein the vehicle comprises a processor (Murai, Fig. 1: 10), at least one sensor (Murai, Fig. 1: 12), and a display unit (Murai, Fig. 1: 14), the method comprising:
sensing, by the at least one sensor, driving information of the vehicle (Murai, Paragraph [0023], Vehicle sensor 12 may include a sensor that detects a traveling state of the host vehicle, e.g. a vehicle speed or a steering amount.) and surrounding information about surroundings of the vehicle (Murai, Paragraph [0022], Vehicle sensor 12 may include a sensor that detects a situation outside the host vehicle, e.g. an on-board camera that captures an image of a region ahead of the host vehicle.);
collecting, by the processor, the sensed driving information and the sensed surrounding information to select a target object among a plurality of objects present before the vehicle (Murai, Paragraph [0031], The detection unit 18, which is one functional block of display control device 10, determines whether an emergent situation has occurred, wherein the emergent situation is based on information regarding a specific object and information of the host vehicle (see Murai, Paragraphs [0027-0030]). For example, when a specific object is detected, information indicated by traffic signs may also be collected, and the detection unit 18 determines whether the driver of the host vehicle is highly likely to violate, or is currently violating, the information indicated by traffic signs, e.g. right turn prohibition. Therefore, the presence of the specific object and the information indicated by traffic signs are examples of surrounding information, and whether the driver is likely to violate and/or is violating the information is an example of driving information.);
collecting, by the processor, target information related to the target object to set a target area on a windshield of the vehicle (Murai, Paragraph [0032], When an emergent situation has been caused by the specific object, the system determines that a warning against the presence of the specific object needs to be presented to the driver, wherein the specific object is functionally determined to be a target object based on the emergent situation.); and
projecting, by the display unit, an indicator for the target object in the target area of the windshield of the vehicle (Murai, Paragraph [0032], The controller 20 causes the HUD 14 to project a highlighting image onto the windshield of the host vehicle that is superimposed on the specific object. The location of the specific object within the window affects the location of the highlighting image, i.e. the target area (see Murai, Fig. 3A: 34a, Paragraph [0042] for example).).
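For illustration only, the claim 1 sequence as mapped above (sense driving and surrounding information, select a target object, set a target area, project an indicator) may be summarized in the following Python sketch. The sensor and HUD interfaces, the field-of-view value, and the nearest-approaching-object heuristic are hypothetical conveniences and form no part of the claims or of Murai.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    object_id: int
    bearing_deg: float   # horizontal angle relative to the vehicle's heading
    distance_m: float
    approaching: bool

def select_target(objects):
    """Hypothetical heuristic: pick the nearest approaching object as the target."""
    candidates = [o for o in objects if o.approaching]
    return min(candidates, key=lambda o: o.distance_m, default=None)

def set_target_area(target, fov_deg=40.0, screen_w_px=1280, screen_h_px=480):
    """Map the target's bearing to a horizontal coordinate on the windshield display."""
    x = int((target.bearing_deg / fov_deg + 0.5) * screen_w_px)
    return (x, screen_h_px // 2)

def assist_step(sensor, hud):
    objects = sensor.read_surroundings()   # surrounding information (assumed interface)
    speed = sensor.read_vehicle_speed()    # driving information (assumed interface)
    target = select_target(objects)
    if target is not None and speed > 0.0:
        hud.project_indicator(set_target_area(target))  # indicator in the target area
```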
Claim 2, Murai further teaches:
The method of claim 1, wherein the target information comprises:
position information about longitudinal and lateral positions of the target object (Murai, Paragraph [0028], The direction of travel of a specified object may be tracked by the detection unit 18. Thus, the position of the specified object with respect to the field of view (see Murai, Fig. 2, for example) as well as the position of the specified object with respect to the road on which the host vehicle is traveling are both determined by the detection unit 18. The positions are functionally equivalent to longitudinal and lateral positions of the specified object.); and
size information of the target object (Murai, Fig. 4, Paragraph [0048], In the example of Fig. 4, the system identifies a pedestrian 36 and vehicle 38 as specified objects, and generates respective highlighting images 40 and 42, wherein the highlighting images 40 and 42 represent size information of their respective specified objects. In Fig. 4, the highlighting image 42 is larger than the highlighting image 40, which represents that the vehicle 38 is larger than the pedestrian 36.).
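As a sketch only, the target information recited in claim 2 could be modeled by the following hypothetical Python structure; the field names and the distance-based scaling are assumptions, though the scaling echoes the observation above that the larger object in Murai's Fig. 4 yields a larger highlighting image.

```python
from dataclasses import dataclass

@dataclass
class TargetInfo:
    longitudinal_m: float   # position along the direction of travel
    lateral_m: float        # offset from the host vehicle's centerline
    width_m: float          # size information
    height_m: float

    def highlight_size_px(self, px_per_m_at_1m=50.0):
        """Closer and larger targets yield a larger highlighting image (cf. Murai Fig. 4)."""
        scale = px_per_m_at_1m / max(self.longitudinal_m, 1.0)
        return (self.width_m * scale, self.height_m * scale)
```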
Claim 6, Murai further teaches:
The method of claim 1, wherein the projecting of the indicator comprises projecting the indicator in a color determined based on a speed of the target object (Murai, Paragraphs [0043] and [0054], The highlighting images may be represented in different colors based on the kind of specified object. The highlighting images are generated in response to the detection unit 18 determining that the specified object is the source of an emergent situation (see Murai, Paragraph [0032]), wherein the specified object may be the source of the emergent situation based on the change of position of the specified object, i.e. the speed (see Murai, Paragraph [0028]). Therefore, the highlighting images, and their respective colors, are based on a speed of the specified object. It is noted that it appears the Applicant intends for the color to represent the speed of the target object; however, the claims do not inherently or explicitly define this aspect of the invention.).
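To illustrate the breadth noted above, any mapping in which the color is determined based on the target's speed would read on the claim; the thresholds and palette in this hypothetical Python sketch are arbitrary assumptions.

```python
def indicator_color(target_speed_mps):
    """Determine the indicator color from the target object's speed.

    The claim requires only that the color be 'determined based on' the speed;
    these cut-offs and colors are illustrative assumptions.
    """
    if target_speed_mps < 1.0:
        return "yellow"   # near-stationary target
    if target_speed_mps < 5.0:
        return "orange"   # moderate closing speed
    return "red"          # fast-approaching target
```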
Claim 7, Murai further teaches:
The method of claim 1, further comprising displaying a warning mark image within the target area (Murai, Fig. 2: 30b, Paragraphs [0037-0038], An example of a warning mark is a second highlighting image 30b which may be in text form.).
Claim 17, Murai teaches:
A method of driver assistance for a vehicle (Murai, Fig. 1, Paragraph [0020]), the method comprising:
sensing driving information of the vehicle (Murai, Paragraph [0023], Vehicle sensor 12 may include a sensor that detects a traveling state of the host vehicle, e.g. a vehicle speed or a steering amount.) and surrounding information about surroundings of the vehicle (Murai, Paragraph [0022], Vehicle sensor 12 may include a sensor that detects a situation outside the host vehicle, e.g. an on-board camera that captures an image of a region ahead of the host vehicle.);
selecting one or more target objects among a plurality of objects present before the vehicle based on the driving information and the surrounding information, and based on each of the one or more target objects being detected as a potential hazard for the vehicle (Murai, Paragraph [0031], The detection unit 18, which is one functional block of display control device 10, determines whether an emergent situation has occurred, wherein the emergent situation is based on information regarding a specific object and information of the host vehicle (see Murai, Paragraphs [0027-0030]). For example, when a specific object is detected, information indicated by traffic signs may also be collected, and the detection unit 18 determines whether the driver of the host vehicle is highly likely to violate, or is currently violating, the information indicated by traffic signs, e.g. right turn prohibition. Therefore, the presence of the specific object and the information indicated by traffic signs are examples of surrounding information, and whether the driver is likely to violate and/or is violating the information is an example of driving information. As an example, Fig. 4 indicates two objects, including a pedestrian 36 and a vehicle 38.);
for each of the one or more target objects, collecting target information related to the one or more target objects (Murai, Paragraphs [0031-0032]);
for each of the one or more target objects, setting a target area on a windshield of the vehicle (Murai, Paragraph [0032], When an emergent situation has been caused by the specific object, the system determines that a warning against the presence of the specific object needs to be presented to the driver, wherein the specific object is functionally determined to be a target object based on the emergent situation.); and
for each of the one or more target objects, projecting an indicator at the target area of the windshield of the vehicle (Murai, Paragraph [0032], The controller 20 causes the HUD 14 to project a highlighting image onto the windshield of the host vehicle that is superimposed on the specific object. The location of the specific object within the window affects the location of the highlighting image, i.e. the target area (see Murai, Fig. 3A: 34a, Paragraph [0042] for example).).
Claim 19, Murai further teaches:
The method of claim 17, further comprising, for a given target object of the one or more target objects, projecting the indicator at a left or right end area of the windshield in response to the given target object being located out of a boundary for the windshield and approaching toward the vehicle (Murai, Figs. 2-4, Paragraphs [0028] and [0032], Based on the movement of a specified object potentially being in an emergent situation with the host vehicle, the system projects a warning via the HUD 14 regarding the specified object by utilizing at least one highlighting image. For example, in Fig. 2, the pedestrian 28 is located on a left side of the view from the windshield of the host vehicle, which is located outside of a preset boundary, e.g. represented by the road ahead of the host vehicle. In the same example, it may be determined by the detection unit 18, based on the traveling direction of the pedestrian 28 (see Murai, Paragraph [0028]), that the pedestrian 28 may be a cause of an emergent situation, and respective highlighting images may be projected in response thereto.).
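For illustration only, the edge-pinning behavior recited in claim 19 (and in claims 5 and 14 below) might be sketched as follows in Python; the angular boundary and screen dimensions are hypothetical assumptions.

```python
def edge_indicator(bearing_deg, fov_deg=40.0, screen_w_px=1280, margin_px=20):
    """Pin the indicator (e.g. an arrow) to the left or right end area of the
    windshield when the target lies outside the preset angular boundary."""
    half_fov = fov_deg / 2.0
    if bearing_deg < -half_fov:
        return ("left", margin_px)                  # left end area
    if bearing_deg > half_fov:
        return ("right", screen_w_px - margin_px)   # right end area
    return ("in_view", None)                        # target inside the boundary
```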
Claim 20, Murai further teaches:
The method of claim 17, wherein for a given target object of the one or more target objects, the projecting of the indicator comprises projecting the indicator in a color determined based on a speed of the given target object (Murai, Paragraphs [0043] and [0054], The highlighting images may be represented in different colors based on the kind of specified object. The highlighting images are generated in response to the detection unit 18 determining that the specified object is the source of an emergent situation (see Murai, Paragraph [0032]), wherein the specified object may be the source of the emergent situation based on the change of position of the specified object, i.e. the speed (see Murai, Paragraph [0028]). Therefore, the highlighting images, and their respective colors, are based on a speed of the specified object. It is noted that it appears the Applicant intends for the color to represent the speed of the target object; however, the claims do not inherently or explicitly define this aspect of the invention.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5, 8-10, and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Murai (U.S. 2019/0204598 A1).
Claim 5, Murai teaches:
The method of claim 1, further comprising projecting a highlighting image at a left or right end area of the windshield in response to the target object being determined as being located out of a preset boundary for the windshield and approaching the vehicle (Murai, Figs. 2-4, Paragraphs [0028] and [0032], Based on the movement of a specified object potentially being in an emergent situation with the host vehicle, the system projects a warning via the HUD 14 regarding the specified object by utilizing at least one highlighting image. For example, in Fig. 2, the pedestrian 28 is located on a left side of the view from the windshield of the host vehicle, which is located outside of a preset boundary, e.g. represented by the road ahead of the host vehicle. In the same example, it may be determined by the detection unit 18, based on the traveling direction of the pedestrian 28 (see Murai, Paragraph [0028]), that the pedestrian 28 may be a cause of an emergent situation, and respective highlighting images may be projected in response thereto.).
Murai does not explicitly teach:
An arrow or point.
However, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the shape of the highlighting image to be an arrow or a point, as a matter of engineering/design choice. Such a modification would not change the principal operation of the system, as a whole, which is to provide a highlighting image that is recognizable by the driver for warning the driver of an emergent situation. Therefore, modifying the shape of the highlighting images to an arrow or point would yield predictable results. See MPEP 2144.04.
Claim 8, Murai further teaches:
The method according to claim 1.
Murai does not explicitly teach:
A non-transitory computer-readable recording medium storing a program for executing the method according to claim 1.
However, it would have been obvious to one of ordinary skill in the art, at the time of filing, to apply Murai's teaching that the display control device 10 may be implemented in software as a program loaded into a memory (see Murai, Paragraph [0021]) to the display system 1 as a whole. Such a modification would not change the principal operation of the system and would yield predictable results.
Claim 9, Murai teaches:
A vehicle (Murai, Fig. 1, Paragraph [0020], The display system 1 of Fig. 1 is implemented in a host vehicle.), comprising:
a sensor (Murai, Fig. 1: 12);
a display unit (Murai, Fig. 1: 14); and
a processor (Murai, Fig. 1: 10) configured to control the display unit (Murai, Paragraph [0032], The controller 20 of display control device 10 causes the HUD 14 to project the highlighting image.), wherein a combination of the processor, the sensor, and the display unit is configured to:
sense driving information of the vehicle (Murai, Paragraph [0023], Vehicle sensor 12 may include a sensor that detects a traveling state of the host vehicle, e.g. a vehicle speed or a steering amount.) and surrounding information about the surroundings of the vehicle (Murai, Paragraph [0022], Vehicle sensor 12 may include a sensor that detects a situation outside the host vehicle, e.g. an on-board camera that captures an image of a region ahead of the host vehicle.);
collect the sensed driving information and the sensed surrounding information, and select a target object among a plurality of objects present before the vehicle based on the sensed driving information and the sensed surrounding information (Murai, Paragraph [0031], The detection unit 18, which is one functional block of display control device 10, determines whether an emergent situation has occurred, wherein the emergent situation is based on information regarding a specific object and information of the host vehicle (see Murai, Paragraphs [0027-0030]). For example, when a specific object is detected, information indicated by traffic signs may also be collected, and the detection unit 18 determines whether the driver of the host vehicle is highly likely to violate, or is currently violating, the information indicated by traffic signs, e.g. right turn prohibition. Therefore, the presence of the specific object and the information indicated by traffic signs are examples of surrounding information, and whether the driver is likely to violate and/or is violating the information is an example of driving information. As an example, Fig. 4 indicates two objects, including a pedestrian 36 and a vehicle 38.);
collect target information related to the target object (Murai, Paragraphs [0031-0032]);
set a target area on a windshield of the vehicle based on the target information (Murai, Paragraph [0032], When an emergent situation has been caused by the specific object, the system determines that a warning against the presence of the specific object needs to be presented to the driver, wherein the specific object is functionally determined to be a target object based on the emergent situation.); and
project an indicator for the target object in the target area of the windshield of the vehicle (Murai, Paragraph [0032], The controller 20 causes the HUD 14 to project a highlighting image onto the windshield of the host vehicle that is superimposed on the specific object. The location of the specific object within the window affects the location of the highlighting image, i.e. the target area (see Murai, Fig. 3A: 34a, Paragraph [0042] for example).).
Murai does not explicitly teach:
The processor configured to control the sensor.
However, it would have been obvious to one of ordinary skill in the art, at the time of filing, for the system to include a processor/controller for the vehicular sensor 12. For example, in the case where the vehicular sensor 12 is capable of detecting an object itself (see Murai, Paragraph [0027]), it would have been obvious to have a processor/controller to control the functions of the vehicular sensor 12. Such a modification would ensure the system functions for its intended purpose and would therefore yield predictable results.
Claim 10, Murai further teaches:
The vehicle of claim 9, wherein the target information comprises:
position information about longitudinal and lateral positions of the target object (Murai, Paragraph [0028], The direction of travel of a specified object may be tracked by the detection unit 18. Thus, the position of the specified object with respect to the field of view (see Murai, Fig. 2, for example) as well as the position of the specified object with respect to the road on which the host vehicle is traveling are both determined by the detection unit 18. The positions are functionally equivalent to longitudinal and lateral positions of the specified object.); and
size information of the target object (Murai, Fig. 4, Paragraph [0048], In the example of Fig. 4, the system identifies a pedestrian 36 and vehicle 38 as specified objects, and generates respective highlighting images 40 and 42, wherein the highlighting images 40 and 42 represent size information of their respective specified objects. In Fig. 4, the highlighting image 42 is larger than the highlighting image 40, which represents that the vehicle 38 is larger than the pedestrian 36.).
Claim 14, Murai teaches:
The vehicle of claim 9, wherein the combination of the processor, the sensor, and the display unit is configured to project a highlighting image at a left or right end area of the windshield in response to the target object being determined as being located out of a preset boundary for the windshield and approaching the vehicle (Murai, Figs. 2-4, Paragraphs [0028] and [0032], Based on the movement of a specified object potentially being in an emergent situation with the host vehicle, the system projects a warning via the HUD 14 regarding the specified object by utilizing at least one highlighting image. For example, in Fig. 2, the pedestrian 28 is located on a left side of the view from the windshield of the host vehicle, which is located outside of a preset boundary, e.g. represented by the road ahead of the host vehicle. In the same example, it may be determined by the detection unit 18, based on the traveling direction of the pedestrian 28 (see Murai, Paragraph [0028]), that the pedestrian 28 may be a cause of an emergent situation, and respective highlighting images may be projected in response thereto.).
Murai does not explicitly teach:
An arrow or point.
However, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the shape of the highlighting image to be an arrow or a point, as a matter of engineering/design choice. Such a modification would not change the principal operation of the system, as a whole, which is to provide a highlighting image that is recognizable by the driver for warning the driver of an emergent situation. Therefore, modifying the shape of the highlighting images to an arrow or point would yield predictable results. See MPEP 2144.04.
Claim 15, Murai further teaches:
The vehicle of claim 9, wherein the combination of the processor, the sensor, and the display unit is further configured to project the indicator in a color determined based on a speed of the target object (Murai, Paragraphs [0043] and [0054], The highlighting images may be represented in different colors based on the kind of specified object. The highlighting images are generated in response to the detection unit 18 determining that the specified object is the source of an emergent situation (see Murai, Paragraph [0032]), wherein the specified object may be the source of the emergent situation based on the change of position of the specified object, i.e. the speed (see Murai, Paragraph [0028]). Therefore, the highlighting images, and their respective colors, are based on a speed of the specified object. It is noted that it appears the Applicant intends for the color to represent the speed of the target object; however, the claims do not inherently or explicitly define this aspect of the invention.).
Claim 16, Murai further teaches:
The vehicle of claim 9, wherein the combination of the processor, the sensor, and the display unit is further configured to display a warning mark image within the target area (Murai, Fig. 2: 30b, Paragraphs [0037-0038], An example of a warning mark is a second highlighting image 30b which may be in text form.).
Claims 3-4, 11-13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Murai (U.S. 2019/0204598 A1) in view of Seder et al. (U.S. 2012/0093357 A1).
Claim 3, Murai teaches:
The method of claim 2.
Murai does not specifically teach:
Wherein the display unit comprises:
an auxiliary display unit comprising at least one light-emitting diode (LED) or laser device on a front surface thereof;
an operation unit disposed under the auxiliary display unit and configured to control an angle of the auxiliary display unit; and
an internal sensor disposed on a rear surface of the display unit or in the operation unit and configured to sense eyes of a driver of the vehicle, wherein the auxiliary display unit is disposed with the front surface thereof facing the windshield and in a parallel direction with respect to the windshield with a constant distance therebetween.
Seder teaches:
Wherein the display unit (Seder, Figs. 1 and 2: 155, 158, 150, The combination of elements 150, 155, and 158 responsible for projecting images onto the windscreen of the vehicle is interpreted as a display unit.) comprises:
an auxiliary display unit comprising at least one light-emitting diode (LED) or laser device on a front surface thereof (Seder, Fig. 1: 158, Paragraph [0014], The graphics projection system 158 includes a laser or projector device, e.g. Fig. 2: 20.);
an operation unit disposed under the auxiliary display unit and configured to control an angle of the auxiliary display unit (Seder, Fig. 1: 155, Paragraph [0014], The EVS graphics engine 155, as seen in Fig. 1, is located under graphics projection system 158 and controls the graphics projection system 158. The graphics projection system 158 projects onto a heads up display 150 at a fixed angle, e.g. the device 20 of Fig. 2 projects images onto substrate 14 at a particular angle.); and
an internal sensor disposed on a rear surface of the display unit or in the operation unit and configured to sense eyes of a driver of the vehicle (Seder, Fig. 1: 160, Paragraphs [0014-0015], The eye location sensing system 160 includes sensors to approximate a location of the head of an occupant and further the orientation or gaze location of the eyes of the occupant. As seen in Fig. 1, the eye location sensing system 160 is located behind elements 150, 155, and 158. It would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the location of the eye location sensing system 160 to be on a rear surface of at least one of the elements 155 or 158, as a matter of design choice. Such a modification would not change the operation of the eye location sensing system 160 and would therefore yield predictable results. See MPEP 2144.04.), wherein the auxiliary display unit is disposed with the front surface thereof facing the windshield and in a parallel direction with respect to the windshield with a constant distance therebetween (Seder, Fig. 1: 158, Paragraphs [0015] and [0018-0020], The graphics projection system 158 is configured to project onto the HUD 150 at a fixed angle. As seen in Fig. 2, which represents a device 20 that projects a laser onto substrate 14, the right side of device 20 faces the substrate 14, i.e. the windshield, wherein one of ordinary skill in the art would recognize that the lens associated with the device 20 would be substantially parallel with the substrate 14 in order to project the images 15 and 16 onto the substrate 14.).
Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Murai by integrating the teaching of the enhanced vision system (EVS), as taught by Seder.
The motivation would be to enable dynamic registration of images upon the HUD such that the images correspond to a view of the operator for improved accuracy (see Seder, Paragraph [0015]).
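For context only, the claim 3 arrangement could be modeled by the hypothetical Python classes below; the distance value and interface names are assumptions and are not taken from Seder.

```python
from dataclasses import dataclass

@dataclass
class AuxiliaryDisplay:
    """Claimed arrangement (modeled): an LED/laser emitter whose front surface
    faces the windshield, parallel to it at a constant distance."""
    distance_to_windshield_m: float = 0.15   # constant distance (assumed value)
    angle_deg: float = 0.0                   # controlled by the operation unit

class OperationUnit:
    """Sits under the auxiliary display unit and controls its angle."""
    def __init__(self, display: AuxiliaryDisplay):
        self.display = display

    def set_angle(self, angle_deg: float):
        self.display.angle_deg = angle_deg   # adjust the projection angle
```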
Claim 4, Murai in view of Seder further teaches:
The method of claim 3, wherein the projecting of the indicator comprises:
sensing, by the internal sensor, the eyes of the driver (Seder, Paragraphs [0014-0015]);
predicting, by the processor, a gaze of the driver based on the sensed eyes of the driver (Seder, Paragraphs [0014-0015], The location of an occupant’s head and the orientation or gaze location of the eyes of the occupant are determined based on an approximate location of the head of an occupant. Thus, the determined gaze is functionally equivalent to a predicted gaze based on the approximate location of the head.); and
projecting, by the auxiliary display unit, the indicator in the target area of the windshield based on the predicted gaze of the driver (Seder, Paragraphs [0014-0015], Determining the approximate gaze of the driver enables dynamic registration of images upon the HUD to the occupant.).
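As a geometric sketch only, gaze-registered projection of the kind attributed to Seder's eye location sensing system 160 can be approximated by intersecting the driver's line of sight with a plane standing in for the windshield; the pinhole model below is an assumption and ignores windshield curvature and projector optics.

```python
def gaze_registered_point(eye_xyz_m, target_xyz_m, windshield_z_m=1.0):
    """Return the (x, y) point where the line from the driver's eyes to the
    target crosses the plane z = windshield_z_m (pinhole approximation)."""
    ex, ey, ez = eye_xyz_m
    tx, ty, tz = target_xyz_m
    t = (windshield_z_m - ez) / (tz - ez)   # parametric position along the sight line
    return (ex + t * (tx - ex), ey + t * (ty - ey))

# Example: eyes 0.5 m behind the windshield plane, target 20 m ahead, 1 m left.
point = gaze_registered_point((0.0, 1.2, -0.5), (-1.0, 1.0, 20.0))
```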
Claim 11, Murai teaches:
The vehicle of claim 10.
Murai does not specifically teach:
Wherein the display unit comprises:
an auxiliary display unit comprising at least one light-emitting diode (LED) or laser device;
an operation unit configured to control an angle of the auxiliary display unit; and
an internal sensor configured to sense eyes of a driver.
Seder teaches:
Wherein the display unit (Seder, Figs. 1 and 2: 155, 158, 150, The combination of elements 150, 155, and 158 responsible for projecting images onto the windscreen of the vehicle is interpreted as a display unit.) comprises:
an auxiliary display unit comprising at least one light-emitting diode (LED) or laser device (Seder, Fig. 1: 158, Paragraph [0014], The graphics projection system 158 includes a laser or projector device, e.g. Fig. 2: 20.);
an operation unit configured to control an angle of the auxiliary display unit (Seder, Fig. 1: 155, Paragraph [0014], The EVS graphics engine 155, as seen in Fig. 1, controls the graphics projection system 158. The graphics projection system 158 projects onto a heads up display 150 at a fixed angle, e.g. the device 20 of Fig. 2 projects images onto substrate 14 at a particular angle.); and
an internal sensor configured to sense eyes of a driver (Seder, Fig. 1: 160, Paragraphs [0014-0015], The eye location sensing system 160 includes sensors to approximate a location of the head of an occupant and further the orientation or gaze location of the eyes of the occupant.).
Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Murai by integrating the teaching of the enhanced vision system (EVS), as taught by Seder.
The motivation would be to enable dynamic registration of images upon the HUD such that the images correspond to a view of the operator for improved accuracy (see Seder, Paragraph [0015]).
Claim 12, Murai in view of Seder further teaches:
The vehicle of claim 11, wherein the auxiliary display unit is disposed with a front surface thereof facing the windshield, and wherein the auxiliary display unit is disposed in a parallel direction with respect to the windshield with a constant distance therebetween (Seder, Fig. 1: 158, Paragraphs [0015] and [0018-0020], The graphics projection system 158 is configured to project onto the HUD 150 at a fixed angle. As seen in Fig. 2, which represents a device 20 that projects a laser onto substrate 14, the right side of device 20 faces the substrate 14, i.e. the windshield, wherein one of ordinary skill in the art would recognize that the lens associated with the device 20 would be substantially parallel with the substrate 14 in order to project the images 15 and 16 onto the substrate 14.).
Claim 13, Murai in view of Seder further teaches:
The vehicle of claim 12, wherein the combination of the processor, the sensor, and the display unit is further configured to:
sense the eyes of the driver using the internal sensor (Seder, Paragraphs [0014-0015]);
predict a gaze of the driver based on the sensed eyes of the driver (Seder, Paragraphs [0014-0015], The location of an occupant’s head and the orientation or gaze location of the eyes of the occupant are determined based on an approximate location of the head of an occupant. Thus, the determined gaze is functionally equivalent to a predicted gaze based on the approximate location of the head.); and
adjust the target area based on the predicted gaze of the driver (Seder, Paragraphs [0014-0015], Determining the approximate gaze of the driver enables dynamic registration of images upon the HUD to the occupant, which is functionally equivalent to adjusting the target area by adjusting the images projected onto the target area.).
Claim 18, Murai teaches:
The method of claim 17.
Murai does not specifically teach:
Further comprising:
sensing eyes of a driver;
predicting a gaze of the driver based on the sensing of the eyes of the driver; and
for each of the one or more target objects, adjusting the target area based on the predicted gaze of the driver.
Seder teaches:
Further comprising:
sensing eyes of a driver (Seder, Paragraphs [0014-0015]);
predicting a gaze of the driver based on the sensing of the eyes of the driver (Seder, Paragraphs [0014-0015], The location of an occupant’s head and the orientation or gaze location of the eyes of the occupant are determined based on an approximate location of the head of an occupant. Thus, the determined gaze is functionally equivalent to a predicted gaze based on the approximate location of the head.); and
for each of the one or more target objects, adjusting the target area based on the predicted gaze of the driver (Seder, Paragraphs [0014-0015], Determining the approximate gaze of the driver enables dynamic registration of images upon the HUD to the occupant, which is functionally equivalent to adjusting the target area by adjusting the images projected onto the target area.).
Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Murai by integrating the teaching of the enhanced vision system (EVS), as taught by Seder.
The motivation would be to enable dynamic registration of images upon the HUD such that the images correspond to a view of the operator for improved accuracy (see Seder, Paragraph [0015]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES J YANG whose telephone number is (571)270-5170. The examiner can normally be reached 9:30am-6:00pm M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN ZIMMERMAN can be reached at (571) 272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES J YANG/ Primary Examiner, Art Unit 2686