Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/3/25 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2008/0165084 A1 (hereinafter Giegold) in view of U.S. Patent Application Publication 2021/0233207 A1 (hereinafter Ha) in view of “Driver Gaze Tracking and Eyes Off the Road Detection System” by Francisco Vicente, et al. (hereinafter Vicente).
Regarding claim 1, the limitations “A method implemented by a first device, the method comprising: projecting an image … obtaining a first pre-distortion model … and correcting the image projected by the first device based on the first pre-distortion model” are taught by Giegold (Giegold, e.g. abstract, paragraphs 2-66, describes a system for correcting a projected image in a HUD system by applying a pre-distortion model to an image before projection onto the windshield of a vehicle. In particular, Giegold, e.g. paragraphs 2, 4, 13, 41, 42, 50, 52-54, indicates that the HUD system includes a projector and a computer performing control and image processing functions, i.e. the claimed first device, where the image processing includes applying a set of pre-distortion parameters to the image before projecting the resulting pre-distorted image, i.e. the claimed correcting of the projected image based on an obtained pre-distortion model.)
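For illustration only, pre-distortion of this kind is commonly implemented as an image remapping step; the following minimal Python sketch (hypothetical names and shapes, not drawn from Giegold) warps an image through a per-pixel pre-distortion map before projection:

    import cv2
    import numpy as np

    def predistort(image, map_x, map_y):
        # map_x/map_y give, for each output pixel, the source pixel to sample,
        # i.e. an inverse of the optical distortion introduced by the windshield.
        return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    # Identity maps for a 480x640 image (placeholders); a real system would load
    # maps produced during calibration for the current viewpoint.
    h, w = 480, 640
    map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                               np.arange(h, dtype=np.float32))
    frame = np.zeros((h, w, 3), dtype=np.uint8)
    corrected = predistort(frame, map_x, map_y)  # this image would be projected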
The limitations “receiving first position information from a second device, wherein the first position information comprises position information of a first feature in a … coordinate system … and wherein the first feature represents feature information of a user” and obtaining a first pre-distortion model “based on the first position information” are taught by Giegold (Giegold, e.g. paragraphs 55, 56, teaches that there may also be a camera-based system available for tracking the position and gaze direction of the driver’s eyes, i.e. the claimed receiving first position information from a second device, comprising position information of a first feature representing feature information of a user. Further, Giegold, e.g. paragraphs 13-19, 32-37, 42, 52-56, teaches that the position information may be used by the HUD during an operating phase to determine a set of pre-distortion parameters based on the received position and orientation relative to the positions/orientations of the viewpoints captured during the calibration phase, i.e. the received position/orientation information is determined in the claimed coordinate system, and is used to determine a set of pre-distortion parameters to apply to the image during the operating phase based on the sets of pre-distortion parameters determined during the calibration phase, i.e. as claimed, the first pre-distortion model used to correct the projected image is determined based on the first position information.)
The limitations “receiving first position information from a second device, wherein the first position information comprises position information of a first feature in a world coordinate system, wherein the first device is an origin of the world coordinate system and wherein the first feature represents feature information of a user” are not explicitly taught by Giegold (As discussed above, Giegold, e.g. paragraphs 13-19, 32-37, 42, 52-56, teaches that the position information may be used by the HUD during an operating phase to determine a set of pre-distortion parameters based on the received position and orientation relative to the positions/orientations of the viewpoints captured during the calibration phase, i.e. the received position/orientation information is determined in the claimed coordinate system. While one of ordinary skill in the art would understand that any coordinate system inherently comprises an origin coordinate, Giegold does not explicitly indicate that the origin coordinate is defined at the location of the display device, i.e. the claimed world coordinate system using the first device location as the origin.) However, this limitation is taught by Ha (Ha, e.g. abstract, paragraphs 45-92, describes a vehicle display system presenting augmented reality images on the windshield of the vehicle, including performing a warping transformation on images prior to display based on the detected eye positions of the driver, e.g. paragraphs 55-61, 78-85, i.e. analogous to both Giegold’s system and the claimed invention, Ha’s vehicle display pre-distorts an image for display based on the driver’s eye location. Further, Ha, e.g. paragraph 54, teaches that the coordinates received from the eye tracker may be relative coordinates based on a point of the display device, i.e. the claimed coordinate system using the first/display device location as the origin coordinate. Finally, the broadest reasonable interpretation of the “world” coordinate system does not require mapping to a planet-based coordinate system; rather, one of ordinary skill in the art would recognize that a “world coordinate system” can be interpreted to correspond to a base/global coordinate system used to relate a plurality of coordinate systems/frames of reference, e.g. as in the Vicente reference discussed below, where a world coordinate system relates the camera coordinate system to the tracked head and eye positions. Moreover, Ha, e.g. figure 7, paragraphs 86-88, also teaches that the display device screen plane may be mapped to the world coordinate system of a mapping system, using a location on the display screen plane as the world coordinate origin.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Giegold’s HUD system using Ha’s display relative eye tracker coordinates because Giegold does not indicate any preferred origin for the coordinate system used to represent the viewpoint position/orientation information, and Ha teaches that the eye tracker positions can be provided relative to a point on the display device, i.e. the claimed world coordinate system wherein the first device is the origin of the world coordinate system.
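The coordinate relationship relied upon above can be expressed compactly; the sketch below (hypothetical extrinsic values, assuming a known rigid transform between the eye tracker and the display) expresses a tracker-frame eye position relative to a display-device origin, consistent with Ha, paragraph 54:

    import numpy as np

    # Hypothetical extrinsics: rotation R and translation t mapping eye tracker
    # coordinates into a frame whose origin is a point on the display device.
    R = np.eye(3)                      # tracker and display axes aligned (assumed)
    t = np.array([0.10, -0.35, 0.50])  # tracker origin in the display frame (m)

    def to_display_frame(p_tracker):
        # p_display = R @ p_tracker + t: the eye position expressed relative
        # to the display device origin, i.e. the claimed world coordinate system.
        return R @ np.asarray(p_tracker) + t

    eye_display = to_display_frame([0.02, 0.08, 0.65])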
The limitations “wherein the first position information is based on feature position information and depth information, wherein the feature position information represents the position information of the feature information in second image information comprising the feature information of the user, and wherein the depth information represents a straight-line distance from the feature information of the user to the second device” are implicitly taught by Giegold in view of Ha (As discussed above, Giegold, e.g. paragraphs 55, 56, teaches that there may be a camera-based system available for tracking the position and gaze direction of the driver’s eyes, i.e. the claimed first position information representing the position of features in a second image of the user. Further, Giegold, e.g. paragraph 61, indicates that the range of eye positions used by the pre-distortion model does not have to be in a plane, i.e. the tracked position may be 3D. Ha, e.g. paragraphs 73-76, describes an analogous camera-based eye tracking system determining 3D positions of the user’s eyes, where, as discussed above, Ha, e.g. paragraph 54, teaches that the coordinates received by the display device from the eye tracker are relative to a point on the display device, i.e. Ha also teaches determining the 3D positions of the driver’s eyes from one or more images capturing the user’s face. Giegold and Ha thus both teach determining the claimed first position information based on feature position information, wherein the feature position information represents the position information of the feature information in second image information comprising the feature information of the user, but neither Giegold nor Ha explicitly describes how the third dimension of the eye tracking positions is determined by the eye tracker. While one of ordinary skill in the art would have found it implicit that the third dimension of the eye tracking positions determined by the camera-based eye tracker corresponds to the claimed depth information representing a straight-line distance from the feature information of the user to the second device, i.e. one of ordinary skill in the art would know that it is conventional for stereo-image-based 3D positioning to determine the 3D positions of detected features based on 2D image feature coordinates and a straight-line distance/depth from the camera(s) capturing the images, neither Giegold nor Ha explicitly states that the 3D position used to provide the display device relative eye coordinates of Ha, paragraph 54, is determined using a straight-line distance from the camera capturing the image to the position of the detected eyes/features as claimed.) However, this limitation is taught by Vicente (Vicente, e.g. abstract, sections I, III-IV, describes a system for tracking the positions and gaze direction of a driver using a camera facing the driver, e.g. figures 3, 9.
Vicente, section III, describes the technique in detail, including III A, capturing the images of the driver; III B, detecting features of the driver, including their eyes/pupils; III C-D, determining the head pose and 3D pupil position; and finally III E, determining the 3D gaze vector for a 3D eye position using the vector t’eye defined in the camera coordinate system, where the vector t’eye corresponds to the claimed feature position information representing the position information of the feature information in second image information comprising the feature information of the user and depth information representing a straight-line distance from the feature information of the user to the second device. That is, Vicente’s vector t’eye defined in the camera coordinate system corresponds to the pupil landmark location in the 2D image captured by the camera, and the length of t’eye represents the depth as a straight-line distance from the camera. Finally, analogous to Ha’s teaching of the eye tracker providing eye coordinate information relative to a point on the display device, Vicente, section III E, teaches that the eye vector and 3D gaze vector are transformed from the camera coordinate system to the world coordinate system using a rotation matrix, i.e. as claimed, the feature position/depth information is determined relative to the second image/second device, but provided to the display/first device after transforming to the world coordinate system used by the first device.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Giegold’s HUD system, using Ha’s display relative eye tracker coordinates, to use Vicente’s camera-based eye tracking system for tracking the position of the driver’s eyes because both Giegold and Ha teach the use of camera-based eye tracking systems at a relatively high level of detail, while Vicente provides a more detailed explanation of how such a camera-based eye tracking system operates, including the above-noted elements of determining the 3D eye position using the vector t’eye defined in the camera coordinate system and transforming the 3D eye position vector from the camera coordinate system to a world coordinate system. As noted above, one of ordinary skill in the art would know that it is conventional for stereo-image-based 3D positioning to determine the 3D positions of detected features based on 2D image feature coordinates and a straight-line distance/depth from the camera(s) capturing the images, and Vicente analogously teaches a camera-based eye tracking system determining eye/feature position coordinates and straight-line distance/depth. In Giegold’s modified system, Vicente’s eye tracking system would be used to determine the vector t’eye, as well as gaze direction, for both of the driver’s eyes, and to transform the eye position vectors from the camera coordinate system to Ha’s world coordinate system defined relative to a point on the display device, corresponding to the claimed receiving first position information in a world coordinate system, where the first position information is based on the feature position/depth information defined in the camera/second device coordinate system.
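The conventional back-projection referenced above can be sketched as follows (hypothetical pinhole camera intrinsics; an illustration of the general technique, not code from Vicente): a detected 2D pupil landmark plus a straight-line distance from the camera yields a 3D eye vector in the camera frame, which a rotation matrix then expresses in the world frame:

    import numpy as np

    # Hypothetical intrinsics of the driver-facing camera.
    fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0

    def eye_vector(u, v, distance):
        # Back-project the 2D pupil landmark (u, v) along its viewing ray and
        # scale the unit ray so its length equals the straight-line distance
        # from the camera to the eye (the claimed depth information).
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        return distance * ray / np.linalg.norm(ray)

    # Rotation from the camera frame to the world frame (identity here for
    # illustration; in practice obtained from extrinsic calibration).
    R_wc = np.eye(3)
    t_eye_cam = eye_vector(350.0, 260.0, 0.72)  # 0.72 m from camera (assumed)
    t_eye_world = R_wc @ t_eye_cam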
Regarding claim 12, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above, with Giegold, e.g. paragraphs 42, 52-54, 60 teaching that the computing unit of the HUD is a programmable processor having memory, i.e. the claimed processor executing instructions stored in a memory.
Regarding claim 16, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above, i.e. Giegold, paragraphs 55, 56, indicates the position data is of the driver’s eyes.
Claims 2, 3, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2008/0165084 A1 (hereinafter Giegold) in view of U.S. Patent Application Publication 2021/0233207 A1 (hereinafter Ha) in view of “Driver Gaze Tracking and Eyes Off the Road Detection System” by Francisco Vicente, et al. (hereinafter Vicente) as applied to claims 1 and 12 above, and further in view of “A Calibration Method for Automotive Augmented Reality Head-Up Displays Using a Chessboard and Warping Maps” by Xiang Gao, et al. (hereinafter Gao).
Regarding claim 2, the limitations “wherein obtaining the first pre-distortion model based on the first position information comprises: obtaining second position information based on the first position information, wherein the second position information is position information of multiple pieces of preset position information … wherein the preset position information is preset by the first device; and obtaining the first pre-distortion model corresponding to the second position information” are taught by Giegold (As discussed in the claim 1 rejection above, Giegold, e.g. paragraphs 13-19, 32-37, 42, 52-56, teaches that the position information may be used by the HUD during an operating phase to determine a set of pre-distortion parameters based on the received position and orientation relative to the positions/orientations of the viewpoints captured during the calibration phase, i.e. the positions/orientations of the viewpoints captured during the calibration phase correspond to the claimed multiple pieces of preset position information. Further, Giegold, e.g. paragraphs 20, 60, indicates that the calibration processing may be carried out at least partially by the HUD computing device, i.e. as claimed, the multiple pieces of preset position information are preset by the first device. Finally, Giegold, e.g. paragraphs 32, 33, indicates that the HUD may either directly use the pre-distortion parameters for a viewing position/orientation or interpolate between the pre-distortion parameters for intermediate viewing positions/orientations, i.e. the second position information determined based on the first position information comprises the multiple preset positions from the calibration phase, where the applied pre-distortion model is interpolated from the pre-distortion models corresponding to said multiple preset positions, i.e. as claimed, the first pre-distortion model is obtained based on the second position information/multiple preset positions, which are obtained based on the first position information.)
The limitation "wherein the second position information is position information of multiple pieces of preset position information having a distance in the world coordinate system from the first position information less than a preset threshold” is not explicitly taught by Giegold (As discussed above, Giegold, e.g. paragraphs 32, 33, 36 indicates that the HUD may either directly use the pre-distortion parameters for a viewing position/orientation or interpolate between the pre-distortion parameters for intermediate viewing position/orientations during the operating phase. Further, while Giegold, e.g. paragraphs 58, 60, indicates that the range of calibration viewpoint positions/orientations should include calibration viewpoints at the extremes and center of the eyebox, and one of ordinary skill in the art would understand that a large number of viewpoint positions could be used during calibration, i.e. more than the exemplary 5 calibration viewpoints, Giegold does not explicitly suggest that the pre-distortion parameter interpolation for intermediate viewing positions/orientations is based on a subset of the sets of pre-distortion parameters corresponding to the calibration viewpoints, based on a spatial distance, as claimed, or otherwise. Finally, it is noted that Ha, e.g. paragraphs 84, 85, teaches that predetermined pre-distortion/warping meshes may be interpolated for a target point, but does not explicitly teach that the selection of source pre-distortion/warping meshes is based on a spatial distance as claimed.) However, this limitation is taught by Gao (Gao, e.g. abstract, sections 1-6, describes a calibration method for a vehicular AR-HUD, analogous to Giegold’s system, including capturing images for calibration at a plurality of viewpoints spanning a driver eyebox, e.g. figures 2a, 2c, sections 3.1, 3.2, 4.1, performing calibration to determine a warping map for pre-distorting the projected image at each calibration viewpoint, e.g. sections 4.2-4.4, and finally the calibration is confirmed using test viewpoints at intermediate positions by identifying the 4 nearest neighbor calibration viewpoints and interpolating the corresponding warping maps to determine a warping map/pre-distortion model for the test viewpoint position, e.g. section 4.5. That is, Gao teaches that there may be a large number of calibration viewpoints spanning the driver eyebox, e.g. figure 2c, and that interpolation of the warping maps/pre-distortion parameters may be limited to the 4 nearest neighbor calibration viewpoints to the interpolated viewpoint, corresponding to the claimed second position information comprising multiple preset positions having a distance in the preset coordinate system from the first position information less than a preset threshold, i.e. each of Gao’s test viewpoint positions has a warping map determined by interpolating between the nearest calibration viewpoint in the 4 respective directions/quadrants of ±Y and ±Z. Applicant’s disclosure, e.g. paragraphs 144-146 indicates that the preset threshold is not necessarily a specific number but could be a selection of a smallest distance, such that Gao’s nearest neighbor selection corresponds to the claimed multiple pieces of preset position information having a distance in the preset coordinate system from the first position information less than a preset threshold.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Giegold’s HUD system, using Ha’s display relative eye tracker coordinates and Vicente’s camera-based eye tracking system, to use Gao’s 4-nearest-neighbor selection technique for selecting the calibration viewpoints used to interpolate the pre-distortion parameters/model for the detected position of the driver’s eyes in order to limit the number of calibration viewpoints used to perform the interpolation. That is, Giegold, e.g. paragraph 35, indicates that minimizing computational complexity during the operating phase is advantageous, and when a large number of calibration viewpoints are used, as in Gao’s example, one of ordinary skill in the art would recognize that interpolating between a limited number of calibration viewpoints would reduce computational complexity in comparison to interpolating between all of the calibration viewpoints. In Giegold’s modified system, when the received position of the driver’s eye is intermediate to the calibration viewpoints, interpolation of the pre-distortion parameters/models is performed, as in paragraph 33, using only the 4 nearest-neighbor calibration viewpoints to the received viewpoint position as taught by Gao, such that, as claimed, the second position information determined from the first position information comprises multiple pieces of preset position information having a distance in the world coordinate system from the first position information less than a preset threshold.
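A minimal sketch of the nearest-neighbor interpolation described above follows (hypothetical data layout; inverse-distance weighting is assumed for illustration and is not asserted to be Gao’s exact weighting):

    import numpy as np

    def interpolated_warp(eye_yz, calib_positions, calib_maps):
        # calib_positions: (N, 2) calibration viewpoints in the Y-Z plane of
        # the eyebox; calib_maps: list of N warping maps of shape (H, W, 2).
        d = np.linalg.norm(calib_positions - np.asarray(eye_yz), axis=1)
        nearest = np.argsort(d)[:4]           # the 4 nearest calibration viewpoints
        w = 1.0 / np.maximum(d[nearest], 1e-9)
        w /= w.sum()                          # normalized inverse-distance weights
        return sum(wi * calib_maps[i] for wi, i in zip(w, nearest))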
Regarding claim 3, the limitations “wherein prior to receiving the first position information from the second device, the method further comprises: receiving at least two pieces of first image information from a third device, wherein the at least two pieces of first image information represent information about images that are projected by the first device[;] obtaining standard image information representing a non-distorted projected image; separately comparing the at least two pieces of first image information with the standard image information … to obtain at least two first pre-distortion models, wherein the at least two first pre-distortion models are in a one-to-one correspondence with the first image information” are taught by Giegold and Gao (Giegold, e.g. paragraphs 23, 58, 59, 62, indicates that the calibration viewpoint images may be captured by an additional camera placed at the respective calibration viewpoints while the HUD is projecting a test image with little or no pre-distortion applied, i.e. the claimed receiving at least two pieces of first image information from a third device representing information about images projected by the first device/HUD. Analogously, Gao, sections 3, 4, figures 2a, 3, teaches that a smartphone captures images of a chessboard with control points projected thereon to perform the calibration, i.e. the claimed third device. Further, Giegold, e.g. paragraphs 58-63, indicates that the pre-distortion parameters are determined separately for each calibration viewpoint by comparing the captured images with the undistorted image data to be projected, i.e. as claimed, obtaining standard image information representing a non-distorted projected image and separately comparing the respective first images with the standard image information to determine a separate pre-distortion model/set of pre-distortion parameters, resulting in a one-to-one correspondence between the calibration viewpoints/first images and the sets of pre-distortion parameters/models.)
The limitation “separately comparing the at least two pieces of first image information with the standard image information to obtain at least two preset distortion amounts, wherein a preset distortion amount represents a distortion amount of the first image information relative to the standard image information; and separately performing calculation, based on the at least two preset distortion amounts, to obtain at least two first pre-distortion models, wherein the at least two first pre-distortion models are in a one-to-one correspondence with the first image information” is implicitly taught by Giegold (As discussed above, Giegold, e.g. paragraphs 58-63, indicates that the pre-distortion parameters are determined separately for each calibration viewpoint by comparing the captured images with the undistorted image data to be projected. While Giegold does not explicitly state that an amount of distortion, per se, is determined from the well-known comparison process in order to determine the pre-distortion parameters, one of ordinary skill in the art would have recognized that the claimed amount of distortion is what Giegold’s well-known comparison process determines, i.e. the result of the comparison process identifies differences between the captured and projected image, which are used to determine the pre-distortion parameters correcting the identified differences, such that the claimed determined amount of distortion is implicit, if not inherent, in Giegold’s well-known comparison process. This is explicitly taught by Gao, i.e. Gao, section 4.4, figures 4c-4f, show that the amount of distortion is measured in u and v directions within the captured images using the control points, and used to determine a warping map for pre-distorting images, i.e. one of ordinary skill in the art would recognize that Giegold’s well-known comparison process of paragraph 63 includes first determining an amount of distortion, as claimed, prior to determining pre-distortion parameters to correct for the determined amount of distortion.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Giegold’s HUD system, using Ha’s display relative eye tracker coordinates, using Vicente’s camera based eye tracking system, using Gao’s 4 nearest neighbor selection technique for selecting calibration viewpoints for interpolating the pre-distortion parameters/model, to determine the sets of pre-distortion parameters for each calibration viewpoint based on an amount of distortion determined by comparing the captured images and undistorted projection image, both because one of ordinary skill in the art would have recognized that the claimed amount of distortion is what Giegold’s well-known comparison process determines, and because Gao describes an analogous comparison process showing that the pre-distortion parameters/model, i.e. Gao’s warping map, is determined based on the measured amount of u and v distortion between the projected and detected control points. It is noted that this does not actually require further modification of Giegold’s system, rather, Gao explicitly discloses details which one of ordinary skill in the art would have recognized are implicit, if not inherent, in Giegold’s well-known comparison process.
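The comparison process discussed above reduces to measuring control-point displacements; the following sketch (hypothetical names, not drawn from Giegold or Gao) illustrates the determined amount of distortion and the corresponding pre-distortion targets:

    import numpy as np

    def distortion_amounts(detected_pts, reference_pts):
        # Per-control-point displacement (du, dv) of the captured image relative
        # to the undistorted reference image: the determined amount of distortion.
        return np.asarray(detected_pts) - np.asarray(reference_pts)

    def predistortion_targets(reference_pts, deltas):
        # Shifting each control point opposite to its measured displacement gives
        # target positions from which a dense warping map can be constructed
        # (e.g. by scattered-data interpolation over the image grid).
        return np.asarray(reference_pts) - deltas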
Regarding claim 13, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 2 above.
Regarding claim 14, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 3 above.
Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2008/0165084 A1 (hereinafter Giegold) in view of U.S. Patent Application Publication 2021/0233207 A1 (hereinafter Ha) in view of “Driver Gaze Tracking and Eyes Off the Road Detection System” by Francisco Vicente, et al. (hereinafter Vicente) as applied to claims 1 and 12 above, and further in view of U.S. Patent Application Publication 2016/0041386 A1 (hereinafter Moreno).
Regarding claim 4, the limitations “wherein obtaining the first pre-distortion model based on the first position information comprises: receiving gaze information from the second device … determining a first distortion amount based on the gaze information and the first position information … obtaining the first pre-distortion model based on … the first distortion amount” are taught by Giegold (As discussed in the claim 1 rejection above, Giegold, e.g. paragraphs 13-19, 32-37, 42, 52-56, teaches that the driver’s eye position and gaze orientation information may be used by the HUD during an operating phase to determine a set of pre-distortion parameters based on the received position and orientation, i.e. the information received from the eye tracking camera includes the claimed gaze information. Giegold further teaches that the received position and orientation may be used to determine a set of pre-distortion parameters based on a continuous function modeling the distortion characteristics mathematically, e.g. paragraphs 34, 35, 41, 65, i.e. the position and orientation are used to determine pre-distortion parameters correcting for an amount of distortion calculated at the position and orientation by the continuous function, corresponding to the claimed first pre-distortion model being determined based on an amount of distortion determined based on the position and gaze information.)
The limitation “wherein the first distortion amount represents a distortion amount of a human eye calibration image relative to a standard image, the human eye calibration image represents the image projected by the first device and that is presented in a human eye of the user, and the standard image is a non-distorted projected image” is taught by Giegold (As discussed above, Giegold, e.g. paragraphs 13-19, 32-37, 42, 52-56, teaches that the driver’s eye position and gaze orientation information may be used by the HUD during an operating phase to determine a set of pre-distortion parameters using a continuous function modeling the distortion characteristics mathematically, e.g. paragraphs 34, 35, 41, 65. Giegold, e.g. paragraphs 23, 58, 59, 62, indicates that calibration viewpoint images are captured by a camera at respective calibration viewpoints while the HUD is projecting a test image with little or no pre-distortion applied, and further, Giegold, e.g. paragraphs 58-63, indicates that the pre-distortion parameters are determined separately for each calibration viewpoint by comparing the captured images with the undistorted image data to be projected, i.e. the pre-distortion parameters at each calibration viewpoint represent the correction for a distortion amount of the claimed human eye calibration image relative to a standard image, where the image captured by the camera at the calibration viewpoint represents the image projected by the HUD as seen in a human eye of a user at the calibration viewpoint and the undistorted image data to be projected corresponds to the standard image/non-distorted projected image. Finally, Giegold, e.g. paragraphs 34, 41, 65, indicates that the continuous function modeling the distortion characteristics mathematically is determined from the sets of pre-distortion parameters determined for the calibration viewpoints, i.e. the pre-distortion parameters/model determined using the continuous function corrects for an amount of distortion representing the calibration viewpoints’ respective amounts of distortion, corresponding to the claimed first distortion amount.)
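For illustration, a continuous function of the kind discussed above can be as simple as a least-squares polynomial fit of each pre-distortion parameter over the calibration viewpoints (a hypothetical sketch with assumed values, not Giegold’s actual function):

    import numpy as np

    def fit_parameter_model(viewpoints, param_values, deg=2):
        # Fit one pre-distortion parameter as a polynomial in the viewpoint's
        # lateral position, so it can be evaluated continuously between the
        # calibration viewpoints during the operating phase.
        return np.polynomial.Polynomial.fit(viewpoints, param_values, deg)

    # Five calibration viewpoints (m) and a measured parameter at each (assumed).
    vp = np.array([-0.10, -0.05, 0.0, 0.05, 0.10])
    pv = np.array([1.8, 0.9, 0.0, -0.8, -1.7])
    model = fit_parameter_model(vp, pv)
    value_at_eye = model(0.024)  # parameter for an intermediate eye position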
The limitations “wherein the gaze information represents information about the user gazing at a reference point calibrated in the image projected by the first device; determining a first field of view range based on the gaze information, wherein the first field of view range represents a field of view range observed by a user” and “obtaining the first pre-distortion model based on the first field of view range and the first distortion amount” are not explicitly taught by Giegold (As discussed above, Giegold, e.g. paragraphs 13-19, 23, 32-37, 41, 42, 52-56, 59-65, teaches that the system receives both position and gaze direction information of the driver’s eyes, and determines the first pre-distortion model based on a distortion amount represented using a continuous function modeled on the calibration viewpoint images and the calibration viewpoints/orientations, but Giegold does not teach that the gaze direction of the driver includes/identifies a reference point calibrated in the projected image, i.e. although Giegold’s continuous function represents distortion amounts of calibrated images, meaning the points therein are calibrated reference points in the projected image, Giegold does not identify or use the point being gazed at, only a direction of the gaze. Further, Giegold does not discuss determining a field of view range based on the gaze information, or by extension, using the field of view range to determine the pre-distortion parameters for correcting the projected image.) However, these limitations are taught by Moreno (Moreno, e.g. abstract, paragraphs 14-40, describes a vehicle HUD system, analogous to Giegold’s system, which projects images onto the windshield, e.g. paragraphs 14-18, 21, tracks the occupants’ viewpoint position, gaze direction, and gaze position on the windshield, e.g. paragraphs 19, 20, 22, 24, 25, 35, 36, and determines updated distortion correction parameters for projecting an image at the gaze position based in part on a determined field of view defined using the viewpoint position/direction, e.g. paragraphs 23-26, 28-31, 36, 37, where, in the example of paragraph 26, the driver’s determined field of view based on the gaze vector 30 transitions from 15A to 15B between a first time and a second time. Moreno, e.g. paragraphs 28-31, 36, 37, teaches that display feedback images can be captured in order to dynamically calibrate the distortion correction performed by the HUD, where, in addition to correcting for distortion caused by the properties of the surface where the driver is looking as in paragraphs 28, 29, the dynamic calibration technique can also correct for detected occlusions between the viewpoint and the projected image/surface as well as glare or shading caused by external environmental conditions as in paragraphs 30, 31, i.e. Moreno determines pre-distortion parameters based on the field of view of the user/driver and based on an amount of distortion at the projected image/windshield region determined based on the received viewpoint position and gaze reference point.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Giegold’s HUD system, using Ha’s display relative eye tracker coordinates and Vicente’s camera-based eye tracking system, to use Moreno’s dynamic calibration technique in order to improve the visibility of the projected images for the driver’s detected viewpoint position/orientation, i.e. Moreno’s technique further accounts for obstructions or distortions based on the driver’s field of view relative to the display surface and on external environmental conditions. Giegold’s modified HUD system using Moreno’s dynamic calibration technique would, as noted above, determine pre-distortion parameters based on the field of view of the user/driver and based on an amount of distortion at the projected image/windshield region determined based on the received viewpoint position and gaze reference point, where the gaze reference point corresponds to the claimed reference point calibrated in an image projected by the first device, i.e. the reference point is determined on the projection surface within the coordinate system that is calibrated to the HUD’s image projection characteristics.
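The gaze reference point relied upon above can be sketched as a ray-plane intersection (hypothetical geometry; an illustration consistent with Moreno’s described use of viewpoint position and gaze direction, not Moreno’s code):

    import numpy as np

    def gaze_reference_point(eye_pos, gaze_dir, plane_pt, plane_n):
        # Intersect the gaze ray with a locally planar windshield surface to
        # find the reference point the driver is gazing at in the projected image.
        gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
        s = np.dot(plane_pt - eye_pos, plane_n) / np.dot(gaze_dir, plane_n)
        return eye_pos + s * gaze_dir

    # Assumed geometry: eye at the origin, windshield plane ahead and tilted.
    ref = gaze_reference_point(np.array([0.0, 0.0, 0.0]),
                               np.array([0.0, 0.1, 1.0]),   # gaze direction
                               np.array([0.0, 0.2, 0.8]),   # point on windshield
                               np.array([0.0, -0.5, -0.9])) # windshield normal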
Regarding claim 15, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 4 above.
Response to Arguments
Applicant’s arguments, see pages 11-13, filed 11/3/25, with respect to the rejection(s) of claim(s) 1-4 and 12-16 in view of Giegold, Ha, Gao, and Moreno, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Giegold, Ha, Vicente, Gao, and Moreno.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT BADER whose telephone number is (571)270-3335. The examiner can normally be reached Monday through Friday, 11:00 AM to 7:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT BADER/Primary Examiner, Art Unit 2611