Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Acknowledgement is made of the amendment filed 10 November 2025, in which claims 1, 5, 6, 8, 11, and 14-20 are amended and claim 13 is cancelled. Claims 1-12 and 14-20 are currently pending, and an Office action on the merits follows.
Inventorship
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2018/0253907 by Cashen et al. (“Cashen”) in view of U.S. Pub. No. 2020/0192109 by Lee et al. (“Lee”).
As to claim 1, Cashen discloses a system of displaying an augmented reality (AR) image on a viewable surface of a vehicle (Cashen, AR system 100 for use in a vehicle 110, Figure 1, ¶ [0046]),
the system comprising: a scanning projector (Cashen, picture generation unit 404 (PGU) may comprise… digital light projector (DLP), ¶ [0065]), and a processing circuitry comprising a processor (Cashen, graphics unit 300, Figure 2) and memory (Cashen, Warp maps may be stored in a memory unit, such as a machine readable storage medium, of a graphics unit 300, ¶ [0144]), the processing circuitry being configured to:
receive data indicative of a location of a target external to the vehicle (Cashen, The sensing unit 200 may use its plurality of sensors to monitor and scan an environment 104 to identify real-world targets 106. Importantly, sensing unit 200 may be configured to sense a given target's position. A target's sensed position may be forwarded to the graphics unit 300 as a target position input. A target's position input may represent the precise location or place of a particular target 106 relative to the FCM 202 or another desired location within the vehicle 110. The target position input may also be a predicted target position input, as will be described later in the disclosure. The target position input may be forwarded from the sensing unit 200 to the graphics unit 300 in coordinates, such as Cartesian or Spherical. Figure 1, ¶ [0049]);
determine a line-of-sight to the target based on, at least, an operator viewing position and an operator viewing orientation (Cashen, In the example in FIG. 21, the target 106 is a pedestrian located at Point X, which has a 3D coordinate position (x, y, z). The transform engine may also receive an input for the driver's eye position 670 in the form of 3D coordinates, such as (x, y, z). Here, the eye position 670 of the operator is at Point Y. The coordinates of the eye position 670 are relative to the tracking camera 204 (not shown). Design Data inputs, and in this case the Vehicle Layout data, may synch the target position input and the eye position input 670 such that they are both taken relative to one reference point, Figure 21, ¶ [0132])(Cashen, After the virtual display plane 422 has been identified, a vector is drawn between the eye position 670 and the target position. Here, if the pedestrian is known to be at Point X and the eye position is known to be at Point Y, the transform engine will compute a vector between Points X and Y. ¶ [0134]); and
Cashen does not expressly teach
control the scanning projector to project one or more pixels of the image onto a location substantially proximate to a point of intersection of the line-of-sight and the viewable surface,
the scanning projector being adapted to direct light of a light source toward an adjustable mirror, the adjustable mirror being controllable to reflect the light to display a pixel of the one or more pixels on the location substantially proximate to the point of intersection, and
the scanning projector being further adapted to project beams that, when reflected by the viewable surface, are substantially parallel, thereby resulting in an image appearing at infinity to the operator, without subsequent collimation.
Lee teaches a 3D display apparatus including
control the scanning projector (Lee, a projector may scan a light to an optical layer 220. The projector may include at least one laser scanning module 210 configured to scan a laser to the optical layer 220. Figures 1 and 2, ¶ [0074]) to project one or more pixels of the image onto a location substantially proximate to a point of intersection of the line-of-sight and the viewable surface (Lee, The 3D display apparatus 100 may include a projector 110 and the optical layer 120. The projector 110 may scan a light to the optical layer 120. The optical layer 120 may include a plurality of optical elements… The optical elements may also be referred to as 3D pixels. A 3D pixel may refract or reflect only a light of a particular wavelength, and transmit a light of a wavelength other than the particular wavelength. Figure 1, ¶ [0066]),
the scanning projector being adapted to direct light of a light source toward an adjustable mirror (Lee, scanning mirror 211, Figure 2), the adjustable mirror being controllable to reflect the light to display a pixel of the one or more pixels on the location substantially proximate to the point of intersection (Lee, The laser scanning module 210 may scan a line of a second direction, for example, a line at a time in a lateral direction, while moving a laser beam in a first direction toward the optical layer 220, for example, moving down from the top, by rotating the scanning mirror 211. The laser scanning module 210 may generate a 2D image on the optical layer 220 through laser beam scanning. The scanning mirror of the laser scanning module 210 may rotate at a predetermined interval to scan a laser beam to the optical layer 220. Figure 2, ¶ [0076]), and
the scanning projector being further adapted to project beams that, when reflected by the viewable surface, are substantially parallel, thereby resulting in an image appearing at infinity to the operator, without subsequent collimation (Lee, A plurality of beams 230 may be determined in a 3D space based on the 2D image represented on the optical layer 220. For example, the plurality of beams generated in the 3D space may change based on image information of the 2D image displayed on the optical layer 220. To output a beam of different information, for example, a different color, to a different position of the optical layer 220 in response to rotation of the scanning mirror, the image information and the scanning interval of the scanning mirror may be synchronized. Figure 2, ¶ [0077]). As shown in figure 2 of Lee, no collimation is utilized to present an image to the user. In addition, the beams 230 are substantially parallel, as shown in figure 2 of Lee. Examiner notes that the term “substantially” is broad and encompasses a large range of angles for the beams.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Cashen’s image projection system to include Lee’s 3D image projection system because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Cashen’s image projection system and Lee’s 3D image projection system perform the same general and predictable function, the predictable function being providing an image on a windshield for a user to view. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself – that is, in the substitution of Lee’s 3D image projection system for Cashen’s image projection system. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Thus, Cashen, as modified by Lee, teaches the laser scanning projector and rotating scanning mirror to present images on a windshield.
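For illustration only, and not as a characterization of either reference's actual implementation, the geometric chain described above (line-of-sight vector between Points X and Y per Cashen ¶ [0134], intersection with the viewable surface, and steering an adjustable mirror to place a pixel there) may be sketched as follows. Neither reference provides code; the coordinates, the flat plane standing in for the windshield, and the simplified mirror model are all assumptions:

```python
import numpy as np

def line_of_sight(eye_pos, target_pos):
    """Unit vector from the operator's eye (Point Y) toward the target (Point X)."""
    v = np.asarray(target_pos, dtype=float) - np.asarray(eye_pos, dtype=float)
    return v / np.linalg.norm(v)

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Point where the line-of-sight ray meets a plane (flat windshield stand-in)."""
    t = np.dot(plane_point - origin, plane_normal) / np.dot(direction, plane_normal)
    return origin + t * direction

def mirror_angles(glass_point, mirror_pos):
    """Yaw/pitch (radians) aiming the outgoing beam at glass_point.

    Simplified model: treats the two deflection angles as directly setting the
    beam direction; a real scanner would fold in the fixed source geometry.
    """
    d = glass_point - mirror_pos
    d = d / np.linalg.norm(d)
    return np.arctan2(d[0], d[2]), np.arcsin(d[1])

# Hypothetical coordinates in meters, all relative to one vehicle reference point.
eye = np.array([0.0, 1.2, 0.0])        # Point Y: driver's eye position
target = np.array([2.0, 1.0, 15.0])    # Point X: pedestrian ahead of the vehicle
los = line_of_sight(eye, target)

# A flat, tilted plane ~0.8 m ahead of the eye standing in for the windshield.
glass_point = intersect_plane(eye, los,
                              plane_point=np.array([0.0, 1.2, 0.8]),
                              plane_normal=np.array([0.0, 0.35, -0.94]))

# Steer the adjustable mirror so the pixel lands at the intersection point.
mirror = np.array([0.0, 0.7, 0.5])     # hypothetical projector location
yaw, pitch = mirror_angles(glass_point, mirror)
```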
As to claim 2, Cashen, as modified by Lee, teaches the system wherein the processing circuitry is configured to perform the controlling of the scanning projector responsive to a difference between a viewing orientation and a line-of-sight angle that does not exceed an operator field-of-view threshold (Lee, The laser scanning module 210 may scan a line of a second direction, for example, a line at a time in a lateral direction, while moving a laser beam in a first direction toward the optical layer 220, for example, moving down from the top, by rotating the scanning mirror 211. The laser scanning module 210 may generate a 2D image on the optical layer 220 through laser beam scanning. The scanning mirror of the laser scanning module 210 may rotate at a predetermined interval to scan a laser beam to the optical layer 220. Figure 2, ¶ [0076]) (Cashen, Referring now to FIG. 31, as mentioned previously, a given operator movement box 672 may contain a number of eye boxes 674. For instance, there could be three eye boxes 674 (for small S, medium M, and tall T drivers) or five eye boxes 674 (for small, medium-small, medium, medium-tall, and tall) in a given operator movement box 672. Each eye box 674 may contain a number of driver eye positions 676 (e.g., an eye box 674 with five rows by five columns would have twenty-five driver eye positions 676). As illustrated in FIG. 31, in an embodiment where calibrated predistortion data is combined with predictive predistortion data, calibrated warp maps 900 need only be calculated for one driver eye position 676 of each driver eye box 674. Preferably, the center driver eye position 676 or a driver eye position 676 near the center of an eye box 674 is chosen to be calibrated. In this example, only the center driver eye positions 676 of each eye box 674 have calibrated warp maps. It will be appreciated that more than one driver eye position 676 per eye box 674 may be calibrated, as this may improve image warping capabilities. In fact, all driver eye positions 676 within an eye box 674 may be calibrated. Figure 31, ¶ [0168]). Cashen teaches separate eye boxes, which are based on the user’s height, to provide a proper AR image viewable within the user’s field-of-view. In addition, the motivation used is the same as in the rejection of claim 1.
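A minimal sketch of this style of field-of-view gating (the unit-vector representation and the threshold value are assumptions, not taken from either reference): project the pixel only when the angle between the operator's viewing orientation and the line-of-sight stays within the threshold.

```python
import numpy as np

def within_fov(view_dir, los_dir, fov_threshold_deg=30.0):
    """True when the gaze-to-line-of-sight angular difference is within the threshold.

    Both inputs are assumed to be unit vectors.
    """
    cos_angle = np.clip(np.dot(view_dir, los_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= fov_threshold_deg

# Example: gaze straight ahead, target slightly left -- within a 30-degree FOV.
ok = within_fov(np.array([0.0, 0.0, 1.0]), np.array([0.13, -0.01, 0.99]))
```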
As to claim 3, Cashen, as modified by Lee, teaches the system wherein the operator viewing position is in accordance with, at least, a position of an operator's seat in a vehicle compartment (Cashen, The alignment system 500 may also account for and correct the following dynamic error factors 650: curved mirror position 660, eye position 670, vehicle orientation 680, kinematics 690, and latency 700. Other dynamic error factors 650 are also possible, including vehicle seat translation (forward and backward adjustments), vehicle seat back reclining movement, et cetera. Figure 6, ¶ [0060]).
As to claim 4, Cashen, as modified by Lee, teaches the system wherein the operator viewing orientation is in accordance with, at least, an assumed orientation of the operator’s gaze (Cashen, if a driver moves his or her head to the right to gain a better perspective of a target 106 to the left of the vehicle 110, the tracking camera 204 may be configured to capture the head and/or eye movement to the right. ¶ [0053]) (Cashen, The display unit 400 takes the final graphic output, which has been corrected for misalignment and distortion by the alignment system 500, and projects an image onto an optical element. In FIG. 1, the optical element is a windshield 102. The rays projected by the display unit 400 are shown reflecting off of the windshield 102 toward the operator's eyes 108. As the windshield 102 has reflective properties, the reflected images create a virtual image on a virtual display plane 422 at an image distance from the windshield 102. ¶ [0063]).
As to claim 5, Cashen, as modified by Lee, teaches the system wherein the assumed gaze orientation is towards the target (Cashen, The display unit 400 takes the final graphic output, which has been corrected for misalignment and distortion by the alignment system 500, and projects an image onto an optical element. In FIG. 1, the optical element is a windshield 102. The rays projected by the display unit 400 are shown reflecting off of the windshield 102 toward the operator's eyes 108. As the windshield 102 has reflective properties, the reflected images create a virtual image on a virtual display plane 422 at an image distance from the windshield 102. ¶ [0063]). As shown in figure 1 of Cashen, the user’s eye is directed forward towards the target through the windshield.
As to claim 6, Cashen, as modified by Lee, teaches the system wherein the assumed gaze orientation is forward (Cashen, The display unit 400 takes the final graphic output, which has been corrected for misalignment and distortion by the alignment system 500, and projects an image onto an optical element. In FIG. 1, the optical element is a windshield 102. The rays projected by the display unit 400 are shown reflecting off of the windshield 102 toward the operator's eyes 108. As the windshield 102 has reflective properties, the reflected images create a virtual image on a virtual display plane 422 at an image distance from the windshield 102. ¶ [0063]). As shown in figure 1 of Cashen, the user’s eye is directed forward towards the target through the windshield.
As to claim 8, Cashen, as modified by Lee, teaches the system wherein the operator viewing position is a head position that is provided by an operator view-tracking subsystem in accordance with, at least, data from cameras monitoring the operator’s head (Cashen, The tracking camera 204 may capture eye position and/or head position, and may report the location of the operator's eyes and/or head to the graphics unit 300. FIG. 1 depicts an exemplary tracking camera 204 capturing an operator's eye position. The position of the operator's eyes may be given in Cartesian or Spherical coordinates, for example. ¶ [0051]).
As to claim 9, Cashen, as modified by Lee, teaches the system wherein the operator viewing orientation is a head orientation that is provided by an operator view-tracking subsystem in accordance with, at least, data from cameras monitoring a direction of the operator’s head (Cashen, The tracking camera 204 may capture eye position and/or head position, and may report the location of the operator's eyes and/or head to the graphics unit 300. FIG. 1 depicts an exemplary tracking camera 204 capturing an operator's eye position. The position of the operator's eyes may be given in Cartesian or Spherical coordinates, for example. ¶ [0051]).
As to claim 10, Cashen, as modified by Lee, teaches the system wherein the operator viewing orientation is a head orientation that is provided by an operator view-tracking subsystem in accordance with data from cameras monitoring a direction of the operator’s pupils (Cashen, The tracking camera 204 may capture eye position and/or head position, and may report the location of the operator's eyes and/or head to the graphics unit 300. FIG. 1 depicts an exemplary tracking camera 204 capturing an operator's eye position. The position of the operator's eyes may be given in Cartesian or Spherical coordinates, for example. ¶ [0051]).
As to claim 11, Cashen, as modified by Lee, teaches the system wherein the processing circuitry is further configured to:
receive additional data indicative of at least one of a set comprising:
an updated location of the target,
an updated operator viewing position, and
an updated operator viewing orientation (Cashen, Kinematics inputs 690 that may be tracked include a vehicle's speed and direction (i.e., the vehicle's velocity), a target's speed and direction (i.e., the target's velocity), and with this information, a relative velocity may be calculated. Accordingly, where the following inputs are known: a target's initial position, the relative velocity between a vehicle 110 and a target 106, and the latency 700 of the system; a target's position relative to the vehicle 110 may be predicted at a future predetermined time. ¶ [0115])(Cashen, kinematics and latency inputs 690, 700, in combination, may be used as tools to predict a change in a target's position such that a target's sensed position may be adjusted to reflect a predicted target position in the 3D graphics generation stage. Moreover, system latency 700 can further be tracked during the graphics generation stage to allow for the warping processor 502 to correct for any image deflection caused by latency during this period. A target's position may be predicted in the FCM 202 of the sensing unit 200, in the graphics unit 300, other processing modules, or a combination thereof. ¶ [0117]),
determine an updated line-of-sight to the target (Cashen, kinematics and latency inputs 690, 700, in combination, may be used as tools to predict a change in a target's position such that a target's sensed position may be adjusted to reflect a predicted target position in the 3D graphics generation stage. Moreover, system latency 700 can further be tracked during the graphics generation stage to allow for the warping processor 502 to correct for any image deflection caused by latency during this period. A target's position may be predicted in the FCM 202 of the sensing unit 200, in the graphics unit 300, other processing modules, or a combination thereof. ¶ [0117]); and
further control the scanning projector to display the AR image on a location of the viewable surface that is located substantially along the updated line-of-sight (Cashen, The display unit 400 takes the final graphic output, which has been corrected for misalignment and distortion by the alignment system 500, and projects an image onto an optical element. In FIG. 1, the optical element is a windshield 102. The rays projected by the display unit 400 are shown reflecting off of the windshield 102 toward the operator's eyes 108. As the windshield 102 has reflective properties, the reflected images create a virtual image on a virtual display plane 422 at an image distance from the windshield 102. ¶ [0063]) (Lee, The laser scanning module 210 may scan a line of a second direction, for example, a line at a time in a lateral direction, while moving a laser beam in a first direction toward the optical layer 220, for example, moving down from the top, by rotating the scanning mirror 211. The laser scanning module 210 may generate a 2D image on the optical layer 220 through laser beam scanning. The scanning mirror of the laser scanning module 210 may rotate at a predetermined interval to scan a laser beam to the optical layer 220. Figure 2, ¶ [0076]). In addition, the motivation used is the same as in the rejection of claim 1.
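The kinematics/latency prediction Cashen describes (¶¶ [0115], [0117]) reduces to extrapolating the sensed target position by the relative velocity over the system latency. A minimal sketch with hypothetical values (the numbers and names below are illustrative assumptions):

```python
import numpy as np

def predict_target(sensed_pos, relative_velocity, latency_s):
    """Predicted target position once the system latency has elapsed."""
    return np.asarray(sensed_pos, dtype=float) + np.asarray(relative_velocity, dtype=float) * latency_s

sensed = np.array([2.0, 1.0, 15.0])      # meters, vehicle frame
rel_vel = np.array([0.0, 0.0, -10.0])    # target closing at 10 m/s
predicted = predict_target(sensed, rel_vel, latency_s=0.05)  # z: 15.0 -> 14.5 m
# The updated line-of-sight would then be recomputed from this predicted position.
```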
As to claim 12, Cashen, as modified by Lee, teaches the system wherein the processing circuitry is further configured to control the scanning projector to display additional image data on the viewable surface (Cashen, A number of targets 106 could be highlighted by the AR system 100. Target 1 could be a left turn indication. The AR system 100 could highlight the left turn lane and project an arrow within the operator's FOV designating that a left turn should be made. Target 2 could be an indication instructing that there is a four-way stop sign at the intersection, and the stop sign could be highlighted. Target 3 could be an indication that a biker is swerving from the right lane into the left turn lane just in front of the operator's vehicle 110; the biker could be highlighted. An alert status for the biker could also be presented to the operator, such as a red flashing outline of the biker's body indicating that a collision with the biker may be imminent. Any number of Targets N may be tracked by sensing unit 200. AR system 100 may be configured to prioritize certain targets 106, as highlighting too many targets 106 may decrease an AR system's effectiveness. Figure 2, ¶ [0050]) (Lee, The laser scanning module 210 may scan a line of a second direction, for example, a line at a time in a lateral direction, while moving a laser beam in a first direction toward the optical layer 220, for example, moving down from the top, by rotating the scanning mirror 211. The laser scanning module 210 may generate a 2D image on the optical layer 220 through laser beam scanning. The scanning mirror of the laser scanning module 210 may rotate at a predetermined interval to scan a laser beam to the optical layer 220. Figure 2, ¶ [0076]). In addition, the motivation used is the same as in the rejection of claim 1.
As to claim 14, Cashen, as modified by Lee, teaches the system wherein the scanning projector comprises a laser (Lee, The laser scanning module 210 may scan a line of a second direction, for example, a line at a time in a lateral direction, while moving a laser beam in a first direction toward the optical layer 220, for example, moving down from the top, by rotating the scanning mirror 211. The laser scanning module 210 may generate a 2D image on the optical layer 220 through laser beam scanning. The scanning mirror of the laser scanning module 210 may rotate at a predetermined interval to scan a laser beam to the optical layer 220. Figure 2, ¶ [0076]). In addition, the motivation used is the same as in the rejection of claim 1.
As to claim 15, Cashen, as modified by Lee, teaches the system wherein the scanning projector comprises one or more microelectromechanical system (MEMS) scanning mirrors configured to reflect light from the laser (Lee, The scanning mirror 640 may be manufactured using micro electro mechanical system (MEMS) technology, and generate a 2D image by scanning laser beams focused on a single point to the optical layer using two driving axes. ¶ [0096]). In addition, the motivation used is the same as in the rejection of claim 1.
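A rough sketch of the two-axis raster pattern Lee attributes to the MEMS mirror (¶¶ [0076], [0096]): sweep one lateral line at a time while stepping down from the top, with the pixel stream synchronized to the mirror's scanning interval. The resolution and angular ranges here are assumptions:

```python
import numpy as np

def raster_scan(image, h_fov_deg=40.0, v_fov_deg=20.0):
    """Yield (yaw_deg, pitch_deg, pixel) triples for one scanned frame."""
    rows, cols = image.shape[:2]
    pitches = np.linspace(v_fov_deg / 2.0, -v_fov_deg / 2.0, rows)  # top to bottom
    yaws = np.linspace(-h_fov_deg / 2.0, h_fov_deg / 2.0, cols)     # left to right
    for r in range(rows):            # move down from the top...
        for c in range(cols):        # ...scanning one lateral line at a time
            yield yaws[c], pitches[r], image[r, c]

# Example: a tiny 4x8 single-channel frame.
frame = np.zeros((4, 8), dtype=np.uint8)
samples = list(raster_scan(frame))
```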
As to claim 16, Cashen, as modified by Lee, teaches the system wherein the scanning projector comprises beam-shaping optics adapted to render the projected beams as substantially parallel (Lee, A plurality of beams 230 may be determined in a 3D space based on the 2D image represented on the optical layer 220. For example, the plurality of beams generated in the 3D space may change based on image information of the 2D image displayed on the optical layer 220. To output a beam of different information, for example, a different color, to a different position of the optical layer 220 in response to rotation of the scanning mirror, the image information and the scanning interval of the scanning mirror may be synchronized. Figure 2, ¶ [0077]). The beams 230 are substantially parallel based on the reflection/refraction off the optical layer 220, as shown in figure 2 of Lee. Examiner notes that the term “substantially” is broad and encompasses a large range of angles for the beams. In addition, the motivation used is the same as in the rejection of claim 1.
As to claim 17, Cashen, as modified by Lee, teaches the system wherein the scanning projector is suitable for displaying the AR image on a viewable surface that is not flat (Cashen, the graphics unit 300 may communicate directly with a picture generation unit 404 (PGU) that may operably project images onto a windshield 102 such that a virtual image is rendered within an operator's FOV. Figure 5, ¶ [0065]). The windshield 102, on which the images are produced, is a curved surface.
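As an illustration of the warp maps Cashen stores in memory (¶ [0144]) to compensate for the curved windshield, a minimal sketch of per-pixel remapping; the map contents, sizes, and names below are made up for illustration:

```python
import numpy as np

def apply_warp(image, warp_map):
    """Resample image using a (rows, cols, 2) table of integer source coordinates."""
    rows, cols = image.shape[:2]
    out = np.zeros_like(image)
    for r in range(rows):
        for c in range(cols):
            sr, sc = warp_map[r, c]
            if 0 <= sr < rows and 0 <= sc < cols:   # ignore sources off the image
                out[r, c] = image[sr, sc]
    return out

# Identity map as a placeholder; a calibrated map would predistort each pixel.
h, w = 4, 8
warp = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
image = np.zeros((h, w), dtype=np.uint8)
corrected = apply_warp(image, warp)
```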
As to claim 18, Cashen discloses a processing circuitry-based method of displaying an augmented reality (AR) image on a viewable surface of a vehicle (Cashen, AR system 100 for use in a vehicle 110, Figure 1, ¶ [0046]), the method comprising:
receiving data indicative of a location of a target external to the vehicle (Cashen, The sensing unit 200 may use its plurality of sensors to monitor and scan an environment 104 to identify real-world targets 106. Importantly, sensing unit 200 may be configured to sense a given target's position. A target's sensed position may be forwarded to the graphics unit 300 as a target position input. A target's position input may represent the precise location or place of a particular target 106 relative to the FCM 202 or another desired location within the vehicle 110. The target position input may also be a predicted target position input, as will be described later in the disclosure. The target position input may be forwarded from the sensing unit 200 to the graphics unit 300 in coordinates, such as Cartesian or Spherical. Figure 1, ¶ [0049]);
determining a line-of-sight to the target based on, at least, an operator viewing position and an operator viewing orientation (Cashen, In the example in FIG. 21, the target 106 is a pedestrian located at Point X, which has a 3D coordinate position (x, y, z). The transform engine may also receive an input for the driver's eye position 670 in the form of 3D coordinates, such as (x, y, z). Here, the eye position 670 of the operator is at Point Y. The coordinates of the eye position 670 are relative to the tracking camera 204 (not shown). Design Data inputs, and in this case the Vehicle Layout data, may synch the target position input and the eye position input 670 such that they are both taken relative to one reference point, Figure 21, ¶ [0132])(Cashen, After the virtual display plane 422 has been identified, a vector is drawn between the eye position 670 and the target position. Here, if the pedestrian is known to be at Point X and the eye position is known to be at Point Y, the transform engine will compute a vector between Points X and Y. ¶ [0134]); and
Cashen does not expressly disclose
controlling a scanning projector to project one or more pixels of the AR image onto a location substantially proximate to a point of intersection of the line-of-sight and the viewable surface,
the scanning projector being adapted to direct light of a light source toward an adjustable mirror, the adjustable mirror being controllable to reflect the light to display a pixel of the one or more pixels on the location substantially proximate to the point of intersection, and
the scanning projector being further adapted to project beams that, when reflected by the viewable surface, are substantially parallel, thereby resulting in an image appearing at infinity to the operator, without subsequent collimation.
Lee teaches a 3D display apparatus including
controlling a scanning projector (Lee, a projector may scan a light to an optical layer 220. The projector may include at least one laser scanning module 210 configured to scan a laser to the optical layer 220. Figures 1 and 2, ¶ [0074]) to project one or more pixels of the AR image onto a location substantially proximate to a point of intersection of the line-of-sight and the viewable surface (Lee, The 3D display apparatus 100 may include a projector 110 and the optical layer 120. The projector 110 may scan a light to the optical layer 120. The optical layer 120 may include a plurality of optical elements… The optical elements may also be referred to as 3D pixels. A 3D pixel may refract or reflect only a light of a particular wavelength, and transmit a light of a wavelength other than the particular wavelength. Figure 1, ¶ [0066]),
the scanning projector being adapted to direct light of a light source toward an adjustable mirror (Lee, scanning mirror 211, Figure 2), the adjustable mirror being controllable to reflect the light to display a pixel of the one or more pixels on the location substantially proximate to the point of intersection (Lee, The laser scanning module 210 may scan a line of a second direction, for example, a line at a time in a lateral direction, while moving a laser beam in a first direction toward the optical layer 220, for example, moving down from the top, by rotating the scanning mirror 211. The laser scanning module 210 may generate a 2D image on the optical layer 220 through laser beam scanning. The scanning mirror of the laser scanning module 210 may rotate at a predetermined interval to scan a laser beam to the optical layer 220. Figure 2, ¶ [0076]), and
the scanning projector being further adapted to project beams that, when reflected by the viewable surface, are substantially parallel, thereby resulting in an image appearing at infinity to the operator, without subsequent collimation (Lee, A plurality of beams 230 may be determined in a 3D space based on the 2D image represented on the optical layer 220. For example, the plurality of beams generated in the 3D space may change based on image information of the 2D image displayed on the optical layer 220. To output a beam of different information, for example, a different color, to a different position of the optical layer 220 in response to rotation of the scanning mirror, the image information and the scanning interval of the scanning mirror may be synchronized. Figure 2, ¶ [0077]). As shown in figure 2 of Lee, no collimation is utilized to present an image to the user. In addition, the beams 230 are substantially parallel, as shown in figure 2 of Lee. Examiner notes that the term “substantially” is broad and encompasses a large range of angles for the beams.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Cashen’s image projection system to include Lee’s 3D image projection system because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Cashen’s image projection system and Lee’s 3D image projection system perform the same general and predictable function, the predictable function being providing an image on a windshield for a user to view. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself – that is, in the substitution of Lee’s 3D image projection system for Cashen’s image projection system. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Thus, Cashen, as modified by Lee, teaches the laser scanning projector and rotating scanning mirror to present images on a windshield.
As to claim 19, Cashen, as modified by Lee, teaches the method wherein the scanning projector comprises beam-shaping optics adapted to render the projected beams as substantially parallel (Lee, A plurality of beams 230 may be determined in a 3D space based on the 2D image represented on the optical layer 220. For example, the plurality of beams generated in the 3D space may change based on image information of the 2D image displayed on the optical layer 220. To output a beam of different information, for example, a different color, to a different position of the optical layer 220 in response to rotation of the scanning mirror, the image information and the scanning interval of the scanning mirror may be synchronized. Figure 2, ¶ [0077]). The beams 230 are substantially parallel based on the reflection/refraction off the optical layer 220, as shown in figure 2 of Lee. Examiner notes that the term “substantially” is broad and encompasses a large range of angles for the beams. In addition, the motivation used is the same as in the rejection of claim 18.
As to claim 20, Cashen discloses a computer program product comprising a non-transitory computer readable storage medium retaining program instructions, which, when read by a processing circuitry, cause the processing circuitry to perform a computerized method of displaying an augmented reality (AR) image on a viewable surface of a vehicle (Cashen, AR system 100 for use in a vehicle 110, Figure 1, ¶ [0046]), the method comprising:
receiving data indicative of a location of a target external to the vehicle (Cashen, The sensing unit 200 may use its plurality of sensors to monitor and scan an environment 104 to identify real-world targets 106. Importantly, sensing unit 200 may be configured to sense a given target's position. A target's sensed position may be forwarded to the graphics unit 300 as a target position input. A target's position input may represent the precise location or place of a particular target 106 relative to the FCM 202 or another desired location within the vehicle 110. The target position input may also be a predicted target position input, as will be described later in the disclosure. The target position input may be forwarded from the sensing unit 200 to the graphics unit 300 in coordinates, such as Cartesian or Spherical. Figure 1, ¶ [0049]);
determining a line-of-sight to the target based on, at least, an operator viewing position and an operator viewing orientation (Cashen, In the example in FIG. 21, the target 106 is a pedestrian located at Point X, which has a 3D coordinate position (x, y, z). The transform engine may also receive an input for the driver's eye position 670 in the form of 3D coordinates, such as (x, y, z). Here, the eye position 670 of the operator is at Point Y. The coordinates of the eye position 670 are relative to the tracking camera 204 (not shown). Design Data inputs, and in this case the Vehicle Layout data, may synch the target position input and the eye position input 670 such that they are both taken relative to one reference point, Figure 21, ¶ [0132])(Cashen, After the virtual display plane 422 has been identified, a vector is drawn between the eye position 670 and the target position. Here, if the pedestrian is known to be at Point X and the eye position is known to be at Point Y, the transform engine will compute a vector between Points X and Y. ¶ [0134]); and
Cashen does not expressly disclose
controlling a scanning projector to project one or more pixels of the image onto a location substantially proximate to a point of intersection of the line-of-sight and the viewable surface,
the scanning projector being adapted to direct light of a light source toward an adjustable mirror, the adjustable mirror being controllable to reflect the light to display a pixel of the one or more pixels on the location substantially proximate to the point of intersection, and
the scanning projector being further adapted to project beams that, when reflected by the viewable surface, are substantially parallel, thereby resulting in an image appearing at infinity to the operator, without subsequent collimation.
Lee teaches a 3D display apparatus including
controlling a scanning projector (Lee, a projector may scan a light to an optical layer 220. The projector may include at least one laser scanning module 210 configured to scan a laser to the optical layer 220. Figures 1 and 2, ¶ [0074]) to project one or more pixels of the image onto a location substantially proximate to a point of intersection of the line-of-sight and the viewable surface (Lee, The 3D display apparatus 100 may include a projector 110 and the optical layer 120. The projector 110 may scan a light to the optical layer 120. The optical layer 120 may include a plurality of optical elements… The optical elements may also be referred to as 3D pixels. A 3D pixel may refract or reflect only a light of a particular wavelength, and transmit a light of a wavelength other than the particular wavelength. Figure 1, ¶ [0066]),
the scanning projector being adapted to direct light of a light source toward an adjustable mirror (Lee, scanning mirror 211, Figure 2), the adjustable mirror being controllable to reflect the light to display a pixel of the one or more pixels on the location substantially proximate to the point of intersection (Lee, The laser scanning module 210 may scan a line of a second direction, for example, a line at a time in a lateral direction, while moving a laser beam in a first direction toward the optical layer 220, for example, moving down from the top, by rotating the scanning mirror 211. The laser scanning module 210 may generate a 2D image on the optical layer 220 through laser beam scanning. The scanning mirror of the laser scanning module 210 may rotate at a predetermined interval to scan a laser beam to the optical layer 220. Figure 2, ¶ [0076]), and
the scanning projector being further adapted to project beams that, when reflected by the viewable surface, are substantially parallel, thereby resulting in an image appearing at infinity to the operator, without subsequent collimation (Lee, A plurality of beams 230 may be determined in a 3D space based on the 2D image represented on the optical layer 220. For example, the plurality of beams generated in the 3D space may change based on image information of the 2D image displayed on the optical layer 220. To output a beam of different information, for example, a different color, to a different position of the optical layer 220 in response to rotation of the scanning mirror, the image information and the scanning interval of the scanning mirror may be synchronized. Figure 2, ¶ [0077]). As shown in figure 2 of Lee, no collimation is utilized to present an image to the user. In addition, the beams 230 are substantially parallel, as shown in figure 2 of Lee. Examiner notes that the term “substantially” is broad and encompasses a large range of angles for the beams.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Cashen’s image projection system to include Lee’s 3D image projection system because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Cashen’s image projection system and Lee’s 3D image projection system perform the same general and predictable function, the predictable function being providing an image on a windshield for a user to view. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself – that is, in the substitution of Lee’s 3D image projection system for Cashen’s image projection system. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Thus, Cashen, as modified by Lee, teaches the laser scanning projector and rotating scanning mirror to present images on a windshield.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2018/0253907 by Cashen et al. (“Cashen”), in view of U.S. Pub. No. 2020/0192109 by Lee et al. (“Lee”), and further in view of U.S. Pub. No. 2016/0246384 by Mullins et al. (“Mullins”).
As to claim 7, Cashen, as modified by Lee, does not expressly disclose the system wherein the operator viewing position is a head position that is provided by an operator view-tracking subsystem in accordance with, at least, data from sensors mounted on an operator helmet. Specifically, Cashen does not teach a helmet with sensors for head position tracking.
Mullins teaches a head mounted display device wherein the operator viewing position is a head position that is provided by an operator view-tracking subsystem in accordance with, at least, data from sensors mounted on an operator helmet (Mullins, The helmet 1102 may include sensors (e.g., optical, proximity, audio, etc. sensors) 1108 and 1110 disposed at the front, back, and a top section 1106 of the helmet 1102. Display lenses 1112 are mounted on a lens frame 1114. The display lenses 1112 include a transparent display. In use, images are displayed by the transparent display but still allow the user to view physical objects through the lenses 1112. The HMD 1100 also includes two eye gaze tracking sensors 1111 mounted to a housing of the helmet 1102. Each eye gaze tracking sensor 1111 monitors the pupil of a corresponding eye of a wearer or user of the HMD 1100. For example, each eye gaze tracking sensor 1111 may track a position of the pupil of the eye of the wearer of the helmet 1102 as the user moves his or her eyes. Accordingly, in an example embodiment, the eye gaze tracking sensors 1111, in conjunction with associated electronic tracking modules (e.g., provided in the HMD AR Application 214) can determine a direction at which the user is staring. Figure 11A, ¶ [0091]). As shown in figure 11A of Mullins, the eye gaze tracking sensors 1111 are placed on the helmet 1102.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Cashen’s vehicle head-up display to include Mullins’ helmet AR display system because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Cashen’s vehicle head-up display as modified by Mullins’ helmet AR display system is known to yield a predictable result of providing individual display and eye tracking, since the helmet AR system permits only the user to properly see the information. Thus, a person of ordinary skill would have appreciated including in Cashen’s vehicle head-up display the ability to use Mullins’ helmet AR display system, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Thus, Cashen, as modified by Lee and Mullins, teaches a helmet with eye tracking sensors.
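For illustration, a minimal sketch (an assumed representation, not Mullins' implementation) of deriving a single viewing orientation from the two per-eye gaze directions that helmet-mounted trackers of the kind Mullins describes could report:

```python
import numpy as np

def combined_gaze(left_gaze, right_gaze):
    """Average the two per-eye unit gaze vectors into one viewing orientation."""
    g = (np.asarray(left_gaze, dtype=float) + np.asarray(right_gaze, dtype=float)) / 2.0
    return g / np.linalg.norm(g)

# Hypothetical per-eye unit vectors from the helmet-mounted trackers.
gaze = combined_gaze([0.05, -0.02, 0.998], [0.03, -0.02, 0.999])
```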
Response to Arguments
Applicant’s arguments with respect to claims 1-12 and 14-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
The prior 35 U.S.C. 112 rejection of claims 5 and 6 is withdrawn in view of the correcting amendments.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Pub. No. 2016/0167672 by Krueger teaches a head-worn device which includes a head orientation tracker.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRENT D CASTIAUX whose telephone number is (571)272-5143. The examiner can normally be reached Mon-Fri 7:30 AM- 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chanh Nguyen can be reached at (571)272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRENT D CASTIAUX/Primary Examiner, Art Unit 2623