DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
4. Claims 1, 3-8, 10-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bell et al. (US Patent Application Publication No. 2018/0314066 A1) in view of Ramaswamy (US Patent No. 9,975,483 B1).
5. Regarding Claim 1, Bell discloses A method of operating a dimmer (paragraph [0044] reciting “… The computer-readable media 120 may further include a dimming engine 128 configured to determine one or more dimming parameters associated with the generation of the dimming masks 108. …”) of an optical system, (Abstract reciting “A near-eye-display system generates a dimming mask to enhance contrast between a computer-generated image and a real-world view. …”;
paragraph [0105] reciting “While described herein in the context of near-eye display systems, the example optical systems and methods disclosed herein may be used in any suitable optical system, such as a rifle scope, telescope, spotting scope, binoculars, and heads-up display.” The near-eye display corresponds to an optical system.) the method comprising: receiving, at the optical system, light associated with a world object; (paragraph [0040] reciting “… Accordingly, in some implementations, the head-mounted display device 100 may generate the dimming masks 108 directly behind one or more CG images that are generated by the transparent display 104 to prevent light that is reflected from one or more real-world objects from passing through the transparent display 104 at the particular location at which the CG images are being generated.”)
detecting, using an ambient light sensor of the optical system, an ambient light level based on the light associated with the world object; (paragraph [0099] reciting “In some implementations, the system may determine at least one opacity parameter based at least in part on luminance data that indicates a luminous intensity corresponding to one or more regions of the real-world view. For example, the system may deploy a light sensor 812 to determine a brightness (e.g. a luminous intensity) of the real-world view. Then, based upon the brightness of the real-world view, the system may determine how low to set the transmittance level of the at least one dimming region. Stated alternatively, the amount to which the system effectively turns down the brightness of the real-world view may be at least partially dependent on the brightness of the real-world view to begin with.” Light sensor 812 is an ambient light sensor because it detects the brightness of the real-world view, which corresponds to the ambient light level of the real-world environment.)
determining a plurality of spatially-resolved dimming values based on the gaze vector; (paragraph [0059] reciting “Turning back now to FIG. 2A, the optical system 200 further includes the eye tracking sensor 114 which is positioned to monitor one or more physical characteristics of the user's eye 204 such as, for example, a pupil diameter and/or gaze direction of the user's eye 204. In particular, the eye tracking sensor 114 may generate eye tracking data associated with the user's eye 204. Then, based at least in part on the eye tracking data, the optical system 200 may dynamically modify various characteristics of the dimming masks 108 according to the techniques described herein.”) and adjusting the dimmer in accordance with the plurality of spatially-resolved dimming values to reduce an intensity of the light associated with the world object. (paragraph [0071] reciting “… Therefore, in the event that the user moves to a darker ambient environment, the pupils 202 increase from the first pupil size to a second pupil size as illustrated in FIG. 5D, then the dimming engine 126 may determine new dimming parameters corresponding to generation of the one or more dimming masks 108. For example, in the illustrated scenario, the dimming engine 126 has determined new size parameters that include a second width and a second height (which are relatively bigger than the first width and first height respectively) at which the transparent dimming panel 106 is to generate the one or more dimming masks 108.”;
paragraph [0076] reciting “Turning now to FIGS. 7A-7F (collectively referred to as FIG. 7), a plurality of illustrations collectively demonstrate that the optical system may determine location parameters that indicate at least one location on the transparent dimming panel 106 to generate the dimming masks 108 based on a gaze direction of the user's eyes 204. …” Dimming panel 106 is the dimmer that lowers pixels’ transparency values, which correspond to the spatially-resolved dimming values, to block out the light associated with the object in the gaze direction of the pupil.)
While not explicitly disclosed by Bell, Ramaswamy discloses detecting, using an eye tracker of the optical system, a position of an eye of a user; (col.6, lines 8-26 reciting “In accordance with an embodiment, determining the user's gaze direction can include, for example, first determining that the user's head is within the field of view of at least one camera (e.g., a front facing camera) of the device. Using a single camera can enable the device to determine the relative direction of the user, and the size of the user's head in the captured image information can be used to estimate a distance to the user. In situations where there are at least two cameras, or a stereoscopic imager, operable to determine three-dimensional information, the relative position of the user's head to the device can be determined. The device can also analyze the image information in at least some embodiments to determine the relative position of the user's eyes with respect to the user's head. The eyes can be captured using ambient or infrared light, for example, in order to determine a size, shape, location, or other such aspect of the user's retina, cornea, iris, or other such aspect, which can be used to determine an approximate gaze direction of the user with respect to the device.” An eye tracker in the form of a front-facing camera (or two front-facing cameras) can capture image information used to determine a position of the user's eye (location of the cornea/retina/iris, etc.).) determining a gaze vector of the eye of the user based on the position of the eye of the user and the ambient light level; (col. 6, lines 18-25 reciting “The device can also analyze the image information in at least some embodiments to determine the relative position of the user's eyes with respect to the user's head. 
The eyes can be captured using ambient or infrared light, for example, in order to determine a size, shape, location, or other such aspect of the user's retina, cornea, iris, or other such aspect, which can be used to determine an approximate gaze direction of the user with respect to the device.” The location of the retina/cornea/iris (the position of the eye) is used to determine/approximate the gaze direction of the user. The ambient light level corresponds to the level of the ambient light used to capture the eyes, which is in turn used to determine the location of the retina/cornea/iris.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell with Ramaswamy so that the ambient light level can be used to determine the gaze direction of the eye. Such a modification would have been beneficial because Bell requires the gaze direction in order to determine where to generate the dimming mask within the head-mounted display’s field of view.
6. Regarding Claim 3, Bell further discloses The method of claim 1, further comprising: generating, at a projector of the optical system, virtual image light to be projected onto an eyepiece of the optical system, (paragraph [0039] reciting “…
The transparent display 104 can be in any suitable form such as, for example, a waveguide, prism or multi-prism assembly configured to receive a generated CG image and direct the image towards a user's eye. In various embodiments, the transparent display 104 may be configured to use one or more light sources within the device to project the CG images toward the user's eye(s) and, more particularly, toward the user's pupil(s). The transparent display 104 may include within the device any suitable light source for generating images such as, for example, an LED projection engine.”
LED projection engine corresponds to a projector of the optical system where the light sources generate virtual image light that is projected onto the transparent display 104 and towards the eyes. Transparent display 104 corresponds to the eyepiece.) the virtual image light representing one or more virtual objects to be displayed; (paragraph [0039] reciting “… The transparent display 104 can be in any suitable form such as, for example, a waveguide, prism or multi-prism assembly configured to receive a generated CG image and direct the image towards a user's eye. In various embodiments, the transparent display 104 may be configured to use one or more light sources within the device to project the CG images toward the user's eye(s) and, more particularly, toward the user's pupil(s). The transparent display 104 may include within the device any suitable light source for generating images such as, for example, an LED projection engine.” Light from the LED projection engine corresponds to the CG images.) and projecting the virtual image light onto the eyepiece. (paragraph [0039] reciting “…The transparent display 104 can be in any suitable form such as, for example, a waveguide, prism or multi-prism assembly configured to receive a generated CG image and direct the image towards a user's eye.”)
7. Regarding Claim 4, Bell further discloses The method of claim 3, wherein the plurality of spatially-resolved dimming values are determined further based on a desired visibility for the one or more virtual objects. (paragraph [0065] reciting “…
With particular reference to the line corresponding to a 3-mm dimming mask diameter placed 30-mm in front of a 3-mm pupil, the graph 300 indicates that only at dimming mask diameters at least equal to the pupil diameter does the user perceive any area having substantially zero-percent transmittance. Accordingly, the techniques described herein enable the optical system 200 to actively monitor the user's pupil diameter and dynamically modify the size of a generated dimming mask to achieve a desired user perceived transmittance.”
The dimming value of zero transmittance is based on the desired visibility determined by the pupil diameter, whereas outside the pupil the transmittance level of the penumbra is higher than zero.)
8. Regarding Claim 5, Ramaswamy further discloses The method of claim 1, wherein the position of the eye of the user corresponds to a position of a pupil of the eye of the user. (col. 6, lines 21-26 reciting “The eyes can be captured using ambient or infrared light, for example, in order to determine a size, shape, location, or other such aspect of the user's retina, cornea, iris, or other such aspect, which can be used to determine an approximate gaze direction of the user with respect to the device.”)
9. Regarding Claim 6, Bell further discloses The method of claim 1, wherein the plurality of spatially-resolved dimming values include at least two unique levels of dimming. (paragraph [0012] reciting “In some configurations, the system may communicate with a light sensor to obtain luminance data associated with a brightness of one or more portions of the real-world view. Based on the luminance data, the system may determine opacity parameters indicating one or more transmittance levels for a dimming mask. For example, if the brightness level of the real-world view is relatively high (e.g. due to the user being outside on a sunny day), the opacity parameters may cause the transparent dimming panel to generate a highly or even entirely opaque dimming mask to enhance contrast with a CG image. In contrast, if the brightness level of the real-world view is relatively low (e.g. due to the user being in an unlit night-time environment), the opacity parameters may cause the transparent dimming panel to generate a dimming mask with a relatively higher transmittance level(s).”)
10. Regarding Claim 7, Bell further discloses The method of claim 1, wherein the optical system comprises a wearable augmented reality (AR) device. (paragraph [0037] reciting “… The head-mounted display device 100 may utilize various technologies such as, for example, augmented reality (AR) technologies to generate composite views that include CG images superimposed over a real-world view. …”)
11. Regarding Claim 8, Bell discloses A non-transitory computer-readable medium comprising instructions that, when executed by a processor, cause the processor to perform operations comprising: (paragraph [0048] reciting “Computer-readable media can include computer storage media and/or communication media. Computer storage media can include one or more of volatile memory, nonvolatile memory, and/or other persistent and/or auxiliary computer storage media, removable and non-removable computer storage media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. …”)
detecting, using an ambient light sensor of the optical system, an ambient light level based on light associated with a world object received at the optical system; (paragraph [0099] reciting “In some implementations, the system may determine at least one opacity parameter based at least in part on luminance data that indicates a luminous intensity corresponding to one or more regions of the real-world view. For example, the system may deploy a light sensor 812 to determine a brightness (e.g. a luminous intensity) of the real-world view. Then, based upon the brightness of the real-world view, the system may determine how low to set the transmittance level of the at least one dimming region. Stated alternatively, the amount to which the system effectively turns down the brightness of the real-world view may be at least partially dependent on the brightness of the real-world view to begin with.” Light sensor 812 is an ambient light sensor because it detects the brightness of the real-world view, which corresponds to the ambient light level of the real-world environment.)
determining a plurality of spatially-resolved dimming values based on the gaze vector; (paragraph [0059] reciting “Turning back now to FIG. 2A, the optical system 200 further includes the eye tracking sensor 114 which is positioned to monitor one or more physical characteristics of the user's eye 204 such as, for example, a pupil diameter and/or gaze direction of the user's eye 204. In particular, the eye tracking sensor 114 may generate eye tracking data associated with the user's eye 204. Then, based at least in part on the eye tracking data, the optical system 200 may dynamically modify various characteristics of the dimming masks 108 according to the techniques described herein.”) and adjusting a dimmer (paragraph [0044] reciting “… The computer-readable media 120 may further include a dimming engine 128 configured to determine one or more dimming parameters associated with the generation of the dimming masks 108. …”) of the optical system in accordance with the plurality of spatially-resolved dimming values to reduce an intensity of the light associated with the world object. (paragraph [0071] reciting “… Therefore, in the event that the user moves to a darker ambient environment, the pupils 202 increase from the first pupil size to a second pupil size as illustrated in FIG. 5D, then the dimming engine 126 may determine new dimming parameters corresponding to generation of the one or more dimming masks 108. For example, in the illustrated scenario, the dimming engine 126 has determined new size parameters that include a second width and a second height (which are relatively bigger than the first width and first height respectively) at which the transparent dimming panel 106 is to generate the one or more dimming masks 108.”;
paragraph [0076] reciting “Turning now to FIGS. 7A-7F (collectively referred to as FIG. 7), a plurality of illustrations collectively demonstrate that the optical system may determine location parameters that indicate at least one location on the transparent dimming panel 106 to generate the dimming masks 108 based on a gaze direction of the user's eyes 204. …” Dimming panel 106 is the dimmer that lowers pixels’ transparency values, which correspond to the spatially-resolved dimming values, to block out the light associated with the object in the gaze direction of the pupil.)
While not explicitly disclosed by Bell, Ramaswamy discloses detecting, using an eye tracker of an optical system, a position of an eye of a user; (col.6, lines 8-26 reciting “In accordance with an embodiment, determining the user's gaze direction can include, for example, first determining that the user's head is within the field of view of at least one camera (e.g., a front facing camera) of the device. Using a single camera can enable the device to determine the relative direction of the user, and the size of the user's head in the captured image information can be used to estimate a distance to the user. In situations where there are at least two cameras, or a stereoscopic imager, operable to determine three-dimensional information, the relative position of the user's head to the device can be determined. The device can also analyze the image information in at least some embodiments to determine the relative position of the user's eyes with respect to the user's head. The eyes can be captured using ambient or infrared light, for example, in order to determine a size, shape, location, or other such aspect of the user's retina, cornea, iris, or other such aspect, which can be used to determine an approximate gaze direction of the user with respect to the device.” An eye tracker in the form of a front-facing camera (or two front-facing cameras) can capture image information used to determine a position of the user's eye (location of the cornea/retina/iris, etc.).)
determining a gaze vector of the eye of the user based on the position of the eye of the user and the ambient light level; (col. 6, lines 18-25 reciting “The device can also analyze the image information in at least some embodiments to determine the relative position of the user's eyes with respect to the user's head. The eyes can be captured using ambient or infrared light, for example, in order to determine a size, shape, location, or other such aspect of the user's retina, cornea, iris, or other such aspect, which can be used to determine an approximate gaze direction of the user with respect to the device.” The location of the retina/cornea/iris (the position of the eye) is used to determine/approximate the gaze direction of the user. The ambient light level corresponds to the level of the ambient light used to capture the eyes, which is in turn used to determine the location of the retina/cornea/iris.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell with Ramaswamy so that the ambient light level can be used to determine the gaze direction of the eye. Such a modification would have been beneficial because Bell requires the gaze direction in order to determine where to generate the dimming mask within the head-mounted display’s field of view.
12. Regarding Claim 10, Bell further discloses The non-transitory computer-readable medium of claim 8, wherein the operations further comprise: causing a projector of the optical system to generate virtual image light to be projected onto an eyepiece of the optical system, (paragraph [0039] reciting “…
The transparent display 104 can be in any suitable form such as, for example, a waveguide, prism or multi-prism assembly configured to receive a generated CG image and direct the image towards a user's eye. In various embodiments, the transparent display 104 may be configured to use one or more light sources within the device to project the CG images toward the user's eye(s) and, more particularly, toward the user's pupil(s). The transparent display 104 may include within the device any suitable light source for generating images such as, for example, an LED projection engine.”
LED projection engine corresponds to a projector of the optical system where the light sources generate virtual image light that is projected onto the transparent display 104 and towards the eyes. Transparent display 104 corresponds to the eyepiece.) the virtual image light representing one or more virtual objects to be displayed. (paragraph [0039] reciting “… The transparent display 104 can be in any suitable form such as, for example, a waveguide, prism or multi-prism assembly configured to receive a generated CG image and direct the image towards a user's eye. In various embodiments, the transparent display 104 may be configured to use one or more light sources within the device to project the CG images toward the user's eye(s) and, more particularly, toward the user's pupil(s). The transparent display 104 may include within the device any suitable light source for generating images such as, for example, an LED projection engine.” Light from the LED projection engine corresponds to the CG images.)
13. Regarding Claim 11, Bell further discloses The non-transitory computer-readable medium of claim 10, wherein the plurality of spatially-resolved dimming values are determined further based on a desired visibility for the one or more virtual objects. (paragraph [0065] reciting “…
With particular reference to the line corresponding to a 3-mm dimming mask diameter placed 30-mm in front of a 3-mm pupil, the graph 300 indicates that only at dimming mask diameters at least equal to the pupil diameter does the user perceive any area having substantially zero-percent transmittance. Accordingly, the techniques described herein enable the optical system 200 to actively monitor the user's pupil diameter and dynamically modify the size of a generated dimming mask to achieve a desired user perceived transmittance.”
The dimming value of zero transmittance is based on the desired visibility determined by the pupil diameter, whereas outside the pupil the transmittance level of the penumbra is higher than zero.)
14. Regarding Claim 12, Ramaswamy further discloses The non-transitory computer-readable medium of claim 8, wherein the position of the eye of the user corresponds to a position of a pupil of the eye of the user. (col. 6, lines 21-26 reciting “The eyes can be captured using ambient or infrared light, for example, in order to determine a size, shape, location, or other such aspect of the user's retina, cornea, iris, or other such aspect, which can be used to determine an approximate gaze direction of the user with respect to the device.”)
15. Regarding Claim 13, Bell further discloses The non-transitory computer-readable medium of claim 8, wherein the plurality of spatially-resolved dimming values include at least two unique levels of dimming. (paragraph [0012] reciting “In some configurations, the system may communicate with a light sensor to obtain luminance data associated with a brightness of one or more portions of the real-world view. Based on the luminance data, the system may determine opacity parameters indicating one or more transmittance levels for a dimming mask. For example, if the brightness level of the real-world view is relatively high (e.g. due to the user being outside on a sunny day), the opacity parameters may cause the transparent dimming panel to generate a highly or even entirely opaque dimming mask to enhance contrast with a CG image. In contrast, if the brightness level of the real-world view is relatively low (e.g. due to the user being in an unlit night-time environment), the opacity parameters may cause the transparent dimming panel to generate a dimming mask with a relatively higher transmittance level(s).”)
16. Regarding Claim 14, Bell further discloses The non-transitory computer-readable medium of claim 8, wherein the optical system comprises a wearable augmented reality (AR) device. (paragraph [0037] reciting “… The head-mounted display device 100 may utilize various technologies such as, for example, augmented reality (AR) technologies to generate composite views that include CG images superimposed over a real-world view. …”)
17. Regarding Claim 15, Bell discloses An optical system (Abstract reciting “A near-eye-display system generates a dimming mask to enhance contrast between a computer-generated image and a real-world view. …”;
paragraph [0105] reciting “While described herein in the context of near-eye display systems, the example optical systems and methods disclosed herein may be used in any suitable optical system, such as a rifle scope, telescope, spotting scope, binoculars, and heads-up display.” The near-eye display corresponds to an optical system.) comprising: a dimmer; (paragraph [0044] reciting “… The computer-readable media 120 may further include a dimming engine 128 configured to determine one or more dimming parameters associated with the generation of the dimming masks 108. …”)
an ambient light sensor configured to detect an ambient light level based on light associated with a world object received at the optical system; (paragraph [0099] reciting “In some implementations, the system may determine at least one opacity parameter based at least in part on luminance data that indicates a luminous intensity corresponding to one or more regions of the real-world view. For example, the system may deploy a light sensor 812 to determine a brightness (e.g. a luminous intensity) of the real-world view. Then, based upon the brightness of the real-world view, the system may determine how low to set the transmittance level of the at least one dimming region. Stated alternatively, the amount to which the system effectively turns down the brightness of the real-world view may be at least partially dependent on the brightness of the real-world view to begin with.” Light sensor 812 is an ambient light sensor because it detects the brightness of the real-world view, which corresponds to the ambient light level of the real-world environment.) and one or more processors communicatively coupled to the dimmer, the eye tracker, and the ambient light sensor, wherein the one or more processors are configured to: (see FIG. 1 wherein the processing unit 118 is communicatively coupled to the eye tracking engine and the dimming engine 128.)
determine a plurality of spatially-resolved dimming values based on the gaze vector; (paragraph [0059] reciting “Turning back now to FIG. 2A, the optical system 200 further includes the eye tracking sensor 114 which is positioned to monitor one or more physical characteristics of the user's eye 204 such as, for example, a pupil diameter and/or gaze direction of the user's eye 204. In particular, the eye tracking sensor 114 may generate eye tracking data associated with the user's eye 204. Then, based at least in part on the eye tracking data, the optical system 200 may dynamically modify various characteristics of the dimming masks 108 according to the techniques described herein.”) and adjust the dimmer in accordance with the plurality of spatially-resolved dimming values to reduce an intensity of the light associated with the world object. (paragraph [0071] reciting “… Therefore, in the event that the user moves to a darker ambient environment, the pupils 202 increase from the first pupil size to a second pupil size as illustrated in FIG. 5D, then the dimming engine 126 may determine new dimming parameters corresponding to generation of the one or more dimming masks 108. For example, in the illustrated scenario, the dimming engine 126 has determined new size parameters that include a second width and a second height (which are relatively bigger than the first width and first height respectively) at which the transparent dimming panel 106 is to generate the one or more dimming masks 108.”;
paragraph [0076] reciting “Turning now to FIGS. 7A-7F (collectively referred to as FIG. 7), a plurality of illustrations collectively demonstrate that the optical system may determine location parameters that indicate at least one location on the transparent dimming panel 106 to generate the dimming masks 108 based on a gaze direction of the user's eyes 204. …” Dimming panel 106 is the dimmer that lowers pixels’ transparency values, which correspond to the spatially-resolved dimming values, to block out the light associated with the object in the gaze direction of the pupil.)
While not explicitly disclosed by Bell, Ramaswamy discloses an eye tracker configured to detect a position of an eye of a user; (col.6, lines 8-26 reciting “In accordance with an embodiment, determining the user's gaze direction can include, for example, first determining that the user's head is within the field of view of at least one camera (e.g., a front facing camera) of the device. Using a single camera can enable the device to determine the relative direction of the user, and the size of the user's head in the captured image information can be used to estimate a distance to the user. In situations where there are at least two cameras, or a stereoscopic imager, operable to determine three-dimensional information, the relative position of the user's head to the device can be determined. The device can also analyze the image information in at least some embodiments to determine the relative position of the user's eyes with respect to the user's head. The eyes can be captured using ambient or infrared light, for example, in order to determine a size, shape, location, or other such aspect of the user's retina, cornea, iris, or other such aspect, which can be used to determine an approximate gaze direction of the user with respect to the device.” An eye tracker in the form of a front-facing camera (or two front-facing cameras) can capture image information used to determine a position of the user's eye (location of the cornea/retina/iris, etc.).) determine a gaze vector of the eye of the user based on the position of the eye of the user and the ambient light level; (col. 6, lines 18-25 reciting “The device can also analyze the image information in at least some embodiments to determine the relative position of the user's eyes with respect to the user's head. 
The eyes can be captured using ambient or infrared light, for example, in order to determine a size, shape, location, or other such aspect of the user's retina, cornea, iris, or other such aspect, which can be used to determine an approximate gaze direction of the user with respect to the device.” The location of the retina/cornea/iris (the position of the eye) is used to determine/approximate the gaze direction of the user. The ambient light level corresponds to the level of the ambient light used to capture the eyes, which is in turn used to determine the location of the retina/cornea/iris.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell with Ramaswamy so that the ambient light level can be used to determine the gaze direction of the eye. One of ordinary skill would have been motivated to make this modification because Bell requires the gaze direction to determine where to generate the dimming mask within the head-mounted display’s field of view.
18. Regarding Claim 17, Bell further discloses The optical system of claim 15, further comprising: an eyepiece; (paragraph [0039] reciting “…The transparent display 104 can be in any suitable form such as, for example, a waveguide, prism or multi-prism assembly configured to receive a generated CG image and direct the image towards a user's eye.” The multi-prism assembly corresponds to an eyepiece.)
and a projector configured to generate virtual image light to be projected onto the eyepiece, (paragraph [0039] reciting “… The transparent display 104 can be in any suitable form such as, for example, a waveguide, prism or multi-prism assembly configured to receive a generated CG image and direct the image towards a user's eye. In various embodiments, the transparent display 104 may be configured to use one or more light sources within the device to project the CG images toward the user's eye(s) and, more particularly, toward the user's pupil(s). The transparent display 104 may include within the device any suitable light source for generating images such as, for example, an LED projection engine.”
The LED projection engine corresponds to a projector of the optical system, where the light sources generate virtual image light that is projected onto the transparent display 104 and towards the eyes. The transparent display 104 corresponds to the eyepiece.) the virtual image light representing one or more virtual objects to be displayed. (paragraph [0039] reciting “… The transparent display 104 can be in any suitable form such as, for example, a waveguide, prism or multi-prism assembly configured to receive a generated CG image and direct the image towards a user's eye. In various embodiments, the transparent display 104 may be configured to use one or more light sources within the device to project the CG images toward the user's eye(s) and, more particularly, toward the user's pupil(s). The transparent display 104 may include within the device any suitable light source for generating images such as, for example, an LED projection engine.” Light from the LED projection engine corresponds to the CG images.)
19. Regarding Claim 18, Bell further discloses The optical system of claim 17, wherein the plurality of spatially-resolved dimming values are determined further based on a desired visibility for the one or more virtual objects. (paragraph [0065] reciting “…
With particular reference to the line corresponding to a 3-mm dimming mask diameter placed 30-mm in front of a 3-mm pupil, the graph 300 indicates that only at dimming mask diameters at least equal to the pupil diameter does the user perceive any area having substantially zero-percent transmittance. Accordingly, the techniques described herein enable the optical system 200 to actively monitor the user's pupil diameter and dynamically modify the size of a generated dimming mask to achieve a desired user perceived transmittance.”
The dimming value of substantially zero transmittance is based on the desired visibility and on the pupil diameter, whereas outside the pupil the transmittance level for the penumbra is higher than zero.)
20. Regarding Claim 19, Ramaswamy further discloses The optical system of claim 15, wherein the position of the eye of the user corresponds to a position of a pupil of the eye of the user. (col. 6, lines 21-26 reciting “The eyes can be captured using ambient or infrared light, for example, in order to determine a size, shape, location, or other such aspect of the user's retina, cornea, iris, or other such aspect, which can be used to determine an approximate gaze direction of the user with respect to the device.” The location of the iris/cornea corresponds to the position of the pupil of the eye, since the pupil is the opening at the center of the iris.)
21. Regarding Claim 20, Bell further discloses The optical system of claim 15, wherein the optical system comprises a wearable augmented reality (AR) device. (paragraph [0037] reciting “… The head-mounted display device 100 may utilize various technologies such as, for example, augmented reality (AR) technologies to generate composite views that include CG images superimposed over a real-world view. …”)
22. Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bell in view of Ramaswamy and further in view of Kinley Maria Slakey (US Patent Application Publication No. 2019/0188888 A1).
23. Regarding Claim 2, while the combination of Bell and Ramaswamy does not explicitly disclose, Slakey discloses The method of claim 1, further comprising: estimating cone and rod locations of the eye of the user, wherein the gaze vector is determined further based on the cone and rod locations. (paragraph [0093] reciting “FIG. 8 illustrates a technique for determining a gaze vector based on detected light information and cone and rod locations within an eye.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell and Ramaswamy with Slakey so that the gaze vector is further determined using the detected cone and rod locations. This modification is beneficial because it makes the gaze vector determination more accurate.
24. Regarding Claim 9, while the combination of Bell and Ramaswamy does not explicitly disclose, Slakey discloses The non-transitory computer-readable medium of claim 8, wherein the operations further comprise: estimating cone and rod locations of the eye of the user, wherein the gaze vector is determined further based on the cone and rod locations. (paragraph [0093] reciting “FIG. 8 illustrates a technique for determining a gaze vector based on detected light information and cone and rod locations within an eye.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell and Ramaswamy with Slakey so that the gaze vector is further determined using the detected cone and rod locations. This modification is beneficial because it makes the gaze vector determination more accurate.
25. Regarding Claim 16, while the combination of Bell and Ramaswamy does not explicitly disclose, Slakey discloses The optical system of claim 15, wherein the one or more processors are further configured to: estimate cone and rod locations of the eye of the user, wherein the gaze vector is determined further based on the cone and rod locations. (paragraph [0093] reciting “FIG. 8 illustrates a technique for determining a gaze vector based on detected light information and cone and rod locations within an eye.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell and Ramaswamy with Slakey so that the gaze vector is further determined using the detected cone and rod locations. This modification is beneficial because it makes the gaze vector determination more accurate.
CONTACT
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK S CHEN whose telephone number is (571)270-7993. The examiner can normally be reached Mon - Fri 8-11:30 and 1:30-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANK S CHEN/Primary Examiner, Art Unit 2611