Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the RCE filed 12/30/2025, in which claims 1-8, 15-17, and 19-25 are pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/30/2025 has been entered.
Response to Arguments
Applicant’s arguments, see pages 8-11, filed 12/30/2025, with respect to the rejections of the claims have been fully considered. In view of the amended claims, a new ground of rejection is made in view of Gao et al. (CN 119762663 A) (machine translation attached).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8, 15-17, 19-25 are rejected under 35 U.S.C. 103 as being unpatentable over Chao et al. (US 10,977,815 B1) in view of Gao et al. (CN 119762663 A).
Regarding claim 1, Chao discloses a three-dimensional object sensing system (Fig. 1 teaches eye-tracking system 100), comprising: a projector to transmit light onto an object (Fig. 1 & Col 5 lines 30-40 teaches structured light generator 112 may include a light emitter 112a (e.g., a laser or LED); Col 6 lines 1-45 teaches the structured light pattern can be an interference pattern, e.g., a periodic pattern of alternating bright and dark regions that is generated by a SAW device as shown in cross section A-A of the structured light beam 113); a dual readout sensor to capture a reflection of the light from the object (FIG. 1 teaches a camera 114 may collect the light that is reflected by the illuminated portion of eye 150 and project it onto an image sensor of camera 114; camera 114 may effectively capture an image of the structured light pattern that is projected onto the corneal surface. Multiple phase-shifted interference fringes can be captured in a time-interleaved manner using the same set of pixels, thus reducing motion blur between the images and reducing or eliminating down-time between captures because all charge storage bins can be read out at the end of the time-interleaved capture); a synchronizer communicatively coupled to the projector and the dual readout sensor to coordinate transmission and capture of the light (Fig. 3 & Col 8 lines 8-16 teaches the controller 315 can be coupled to both the structured light generator 305 and the imaging device 310); and a processor communicatively (Fig. 
13 & Col lines teaches console 1310 and/or near-eye display system 13330 may include one or more processor(s)) coupled to the projector, the sensor, and the synchronizer, the processor to: instruct the projector to transmit the light simultaneously with a structured pattern (col 6 lines 1-4 teaches the light emitted by structured light generator 112 may substantially uniformly illuminate a portion of the eye surface (e.g., cornea 152) with an interferometric structured light pattern such as that shown in the cross section A-A of the structured light beam 113. The structured light pattern can be an interference pattern, e.g., a periodic pattern of alternating bright and dark regions that is generated by a SAW device as shown in cross section A-A of the structured light beam 113); receive two readouts corresponding to the transmitted light with the structured pattern (Col 22 lines 3-18 teaches the structured light pattern that results from the interference of two or more diffracted beams generated by the SAW device may be a sinusoidal pattern of alternating bright-dark bars (i.e., interference fringes), examples of which are shown in FIGS. 8A-8F); derive an enhanced signal from the two readouts; generate a wrapped phase map from the enhanced signal (col 20 lines 57-col 21 line teaches for each fringe pattern (or frequency), the measurements made for the 3 phase shifts (θn) may be combined to yield a measured phase disparity ϕij that may be wrapped over the range [0, 2π] according to equations (4)-(6)); generate an unwrapped phase map from the wrapped phase map (FIGS. 
11A-11C show how more than one spatial frequency can be used to “unwrap” a phase measurement for fringe interferometry & Col 20 lines 49-col 21 line 6 teaches the structured light generator can project a sequence of sinusoidal patterns having different spatial frequencies to help “unwrap” the phase disparity ambiguity); and generate a three-dimensional reconstruction of the object from the unwrapped phase map (col 21 lines 7-15 , 30-45 teaches to determine a pixel depth, the measured phase disparities at three frequencies (e.g., Freq 1, Freq 2, and Freq 3) for the pixel may be mapped to a 3D data point in the 3D plot shown in FIG. 11C. The 3D data point may be compared against dotted lines 1110 and regions 1120 in the 3D plot. If the 3D data point falls in a particular region 1120 which includes a particular dotted line 1110, the depth associated with a point on the particular dotted line that is closest to the 3D point among all points on the particular dotted line can be used to determine the pixel depth. If the 3D data point is outside of any region 1120, the measurement errors may be too large and no depth value may be determined for the 3D data point).
Chao does not explicitly disclose received from a first and second channel of the dual readout sensor, wherein the light from the first channel has a respective structured pattern comprising a modulated light pattern and the light from the second channel has light without the structured pattern and comprising an unmodulated light pattern; receive two readouts including a first readout comprising a first frame capturing a reflection of the modulated light pattern, and a second readout comprising a second frame capturing a reflection of the unmodulated light pattern. However, Gao discloses received from a first and second channel of the dual readout sensor, wherein the light from the first channel has a respective structured pattern comprising a modulated light pattern (pages 7-8 teach a camera is used for capturing a dual-frequency phase-shift reflection stripe image formed by reflecting the dual-frequency phase-shift initial stripe image through the front surface and the rear surface of the object to be detected, wherein the front surface and the rear surface respectively represent the surface of the object to be detected which is contacted with the projected light first and the surface of the object to be detected which is contacted with the projected light later; page 8 teaches the dual-frequency phase-shift reflection fringe image is generated by continuously capturing reflection fringes of the dual-frequency phase-shift initial fringe image reflected by the front surface and the rear surface of the object to be measured by a camera, wherein the light intensity distribution I_n(x, y) of the reflection fringes is captured by the nth camera, wherein A(x, y) is the ambient background light, and B_f(x, y) and B_r(x, y) are respectively the modulation intensities of the front and rear surfaces); and the light from the second channel has light without the structured pattern and comprising an unmodulated light pattern (Page 8 teaches the light intensity distribution of the 
reflected fringes captured by the nth camera, I_n(x, y), is expressed as I_n(x, y) = A(x, y) + B_f(x, y)cos[φ_f(x, y) + δ_n] + B_r(x, y)cos[φ_r(x, y) + δ_n], where δ_n is the nth phase shift, A(x, y) is the ambient background light, B_f(x, y) and B_r(x, y) are the modulation intensities of the front and rear surfaces respectively, and φ_f(x, y) and φ_r(x, y) represent the phases of the front and rear surfaces respectively); receive two readouts including a first readout comprising a first frame capturing a reflection of the modulated light pattern (Abstract teaches double-frequency phase shift reflection fringe image formed by reflection of the object to be detected; Page 9 teaches performing fast Fourier transform on the dual-frequency phase-shift reflection fringe image to generate a Fourier transform image; filtering the Fourier transform image to obtain stripe information of the front surface and the rear surface of the object to be detected; implementation of filtering a Fourier transform image obtained by performing Fourier transform. For a double-layer transparent object, the camera captures the fringes 201 superimposed after the reflection of the front and rear surfaces, and in order to decouple the phases of the front and rear surfaces, a phase initial value preprocessing method is adopted; still referring to fig. 3, the images are respectively subjected to Fast Fourier Transform (FFT) to generate Fourier transform images, and then the logarithmic spectrum of the Fourier transform images is calculated. Since the surface patterns of the front and rear surfaces are different, the spatial modulation of the sinusoidal fringes is also different, so that a central peak representing the part of the ambient light and two groups of central symmetrical peaks representing the fringes modulated by the front and rear surfaces can be seen on the spectrum surface. 
The stripe density and direction of the front and rear surfaces can be determined according to the shape of the side surface, two groups of central symmetry peaks are respectively used as filters, and stripe information of the front and rear surfaces can be separated, as shown in 203); and a second readout comprising a second frame capturing a reflection of the unmodulated light pattern (page 9 teaches preprocessing a dual-frequency phase shift reflection fringe image, as shown in fig. 3, for a dual-layer transparent object such as a lens, a picture captured by a camera is shown as 201, fast Fourier Transform (FFT) is performed on the image to generate a Fourier transform image, as shown as 202, fringe density and direction of the front and rear surfaces can be determined according to shapes of the front and rear surfaces of the object to be detected, two groups of peaks symmetrical with each other are respectively used as filters, fringe information of the front and rear surfaces can be separated, as shown as 203, truncated phases of the front and rear surfaces can be calculated by using a four-step phase shift method after filtering, and then phase orders and expansion phases of the front and rear surfaces are calculated by using a time-phase expansion technology based on dual-frequency fringes, as shown as 204).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Chao for performing an eye-tracking operation on a user utilizing structured light with the method of Gao that performs global optimization on the modulation degrees and phases of the front and rear surfaces of the object to be detected, by adopting a mode reconstruction method based on the first phase initial value, the second phase initial value, and preset first and second modulation degree initial values, to obtain phase resolving results for the front and rear surfaces of the object to be detected, and performs three-dimensional reconstruction of the object to be detected based on the phase resolving results, in order to provide a system with high precision, high speed, and high dynamic range, which has profound significance in the fields of precision free-form surface processing and detection.
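As a purely illustrative aside (not part of the record or the cited references), the kind of processing Chao describes — combining captures at three phase shifts into a wrapped phase — can be sketched with the standard textbook three-step phase-shift formula. The function name and synthetic data below are hypothetical and are not the specific equations (4)-(6) of Chao:

```python
import numpy as np

def wrapped_phase_3step(i1, i2, i3):
    """Wrapped phase from three fringe captures phase-shifted by
    -2*pi/3, 0, and +2*pi/3 (textbook three-step formula, illustrative
    only); the result is wrapped over [0, 2*pi)."""
    phi = np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
    return np.mod(phi, 2.0 * np.pi)

# Synthetic check: recover a known phase ramp from three shifted captures.
x = np.linspace(0.0, 4.0 * np.pi, 200)                 # true phase
shots = [1.0 + 0.5 * np.cos(x + d)
         for d in (-2.0 * np.pi / 3, 0.0, 2.0 * np.pi / 3)]
phi = wrapped_phase_3step(*shots)                      # wrapped version of x
```

The recovered phase agrees with the true phase up to the expected 2π wrapping.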
Regarding claim 2, Chao discloses the three-dimensional object sensing system of claim 1, wherein the processor is to derive the enhanced signal from the two readouts (Col lines teaches FIGS. 8A-8F, the lateral position of the interference patterns, i.e., the specific location of the dark and bright bands (also referred to herein as the “phase” of the structured light pattern) depends on the phase difference between the RF drive signal and the laser strobing signal).
Chao does not explicitly disclose deriving the enhanced signal by subtracting a reflection of the light without the structured pattern from a reflection of the light with the structured pattern. However, Gao discloses subtracting a reflection of the light without the structured pattern in a first readout from a reflection of the light with the structured pattern in a second readout (pages 3-4 teach filtering the Fourier transform image to obtain stripe information of the front surface and the rear surface of the object to be detected, by acquiring the position coordinates of a central peak on a spectrum surface formed by the logarithmic spectrum, and separating the stripe information of the front surface and the rear surface of the object to be detected based on the position coordinates of the central peak; page 9 teaches the dual-frequency phase shift reflection fringe image, as shown in fig. 3: for a dual-layer transparent object such as a lens, a picture captured by a camera is shown as 201, fast Fourier Transform (FFT) is performed on the image to generate a Fourier transform image, as shown as 202, fringe density and direction of the front and rear surfaces can be determined according to shapes of the front and rear surfaces of the object to be detected, two groups of peaks symmetrical with each other are respectively used as filters, and fringe information of the front and rear surfaces can be separated, as shown as 203). Motivation to combine as indicated in claim 1.
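As an illustrative sketch of the subtraction the claim recites (structured readout minus unstructured readout), using hypothetical synthetic data rather than anything from Chao or Gao: subtracting the flat-illumination frame cancels the common ambient/DC term and leaves only the fringe modulation.

```python
import numpy as np

def enhance(structured, unstructured):
    """Subtract the readout captured without the structured pattern from
    the readout captured with it, cancelling the shared ambient/DC term
    (hypothetical helper, illustrative only)."""
    return structured.astype(float) - unstructured.astype(float)

x = np.linspace(0.0, 6.0 * np.pi, 128)
ambient = 0.8                                 # shared background level
structured = ambient + 0.2 * np.cos(x)        # flat term plus fringes
unstructured = np.full_like(x, ambient)       # flat illumination only
enhanced = enhance(structured, unstructured)  # fringe modulation alone
```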
Regarding claim 3, Chao discloses the three-dimensional object sensing system of claim 1, wherein the processor is to generate the unwrapped phase map from the wrapped phase map using a depth-calibrated unwrapped phase map (Figs. 11A-11C show how more than one spatial frequency can be used to “unwrap” a phase measurement for fringe interferometry; Col 20 lines 49-col 21 line 6 teaches FIGS. 11A-11C show how more than one spatial frequency can be used to “unwrap” a phase disparity measurement. FIG. 11C is a 3D plot showing the combinations of wrapped phase disparities measured for a particular set of frequencies (e.g., 3 frequencies). Dotted lines 1110 indicate expected combinations of phase disparities at different spatial frequencies for all possible depths, where each point on the lines corresponds to a depth value Z).
Regarding claim 4, Chao discloses the three-dimensional object sensing system of claim 1, wherein the processor is to generate the three-dimensional reconstruction of the object from the unwrapped phase map by converting phase information in the unwrapped phase map to three-dimensional coordinates (col 21 lines 7-45 & Fig. 11 C teaches to determine a pixel depth, the measured phase disparities at three frequencies (e.g., Freq 1, Freq 2, and Freq 3) for the pixel may be mapped to a 3D data point in the 3D plot shown in FIG. 11C. The 3D data point may be compared against dotted lines 1110 and regions 1120 in the 3D plot. If the 3D data point falls in a particular region 1120 which includes a particular dotted line 1110, the depth associated with a point on the particular dotted line that is closest to the 3D point among all points on the particular dotted line can be used to determine the pixel depth. If the 3D data point is outside of any region 1120, the measurement errors may be too large and no depth value may be determined for the 3D data point).
Regarding claim 5, Chao discloses the three-dimensional object sensing system of claim 1, wherein the processor is to generate the wrapped phase map from the enhanced signal by applying a Fourier transformation to the enhanced signal (col 19 lines 5-15 teaches the phase disparity ϕij for a given pixel at location (i, j) on the sensor can be derived by first recognizing that for a given capture, the measured intensities at each pixel (e.g., at pixel 913) represent the Fourier Transform of the illumination pattern).
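The Fourier-transform route to a wrapped phase can be illustrated in one dimension: isolate the positive-frequency fringe sideband of the spectrum and take the argument of the inverse transform. This is a generic fringe-analysis sketch with assumed parameters (carrier frequency, filter band), not the specific processing performed in Chao or Gao:

```python
import numpy as np

n = 512
x = np.arange(n)
f0 = 32.0 / n                                  # assumed carrier frequency
phase = 1.5 * np.sin(2.0 * np.pi * x / n)      # synthetic surface phase
fringe = 1.0 + 0.5 * np.cos(2.0 * np.pi * f0 * x + phase)

spec = np.fft.fft(fringe)
band = np.zeros(n, dtype=bool)
band[16:49] = True                             # keep bins around the +f0 peak (bin 32)
analytic = np.fft.ifft(np.where(band, spec, 0.0))
wrapped = np.angle(analytic)                   # carrier plus phase, wrapped to (-pi, pi]
```

Zeroing the DC peak and the negative-frequency sideband leaves a complex fringe whose argument is the wrapped carrier-plus-surface phase.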
Regarding claim 6, Chao discloses the three-dimensional object sensing system of claim 1, wherein the structured pattern is a periodic fringe pattern (FIGS. 9-10 & col 5 lines 52-col 6 line 10 teach fringe interferometry: the structured light pattern that results from the interference of two or more diffracted beams generated by the SAW device is a sinusoidal pattern of alternating bright-dark bars (referred to as interference fringes); the structured light pattern can be an interference pattern, e.g., a periodic pattern of alternating bright and dark regions that is generated by a SAW device as shown in cross section A-A of the structured light beam 113).
Regarding claim 7, Chao discloses the three-dimensional object sensing system of claim 1, wherein the projector comprises a side-emitting laser diode, a vertical-cavity surface-emitting laser diode, a superluminescent light-emitting diode, or a light-emitting diode (LED) (Fig. 1 & col 5 lines 28-35 teaches structured light generator 112 may include a light emitter 112a (e.g., a laser or LED)).
Regarding claim 8, Chao discloses the three-dimensional object sensing system of claim 1, wherein the object is an eye and the three-dimensional object sensing system is an eye tracking system (Fig. 3 teaches eye-tracking system 300).
Regarding claim 15, Chao discloses a method, comprising: projecting a first light and a second light onto an object (col 22 lines 3-18 teaches the structured light pattern that results from the interference of two or more diffracted beams generated by the SAW device is a sinusoidal pattern of alternating bright-dark bars (referred to as interference fringes)); capturing a reflection of the first light and a reflection of the second light from the object as two adjacent frames in a dual readout sensor (FIG. 8 shows illustrative interferometric structured light patterns & col 8 lines 18-32 teaches the structured light pattern 350 illuminates the eye 320, resulting in one or more scattered or reflected structured light patterns 355 being generated based on the reflection/scattering of the structured light pattern 350 from, e.g., the corneal surface of the eye 320. The imaging device 310 then captures one or more images of the scattered/reflected structured light patterns 355); deriving an enhanced signal from the captured reflection of the first light and the captured reflection of the second light (col 20 lines 57-col 21 line teaches for each fringe pattern (or frequency), the measurements made for the 3 phase shifts (θn) may be combined to yield a measured phase disparity ϕij that may be wrapped over the range [0, 2π] according to equations (4)-(6)); generating a wrapped phase map from the enhanced signal (col 20 lines 57-col 21 line teaches for each fringe pattern (or frequency), the measurements made for the 3 phase shifts (θn) may be combined to yield a measured phase disparity ϕij that may be wrapped over the range [0, 2π] according to equations (4)-(6)); generating an unwrapped phase map from the wrapped phase map using a depth-calibrated unwrapped phase map (FIGS. 
11A-11C show how more than one spatial frequency can be used to “unwrap” a phase measurement for fringe interferometry & Col 20 lines 49-col 21 line 6 teaches the structured light generator can project a sequence of sinusoidal patterns having different spatial frequencies to help “unwrap” the phase disparity ambiguity); and generating a three-dimensional reconstruction of the object from the unwrapped phase map (col 21 lines 7-15, 30-45 teaches to determine a pixel depth, the measured phase disparities at three frequencies (e.g., Freq 1, Freq 2, and Freq 3) for the pixel may be mapped to a 3D data point in the 3D plot shown in FIG. 11C. The 3D data point may be compared against dotted lines 1110 and regions 1120 in the 3D plot. If the 3D data point falls in a particular region 1120 which includes a particular dotted line 1110, the depth associated with a point on the particular dotted line that is closest to the 3D point among all points on the particular dotted line can be used to determine the pixel depth. If the 3D data point is outside of any region 1120, the measurement errors may be too large and no depth value may be determined for the 3D data point).
Chao does not explicitly disclose including a first frame including a modulated light pattern and a second frame including a non-modulated light pattern in a dual readout sensor; deriving an enhanced signal from the first frame including the captured reflection of the first light and the second frame including the captured reflection of the second light by removing a noise component from the captured reflection of the first light; wherein the dual readout sensor includes a first channel to receive the captured reflection of the first light and a second channel to receive the captured reflection of the second light, wherein the first light has a structured pattern comprising the modulated light pattern and the second light is without the structured pattern and comprising the non-modulated light pattern. However, Gao discloses including a first frame including a modulated light pattern and a second frame including a non-modulated light pattern in a dual readout sensor (page 7 teaches a camera is used for capturing a dual-frequency phase-shift reflection stripe image formed by reflecting the dual-frequency phase-shift initial stripe image through the front surface and the rear surface of the object to be detected, wherein the front surface and the rear surface respectively represent the surface of the object to be detected which is contacted with the projected light first and the surface of the object to be detected which is contacted with the projected light later; page 8 teaches the dual-frequency phase-shift reflection fringe image is generated by continuously capturing reflection fringes of the dual-frequency phase-shift initial fringe image reflected by the front surface and the rear surface of the object to be measured by a camera, wherein the light intensity distribution I_n(x, y) of the reflection fringes is captured by the nth camera, wherein A(x, y) is the ambient background light, and B_f(x, y) and B_r(x, y) are respectively the modulation intensities of 
the front and rear surfaces); deriving an enhanced signal from the first frame including the captured reflection of the first light and the second frame including the captured reflection of the second light by removing a noise component from the captured reflection of the first light (Page 9 teaches performing fast Fourier transform on the dual-frequency phase-shift reflection fringe image to generate a Fourier transform image; filtering the Fourier transform image to obtain stripe information of the front surface and the rear surface of the object to be detected; implementation of filtering a Fourier transform image obtained by performing Fourier transform. For a double-layer transparent object, the camera captures the fringes 201 superimposed after the reflection of the front and rear surfaces, and in order to decouple the phases of the front and rear surfaces, a phase initial value preprocessing method is adopted; still referring to fig. 3, the images are respectively subjected to Fast Fourier Transform (FFT) to generate Fourier transform images, and then the logarithmic spectrum of the Fourier transform images is calculated. Since the surface patterns of the front and rear surfaces are different, the spatial modulation of the sinusoidal fringes is also different, so that a central peak representing the part of the ambient light and two groups of central symmetrical peaks representing the fringes modulated by the front and rear surfaces can be seen on the spectrum surface. 
The stripe density and direction of the front and rear surfaces can be determined according to the shape of the side surface, two groups of central symmetry peaks are respectively used as filters, and stripe information of the front and rear surfaces can be separated, as shown in 203); wherein the dual readout sensor includes a first channel to receive the captured reflection of the first light and a second channel to receive the captured reflection of the second light, wherein the first light has a structured pattern comprising the modulated light pattern and the second light is without the structured pattern and comprising the non-modulated light pattern (page 9 teaches preprocessing a dual-frequency phase shift reflection fringe image, as shown in fig. 3, for a dual-layer transparent object such as a lens, a picture captured by a camera is shown as 201, fast Fourier Transform (FFT) is performed on the image to generate a Fourier transform image, as shown as 202, fringe density and direction of the front and rear surfaces can be determined according to shapes of the front and rear surfaces of the object to be detected, two groups of peaks symmetrical with each other are respectively used as filters, fringe information of the front and rear surfaces can be separated, as shown as 203, truncated phases of the front and rear surfaces can be calculated by using a four-step phase shift method after filtering, and then phase orders and expansion phases of the front and rear surfaces are calculated by using a time-phase expansion technology based on dual-frequency fringes, as shown as 204; page 13 teaches preprocessing module 72, configured to preprocess the dual-frequency phase shift reflection fringe image to obtain a first phase initial value of the front surface of the object to be detected and a second phase initial value of the rear surface of the object to be detected respectively).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Chao for performing an eye-tracking operation on a user utilizing structured light with the method of Gao that performs global optimization on the modulation degrees and phases of the front and rear surfaces of the object to be detected, by adopting a mode reconstruction method based on the first phase initial value, the second phase initial value, and preset first and second modulation degree initial values, to obtain phase resolving results for the front and rear surfaces of the object to be detected, and performs three-dimensional reconstruction of the object to be detected based on the phase resolving results, in order to provide a system with high precision, high speed, and high dynamic range, which has profound significance in the fields of precision free-form surface processing and detection.
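The dual-frequency temporal unwrapping idea running through both references can be sketched with a textbook two-frequency scheme: the 2π fringe order of the wrapped high-frequency phase is chosen so that the unwrapped result agrees with a scaled low-frequency phase that has no ambiguity. The frequency ratio and names below are assumptions for illustration, not the references' exact method:

```python
import numpy as np

def unwrap_two_freq(phi_high_wrapped, phi_low, ratio):
    """Pick the 2*pi fringe order k of the wrapped high-frequency phase
    so the unwrapped result agrees with ratio * phi_low (textbook
    two-frequency temporal unwrapping, illustrative only)."""
    k = np.round((ratio * phi_low - phi_high_wrapped) / (2.0 * np.pi))
    return phi_high_wrapped + 2.0 * np.pi * k

phi_low = np.linspace(0.0, 2.0 * np.pi, 256)     # unit-frequency phase, unambiguous
ratio = 8                                        # assumed high/low frequency ratio
phi_high = np.mod(ratio * phi_low, 2.0 * np.pi)  # wrapped high-frequency phase
phi_unwrapped = unwrap_two_freq(phi_high, phi_low, ratio)
```

The high-frequency phase carries the measurement precision, while the low-frequency phase only needs to be accurate enough to select the correct fringe order.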
Regarding claim 16, Chao discloses the method of claim 15, wherein the structured pattern of the first light includes a periodic fringe pattern (FIGS. 9-10 & col 5 lines 52-col 6 line 10 teach fringe interferometry: the structured light pattern that results from the interference of two or more diffracted beams generated by the SAW device is a sinusoidal pattern of alternating bright-dark bars (referred to as interference fringes); the structured light pattern can be an interference pattern, e.g., a periodic pattern of alternating bright and dark regions that is generated by a SAW device as shown in cross section A-A of the structured light beam 113).
Regarding claim 17, Gao discloses the method of claim 16, wherein deriving the enhanced signal comprises: subtracting non-modulated signals from the second channel from modulated signals plus non-modulated signals from the first channel (Page 9 teaches performing fast Fourier transform on the dual-frequency phase-shift reflection fringe image to generate a Fourier transform image; filtering the Fourier transform image to obtain stripe information of the front surface and the rear surface of the object to be detected). Motivation to combine as indicated in claim 15.
Regarding claim 19, Gao discloses the method of claim 18, wherein deriving the enhanced signal comprises: subtracting a direct component (DC) signal of the second light from a DC and alternating component (AC) signal of the first light (Page 9 teaches performing fast Fourier transform on the dual-frequency phase-shift reflection fringe image to generate a Fourier transform image; filtering the Fourier transform image to obtain stripe information of the front surface and the rear surface of the object to be detected; implementation of filtering a Fourier transform image obtained by performing Fourier transform. For a double-layer transparent object, the camera captures the fringes 201 superimposed after the reflection of the front and rear surfaces, and in order to decouple the phases of the front and rear surfaces, a phase initial value preprocessing method is adopted; still referring to fig. 3, the images are respectively subjected to Fast Fourier Transform (FFT) to generate Fourier transform images, and then the logarithmic spectrum of the Fourier transform images is calculated. Since the surface patterns of the front and rear surfaces are different, the spatial modulation of the sinusoidal fringes is also different, so that a central peak representing the part of the ambient light and two groups of central symmetrical peaks representing the fringes modulated by the front and rear surfaces can be seen on the spectrum surface. The stripe density and direction of the front and rear surfaces can be determined according to the shape of the side surface, two groups of central symmetry peaks are respectively used as filters, and stripe information of the front and rear surfaces can be separated, as shown in 203). Motivation to combine as indicated in claim 15.
Regarding claim 20, Chao discloses the method of claim 15, wherein generating the wrapped phase map comprises: applying a Fourier transformation to the enhanced signal (col 19 lines 5-15 teaches the phase disparity ϕij for a given pixel at location (i, j) on the sensor can be derived by first recognizing that for a given capture, the measured intensities at each pixel (e.g., at pixel 913) represent the Fourier Transform of the illumination pattern).
Regarding claim 21, Chao discloses the method of claim 15, wherein generating the three-dimensional reconstruction of the object from the unwrapped phase map includes converting phase information in the unwrapped phase map to three-dimensional coordinates (Col 21 lines 7-45 & Fig. 11 C teaches to determine a pixel depth, the measured phase disparities at three frequencies (e.g., Freq 1, Freq 2, and Freq 3) for the pixel may be mapped to a 3D data point in the 3D plot shown in FIG. 11C. The 3D data point may be compared against dotted lines 1110 and regions 1120 in the 3D plot. If the 3D data point falls in a particular region 1120 which includes a particular dotted line 1110, the depth associated with a point on the particular dotted line that is closest to the 3D point among all points on the particular dotted line can be used to determine the pixel depth. If the 3D data point is outside of any region 1120, the measurement errors may be too large and no depth value may be determined for the 3D data point).
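Converting an unwrapped phase map into three-dimensional coordinates can be sketched with a simple linear phase-to-height model, where z is proportional to the phase deviation from a reference plane. The calibration constants and function names here are hypothetical, not values or procedures from either reference:

```python
import numpy as np

def phase_to_points(phi, phi_ref, pixel_pitch, sensitivity):
    """Map an unwrapped phase map to an (h, w, 3) grid of 3-D points
    using a linear phase-to-height model (illustrative sketch)."""
    h, w = phi.shape
    z = sensitivity * (phi - phi_ref)                        # height per pixel
    ys, xs = np.mgrid[0:h, 0:w].astype(float) * pixel_pitch  # lateral coordinates
    return np.stack([xs, ys, z], axis=-1)

phi_ref = np.zeros((4, 4))
phi = phi_ref + np.pi / 2          # uniform phase offset above the reference plane
pts = phase_to_points(phi, phi_ref, pixel_pitch=0.01, sensitivity=2.0)
```

Real systems replace the linear model with a per-pixel calibration (as the depth-calibrated map discussed above suggests), but the structure of the conversion is the same.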
Regarding claim 22, Chao discloses the method of claim 21, wherein generating the three-dimensional reconstruction of the object from the unwrapped phase map includes generating a three-dimensional reconstruction of an eye (Col 7 lines 15-20 teaches a 3D image, or depth image, of the surface of the user's eye can be computed).
Regarding claim 23, Chao discloses a non-transitory machine-accessible storage medium that provides instructions that, when executed by a machine, will cause the machine to perform operations (col 6 lines teaches the fringe interferometry can be performed by an image processing system that can include one or more processor(s) and a non-transitory computer-readable medium) comprising: projecting a first light and a second light onto an object (col 22 lines 3-18 teaches the structured light pattern that results from the interference of two or more diffracted beams generated by the SAW device is a sinusoidal pattern of alternating bright-dark bars (referred to as interference fringes)); capturing a reflection of the first light and a reflection of the second light from the object as two frames in a dual readout sensor (FIG. 8 shows illustrative interferometric structured light patterns & col 8 lines 18-32 teaches the structured light pattern 350 illuminates the eye 320, resulting in one or more scattered or reflected structured light patterns 355 being generated based on the reflection/scattering of the structured light pattern 350 from, e.g., the corneal surface of the eye 320.
The imaging device 310 then captures one or more images of the scattered/reflected structured light patterns 355); deriving an enhanced signal from the captured reflection of the first light and the captured reflection of the second light (col 20 lines 57-col 21 line teaches for each fringe pattern (or frequency), the measurements made for the 3 phase shifts (θn) may be combined to yield a measured phase disparity ϕ.sub.ij that may be wrapped over the range [0, 2π] according to equations (4)-(6)); generating a wrapped phase map from the enhanced signal (col 20 lines 57-col 21 line teaches for each fringe pattern (or frequency), the measurements made for the 3 phase shifts (θn) may be combined to yield a measured phase disparity ϕ.sub.ij that may be wrapped over the range [0, 2π] according to equations (4)-(6)); generating an unwrapped phase map from the wrapped phase map using a depth-calibrated unwrapped phase map (FIGS. 11A-11C show how more than one spatial frequency can be used to “unwrap” a phase measurement for fringe interferometry & Col 20 lines 49-col 21 line 6 teaches the structured light generator can project a sequence of sinusoidal patterns having different spatial frequencies to help “unwrap” the phase disparity ambiguity); and generating a three-dimensional reconstruction of the object from the unwrapped phase map (col 21 lines 7-15, 30-45 teaches to determine a pixel depth, the measured phase disparities at three frequencies (e.g., Freq 1, Freq 2, and Freq 3) for the pixel may be mapped to a 3D data point in the 3D plot shown in FIG. 11C. The 3D data point may be compared against dotted lines 1110 and regions 1120 in the 3D plot. If the 3D data point falls in a particular region 1120 which includes a particular dotted line 1110, the depth associated with a point on the particular dotted line that is closest to the 3D point among all points on the particular dotted line can be used to determine the pixel depth.
If the 3D data point is outside of any region 1120, the measurement errors may be too large and no depth value may be determined for the 3D data point).
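The sequence of claimed operations can be illustrated end to end with a 1-D sketch (helper names are hypothetical; `np.unwrap` stands in for the depth-calibrated unwrapping of the claim, and the Fourier-based wrapping stands in for Chao's phase-shift equations):

```python
import numpy as np

def reconstruct_1d(first_frame, second_frame, carrier_bin, phase_to_depth):
    """Sketch: derive the enhanced signal, generate a wrapped phase map
    via the Fourier transform, unwrap it, and map phase to depth."""
    enhanced = first_frame - second_frame          # remove DC/ambient light
    spectrum = np.fft.fft(enhanced)
    keep = np.zeros_like(spectrum)
    keep[carrier_bin - 2:carrier_bin + 3] = spectrum[carrier_bin - 2:carrier_bin + 3]
    wrapped = np.angle(np.fft.ifft(keep))          # wrapped phase map
    unwrapped = np.unwrap(wrapped)                 # unwrapped phase map
    return phase_to_depth(unwrapped)               # 3-D reconstruction step

# Usage on synthetic frames: an 8-period fringe on a 0.5 ambient level.
n = np.arange(256)
first = 0.5 + 0.3 * np.cos(2.0 * np.pi * 8.0 * n / 256.0)   # DC + AC frame
second = np.full(256, 0.5)                                   # DC-only frame
depth = reconstruct_1d(first, second, carrier_bin=8, phase_to_depth=lambda p: p)
```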
Chao does not explicitly disclose capturing a reflection of the first light having a structured pattern comprising a modulated light pattern and a reflection of the second light without the structured pattern and comprising an unmodulated light pattern from the object as two frames comprising a respective first frame and second frame in a dual readout sensor; deriving an enhanced signal from the captured reflection of the first light and the captured reflection of the second light by removing a noise component from the captured reflection of the first light; wherein the dual readout sensor includes a first channel to receive the captured reflection of the first light and a second channel to receive the captured reflection of the second light. However, Gao discloses capturing a reflection of the first light having a structured pattern comprising a modulated light pattern and a reflection of the second light without the structured pattern and comprising an unmodulated light pattern from the object as two frames comprising a respective first frame and second frame in a dual readout sensor (page 7 teaches a camera is used for capturing a dual-frequency phase-shift reflection stripe image formed by reflecting the dual-frequency phase-shift initial stripe image through the front surface and the rear surface of the object to be detected, wherein the front surface and the rear surface respectively represent the surface of the object to be detected which is contacted with the projected light first and the surface of the object to be detected which is contacted with the projected light later; page 8 lines teaches the dual-frequency phase-shift reflection fringe image is generated by continuously capturing reflection fringes of the dual-frequency phase-shift initial fringe image reflected by the front surface and the rear surface of the object to be measured by a camera, wherein the light intensity distribution I_n(x, y) of the reflection fringes captured by the nth camera,
wherein A(x, y) is ambient background light, and B_f(x, y) and B_r(x, y) are respectively the modulation intensities of the front and rear surfaces); deriving an enhanced signal from the captured reflection of the first light and the captured reflection of the second light by removing a noise component from the captured reflection of the first light (Page 9 teaches performing fast Fourier transform on the dual-frequency phase-shift reflection fringe image to generate a Fourier transform image; filtering the Fourier transform image to obtain stripe information of the front surface and the rear surface of the object to be detected; i.e., an implementation of filtering a Fourier transform image obtained by performing a Fourier transform. For a double-layer transparent object, the camera captures the fringes 201 superimposed after the reflection of the front and rear surfaces, and in order to decouple the phases of the front and rear surfaces, a phase initial value preprocessing method is adopted; still referring to fig. 3, the images are respectively subjected to Fast Fourier Transform (FFT) to generate Fourier transform images, and then the logarithmic spectrum of the Fourier transform images is calculated. Since the surface patterns of the front and rear surfaces are different, the spatial modulation of the sinusoidal fringes is also different, so that a central peak representing the part of the ambient light and two groups of central symmetrical peaks representing the fringes modulated by the front and rear surfaces can be seen on the spectrum surface.
The stripe density and direction of the front and rear surfaces can be determined according to the shape of the side surface, two groups of central symmetry peaks are respectively used as filters, and stripe information of the front and rear surfaces can be separated, as shown in 203); wherein the dual readout sensor includes a first channel to receive the captured reflection of the first light and a second channel to receive the captured reflection of the second light (page 9 teaches preprocessing a dual-frequency phase shift reflection fringe image, as shown in fig. 3, for a dual-layer transparent object such as a lens, a picture captured by a camera is shown as 201, fast Fourier Transform (FFT) is performed on the image to generate a Fourier transform image, as shown as 202, fringe density and direction of the front and rear surfaces can be determined according to shapes of the front and rear surfaces of the object to be detected, two groups of peaks symmetrical with each other are respectively used as filters, fringe information of the front and rear surfaces can be separated, as shown as 203, truncated phases of the front and rear surfaces can be calculated by using a four-step phase shift method after filtering, and then phase orders and expansion phases of the front and rear surfaces are calculated by using a time-phase expansion technology based on dual-frequency fringes, as shown as 204; page 13 teaches preprocessing module 72, configured to preprocess the dual-frequency phase shift reflection fringe image to obtain a first phase initial value of the front surface of the object to be detected and a second phase initial value of the rear surface of the object to be detected respectively).
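The spectral filtering Gao describes can be illustrated with a 2-D sketch: the superposed front/rear-surface fringes occupy distinct symmetric peak pairs in the 2-D Fourier spectrum, so masking one peak of each pair and inverse transforming separates the two fringe components. The function name, rectangular mask, and known peak locations are assumptions (Gao locates the peaks from the log-spectrum).

```python
import numpy as np

def separate_surfaces(image, front_peak, rear_peak, radius=3):
    """Mask one spectral peak per surface and inverse-transform to
    recover each surface's complex fringe term separately."""
    F = np.fft.fft2(image)
    rows, cols = np.indices(F.shape)

    def bandpass(peak):
        r, c = peak
        mask = (np.abs(rows - r) <= radius) & (np.abs(cols - c) <= radius)
        return np.fft.ifft2(F * mask)   # complex fringe term for one surface

    return bandpass(front_peak), bandpass(rear_peak)

# Usage: two superposed fringe patterns with distinct frequencies/directions.
rr, cc = np.indices((64, 64))
image = np.cos(2.0 * np.pi * 4.0 * rr / 64.0) + np.cos(2.0 * np.pi * 6.0 * cc / 64.0)
front, rear = separate_surfaces(image, front_peak=(4, 0), rear_peak=(0, 6))
```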
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use the method for performing an eye-tracking operation on a user utilizing structured light of Chao with Gao's method of performing global optimization on the modulation degrees and phases of the front surface and the rear surface of the object to be detected by adopting a mode reconstruction method, based on the first phase initial value, the second phase initial value, a preset first modulation degree initial value, and a preset second modulation degree initial value, to obtain phase resolving results of the front surface and the rear surface of the object to be detected, and performing three-dimensional reconstruction of the object to be detected based on the phase resolving results, in order to provide a system with high precision, high speed, and high dynamic range, which has profound significance in the fields of precision free-form surface processing and detection.
Regarding claim 24, Chao discloses the non-transitory machine-accessible storage medium of claim 23, wherein the operations of generating the three-dimensional reconstruction of the object includes converting phase information in the unwrapped phase map to three-dimensional coordinates (col 21 lines 7-45 & Fig. 11 C teaches to determine a pixel depth, the measured phase disparities at three frequencies (e.g., Freq 1, Freq 2, and Freq 3) for the pixel may be mapped to a 3D data point in the 3D plot shown in FIG. 11C. The 3D data point may be compared against dotted lines 1110 and regions 1120 in the 3D plot. If the 3D data point falls in a particular region 1120 which includes a particular dotted line 1110, the depth associated with a point on the particular dotted line that is closest to the 3D point among all points on the particular dotted line can be used to determine the pixel depth. If the 3D data point is outside of any region 1120, the measurement errors may be too large and no depth value may be determined for the 3D data point).
Regarding claim 25, Chao discloses the non-transitory machine-accessible storage medium of claim 23, wherein the machine includes an eye tracking system (Fig. 3 teaches eye-tracking system 300).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROWINA J CATTUNGAL whose telephone number is (571)270-5922. The examiner can normally be reached Monday-Thursday 7:30am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached on (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROWINA J CATTUNGAL/Primary Examiner, Art Unit 2425