DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Agaoglu et al. (US20230239586).
Regarding claim 1, Agaoglu teaches a time-multiplexed eye-tracking system (figs.1-11, paragraphs [0005], [0028], The device 100 may capture images of/track one eye at a time but alternate illumination/capture/tracking while achieving a combined sampling rate for eye tracking) comprising:
a first set of light emitters (fig.1, illuminator 122b) and a second set of light emitters (fig.1, illuminator 122a) that are to be employed to emit light beams towards a first eye (fig.1, eye 125b) and a second eye (fig.1, eye 125a) of a user (paragraph [0020], the illuminators 122a, 122b may emit light towards the eyes 125a, 125b of the user), respectively;
a first set of light sensors (fig.1, camera 120b) and a second set of light sensors (fig.1, camera 120a; paragraph [0022], the eye cameras 120a, 120b each include one or more photo sensors, other sensors, and/or processing components that use received light to track an eye characteristic of the eye) that are to be employed to sense reflections of the light beams off a surface of the first eye and a surface of the second eye (see paragraphs [0021]-[0024], camera 120a and camera 120b that are to be employed to sense reflections of the light beams off a surface of the eye 125a and a surface of the eye 125b), respectively; and
at least one processor (figs.10-11, paragraph [0005], The device also includes a processor and a computer-readable storage medium comprising instructions that upon execution by one or more processors cause the device to perform operations; paragraph [0037], the method may be performed by a processor; paragraph [0052], The device 100 includes one or more processing units 1102, e.g., microprocessors) communicably coupled to the first set of light sensors (fig.1, the 120b) and the second set of light sensors (fig.1, the 120a), wherein the at least one processor (paragraph [0052], The device 100 includes one or more processing units 1102, e.g., microprocessors) is configured to:
control (paragraph [0052], one or more communication buses 1104 may include circuitry that interconnects and controls communications between components) the first set of light sensors and the second set of light sensors to operate in a time-multiplexed manner (paragraph [0030], FIGS. 3-6 illustrate examples of staggering the capture of eye images and predicting intermediate eye characteristic to provide eye tracking in accordance with some implementations);
process sensor data (fig.11, paragraph [0057], data structures), collected by the first set of light sensors (fig.1, the camera 120b) at a first time instant (paragraph [0004], an eye characteristic of the first eye at the capture times is determined based on the images of the first eye at those times), to determine a gaze direction of the first eye (fig.1, the eye 125b) at the first time instant (paragraph [0004], various implementations track an eye characteristic gaze direction of a user's eyes by staggering image capture of each eye); and
process sensor data (fig.11, paragraph [0057], data structures), collected by the second set of light sensors (fig.1, the camera 120a) at a second time instant, to determine a gaze direction of the second eye at the second time instant (paragraph [0029], FIG. 2 illustrates an example of capturing eye images for eye tracking...; images of both the right eye and the left eye are captured at the same rate at approximately the same times and used to track an eye characteristic of each eye).
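For illustration only (this sketch is not part of the examiner's analysis or the record), the staggered, time-multiplexed capture described in Agaoglu's paragraph [0028] can be modeled as two sensor sets alternating sample slots so that a combined sampling rate is achieved; all names and rates below are hypothetical.

```python
# Hypothetical sketch: alternate left/right eye capture at a combined rate,
# so each eye is effectively sampled at half the combined rate.

def staggered_schedule(combined_rate_hz, num_samples):
    """Return (time_s, eye) pairs that alternate between the two sensor sets."""
    period = 1.0 / combined_rate_hz  # time between consecutive captures overall
    schedule = []
    for i in range(num_samples):
        eye = "first" if i % 2 == 0 else "second"  # alternate sensor sets
        schedule.append((i * period, eye))
    return schedule

# Example: a 90 Hz combined rate gives each eye a sample every ~22 ms,
# with consecutive captures ~11 ms apart and alternating between eyes.
sched = staggered_schedule(90.0, 4)
```

The point of the sketch is only that each "time instant" belongs to exactly one sensor set, matching the claim's first/second time instants.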
Regarding claim 2, Agaoglu discloses the invention as described in Claim 1 and further teaches wherein the at least one processor (paragraph [0005], The device also includes a processor and a computer-readable storage medium comprising instructions that upon execution by one or more processors cause the device to perform operations) is configured to:
estimate a gaze direction of the second eye at the first time instant (fig.5, the time of 504a), based on the determined gaze direction of the first eye at the first time instant (fig.5, the time of 508a) and at least one previous gaze direction of the second eye at a previous time instant (fig.5, the time of 502a); and
estimate a gaze direction of the first eye at the second time instant (fig.5, the time of 506a), based on the determined gaze direction of the second eye at the second time instant (fig.5, the time of 502b) and at least one previous gaze direction of the first eye at a previous time instant (fig.5, the time of 508a).
Regarding claim 3, Agaoglu discloses the invention as described in Claim 1 and further teaches wherein when controlling the first set of light sensors and the second set of light sensors (described in claim 1) to operate in the time-multiplexed manner (described in claim 1), the at least one processor is configured to employ a time offset between the first time instant and the second time instant (paragraph [0027], the left and right eye cameras are run with a 1/X sec, e.g., ~11 ms when X=90 fps, phase offset; paragraph [0005], the intermediate gaze directions between frames for each eye may be predicted based on the other eye's gaze direction at each intermediate frame time and a predicted vergence at that time) that lies within a predefined threshold from 50 percent of a time interval between two consecutive time instants when the sensor data is collected from any one of the first set of light sensors and the second set of light sensors (see figs.2-6, paragraph [0028], the device 100 may capture images of/track one eye at a time but alternate illumination/capture/tracking while achieving a combined sampling rate for eye tracking; thus, Agaoglu teaches a time offset that lies within a predefined threshold from 50 percent of a time interval between two consecutive time instants when the sensor data is collected from any one of the first set of light sensors and the second set of light sensors).
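For illustration only (hypothetical, not part of the examiner's analysis), the claim 3 condition can be sketched numerically: with a ~11 ms phase offset and each eye sampled every ~22 ms (consistent with Agaoglu's 90 fps example in paragraph [0027]), the offset is exactly 50 percent of the per-eye sampling interval; the threshold value below is illustrative.

```python
# Hypothetical sketch: check that the phase offset between the two sensor sets
# lies within a predefined threshold of 50% of the per-eye sampling interval.

def offset_within_threshold(offset_s, per_eye_interval_s, threshold_s):
    """True if offset_s is within threshold_s of half the per-eye interval."""
    target = 0.5 * per_eye_interval_s
    return abs(offset_s - target) <= threshold_s

per_eye_interval = 1.0 / 45.0  # each eye sampled at 45 Hz (~22.2 ms interval)
offset = 1.0 / 90.0            # ~11.1 ms phase offset between the eyes
ok = offset_within_threshold(offset, per_eye_interval, 0.001)  # exactly 50% -> True
```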
Regarding claim 4, Agaoglu discloses the invention as described in Claim 1 and further teaches wherein when controlling (fig.11, paragraph [0052], the one or more communication buses 1104 may include circuitry that interconnects and controls communications between components) the first set of light sensors and the second set of light sensors (fig.1, cameras 120a, 120b; paragraph [0053], sensors 1106 may include one or more eye cameras, one or more other cameras, one or more light sensors) to operate in the time-multiplexed manner (figs.2-6, time-multiplexed), the at least one processor is configured to send instructions (fig.11, paragraph [0058], instruction sets 1140) to the first set of light sensors and the second set of light sensors (fig.1, cameras 120b and 120a), wherein the instructions indicate at least one of:
time slots (see figs.2-6, paragraph [0005], intermediate frames; paragraph [0054], at a particular point in time or multiple points in time at a frame rate) in which the first set of light sensors (fig.1, camera 120b) are to sense the reflections of the light beams (see fig.1, the reflections of the light beams),
time slots in which the second set of light sensors are to sense the reflections of the light beams (see paragraph [0054], a camera may be a frame/shutter-based camera that, at a particular point in time or multiple points in time at a frame rate, generates an image, e.g., of an eye of the user. Each image may include a matrix of pixel values corresponding to pixels of the image which correspond to locations of a matrix of light sensors of the camera),
a sampling rate (fig.5, paragraph [0033], a 1-sample vergence prediction is illustrated by circle 510, encircling the data used in the vergence-based prediction) at which the first set of light sensors (figs.1 and 5, right eye camera) are to sense the reflections of the light beams (see fig.1, the reflections of the light beams),
a sampling rate at which the second set of light sensors are to sense the reflections of the light beams (paragraph [0028], for an intermediate time, using a last sample/capture of the eye).
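For illustration only (hypothetical, outside the record), the instructions recited in claim 4 can be sketched as a message indicating time slots and sampling rates for each sensor set, with interleaved slots producing the time-multiplexed operation; all field names and numbers are illustrative.

```python
# Hypothetical sketch: an instruction payload assigning time slots and
# sampling rates to the two sensor sets. Interleaved slots ensure the
# sets never capture in the same slot (time-multiplexing).

sensor_instructions = {
    "first_set":  {"time_slots": [0, 2, 4, 6], "sampling_rate_hz": 45.0},
    "second_set": {"time_slots": [1, 3, 5, 7], "sampling_rate_hz": 45.0},
}

# No slot is shared between the two sets.
shared = set(sensor_instructions["first_set"]["time_slots"]) & \
         set(sensor_instructions["second_set"]["time_slots"])
```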
Regarding claim 5, Agaoglu discloses the invention as described in Claim 1 and further teaches wherein the at least one processor (described in claim 1) is configured to control the first set of light emitters and the second set of light emitters (fig.1, 122a and 122b) to operate in the time-multiplexed manner (figs.2-6, in the time-multiplexed manner; paragraphs [0020]-[0023], the illuminators 122a, 122b, reflects off the eyes 125a, 125b, and is detected by the cameras 120a, 120b and used to determine eye characteristics of the eyes 125a, 125b).
Regarding claim 6, Agaoglu discloses the invention as described in Claim 5 and further teaches wherein, when controlling the first set of light emitters and the second set of light emitters to operate in the time-multiplexed manner (described in claim 5), the at least one processor is configured to send instructions (fig.11, instruction sets 1140) to the first set of light emitters and the second set of light emitters (fig.1, the illuminators 122a, 122b), wherein the instructions indicate at least one of:
time slots in which the first set of light emitters are to emit the light beams,
time slots in which the second set of light emitters are to emit the light beams,
a rate at which the first set of light emitters are to emit the light beams,
a rate (paragraph [0033], in FIG. 5, the left eye camera and the right eye camera capture images at ½ the rate illustrated in FIG. 2) at which the second set of light emitters (fig.1, the illuminator 122a) are to emit the light beams (paragraph [0021], illuminators 122a, 122b emit light towards the eyes 125a, 125b).
Regarding claim 7, Agaoglu teaches a method implemented by a time-multiplexed eye-tracking system (figs.1-11, paragraph [0028], The device 100 may capture images of/track one eye at a time but alternate illumination/capture/tracking while achieving a combined sampling rate for eye tracking) comprising a first set of light emitters (fig.1, illuminator 122b), a second set of light emitters (fig.1, illuminator 122a), a first set of light sensors (fig.1, camera 120b), a second set of light sensors (fig.1, camera 120a), and at least one processor (figs.10-11, paragraph [0005], The device also includes a processor and a computer-readable storage medium comprising instructions that upon execution by one or more processors cause the device to perform operations; paragraph [0037], the method may be performed by a processor; paragraph [0052], The device 100 includes one or more processing units 1102, e.g., microprocessors), wherein the method comprises:
controlling (paragraph [0052], one or more communication buses 1104 may include circuitry that interconnects and controls communications between components) the first set of light sensors and the second set of light sensors to operate in a time-multiplexed manner (paragraph [0030], FIGS. 3-6 illustrate examples of staggering the capture of eye images and predicting intermediate eye characteristic to provide eye tracking in accordance with some implementations);
processing sensor data (fig.11, paragraph [0057], data structures), collected by the first set of light sensors (fig.1, the camera 120b) at a first time instant (paragraph [0004], an eye characteristic of the first eye at the capture times is determined based on the images of the first eye at those times), to determine a gaze direction of the first eye (fig.1, the eye 125b) at the first time instant (paragraph [0004], various implementations track an eye characteristic gaze direction of a user's eyes by staggering image capture of each eye); and
processing sensor data (fig.11, paragraph [0057], data structures), collected by the second set of light sensors (fig.1, the camera 120a) at a second time instant, to determine a gaze direction of the second eye at the second time instant (paragraph [0029], FIG. 2 illustrates an example of capturing eye images for eye tracking...; images of both the right eye and the left eye are captured at the same rate at approximately the same times and used to track an eye characteristic of each eye).
Regarding claim 8, Agaoglu discloses the invention as described in Claim 7 and further teaches wherein the method comprises:
estimating a gaze direction of the second eye at the first time instant (fig.5, the time of 504a), based on the determined gaze direction of the first eye at the first time instant (fig.5, the time of 508a) and at least one previous gaze direction of the second eye at a previous time instant (fig.5, the time of 502a); and
estimating a gaze direction of the first eye at the second time instant (fig.5, the time of 506a), based on the determined gaze direction of the second eye at the second time instant (fig.5, the time of 502b) and at least one previous gaze direction of the first eye at a previous time instant (fig.5, the time of 508a).
Regarding claim 9, Agaoglu discloses the invention as described in Claim 7 and further teaches wherein at the step of controlling the first set of light sensors and the second set of light sensors to operate in a time-multiplexed manner (figs.2-6, the time-multiplexed manner), the method comprises employing a time offset between the first time instant and the second time instant (paragraph [0027], the left and right eye cameras are run with a 1/X sec, e.g., ~11 ms when X=90 fps, phase offset; paragraph [0005], the intermediate gaze directions between frames for each eye may be predicted based on the other eye's gaze direction at each intermediate frame time and a predicted vergence at that time) that lies within a predefined threshold from 50 percent of a time interval between two consecutive time instants when the sensor data is collected from any one of the first set of light sensors and the second set of light sensors (see figs.2-6, paragraph [0028], the device 100 may capture images of/track one eye at a time but alternate illumination/capture/tracking while achieving a combined sampling rate for eye tracking; thus, Agaoglu teaches a time offset that lies within a predefined threshold from 50 percent of a time interval between two consecutive time instants when the sensor data is collected from any one of the first set of light sensors and the second set of light sensors).
Regarding claim 10, Agaoglu discloses the invention as described in Claim 7 and further teaches wherein at the step of controlling (fig.11, paragraph [0052], the one or more communication buses 1104 may include circuitry that interconnects and controls communications between components) the first set of light sensors and the second set of light sensors (fig.1, cameras 120a, 120b; paragraph [0053], sensors 1106 may include one or more eye cameras, one or more other cameras, one or more light sensors) to operate in the time-multiplexed manner (figs.2-6, time-multiplexed), the method comprises sending instructions (fig.11, paragraph [0058], instruction sets 1140) to the first set of light sensors and the second set of light sensors (fig.1, cameras 120b and 120a), wherein the instructions indicate at least one of:
time slots (see figs.2-6, paragraph [0005], intermediate frames; paragraph [0054], at a particular point in time or multiple points in time at a frame rate) in which the first set of light sensors (fig.1, camera 120b) are to sense the reflections of the light beams (fig.1, the reflections of the light beams),
time slots in which the second set of light sensors are to sense the reflections of the light beams (see paragraph [0054], a camera may be a frame/shutter-based camera that, at a particular point in time or multiple points in time at a frame rate, generates an image, e.g., of an eye of the user. Each image may include a matrix of pixel values corresponding to pixels of the image which correspond to locations of a matrix of light sensors of the camera),
a sampling rate (fig.5, paragraph [0033], a 1-sample vergence prediction is illustrated by circle 510, encircling the data used in the vergence-based prediction) at which the first set of light sensors (figs.1 and 5, right eye camera) are to sense the reflections of the light beams (see fig.1, the reflections of the light beams),
a sampling rate at which the second set of light sensors are to sense the reflections of the light beams (paragraph [0028], for an intermediate time, using a last sample/capture of the eye).
Regarding claim 11, Agaoglu discloses the invention as described in Claim 7 and further teaches wherein the method comprises controlling the first set of light emitters and the second set of light emitters (the illuminators 122a, 122b) to operate in the time-multiplexed manner (figs.2-6, in the time-multiplexed manner; paragraphs [0020]-[0023], the illuminators 122a, 122b, reflects off the eyes 125a, 125b, and is detected by the cameras 120a, 120b and used to determine eye characteristics of the eyes 125a, 125b).
Regarding claim 12, Agaoglu discloses the invention as described in Claim 7 and further teaches wherein at the step of controlling the first set of light emitters and the second set of light emitters (illuminators 122a, 122b) to operate in the time-multiplexed manner (figs.2-6, the time-multiplexed manner), the method comprises sending instructions to the first set of light emitters and the second set of light emitters, wherein the instructions indicate at least one of:
time slots in which the first set of light emitters are to emit the light beams,
time slots in which the second set of light emitters are to emit the light beams,
a rate at which the first set of light emitters are to emit the light beams,
a rate (paragraph [0033], in FIG. 5, the left eye camera and the right eye camera capture images at ½ the rate illustrated in FIG. 2) at which the second set of light emitters (fig.1, the illuminator 122a) are to emit the light beams (paragraph [0021], illuminators 122a, 122b emit light towards the eyes 125a, 125b).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 and 7 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Russel (US20240085980).
Regarding claim 1, Russel teaches a time-multiplexed eye-tracking system (Russel, figs.1-10, abstract, paragraphs [0016-0017], a time-multiplexed eye-tracking system; capturing visual data of a left eye and a right eye using alternate sampling) comprising:
a first set of light emitters and a second set of light emitters that are to be employed to emit light beams (Russel, fig.6, paragraph [0074], light sources 326a and 326b, such as light emitting diodes “LED”s) towards a first eye and a second eye of a user (Russel, fig.6, paragraph [0074], the light sources 326a and 326b may generate glints, e.g., reflections off of the user's eyes), respectively;
a first set of light sensors and a second set of light sensors (Russel, fig.6, paragraph [0074] infrared cameras 324 paired with infrared light sources 326) that are to be employed to sense reflections of the light beams off a surface of the first eye and a surface of the second eye, respectively (fig.6, paragraph [0074] The light sources 326a and 326b may generate glints, e.g., reflections off of the user's eyes that appear in images of the eye captured by camera 324, one light source 326 and one camera 324 associated with a single one of the user's eyes 610); and
at least one processor (fig.6, CPUs, GPU) communicably coupled to the first set of light sensors and the second set of light sensors (fig.6, the cameras 324), wherein the at least one processor (fig.6, CPU 614) is configured to:
control (fig.6, controller 618) the first set of light sensors and the second set of light sensors (Russel, paragraph [0075], Eye tracking module 614 may receive images from eye tracking camera(s) 324 and may analyze the images to extract various pieces of information; paragraph [0077], the render controller 618 may use information on the user's center of perspective to simulate a render camera) to operate in a time-multiplexed manner (figs.7-10, the time-multiplexed; paragraph [0084], One or more eye tracking cameras, such as cameras 324, can capture video, including, for instance, frames of a video, and/or images, sometimes referred to as visual data or gaze vectors of a left eye and right eye at a particular frame rate);
process sensor data (fig.6, buffer 615; paragraph [0075], Eye tracking module 614 may receive images from eye tracking cameras 324 and may analyze the images to extract various pieces of information), collected by the first set of light sensors at a first time instant, to determine a gaze direction of the first eye at the first time instant (see Russel, fig.8, paragraphs [0086]-[0091]: the first set of light sensors 324 at a first time instant t1, to determine a gaze direction of the left eye at the first time instant t1); and
process sensor data (described above), collected by the second set of light sensors at a second time instant, to determine a gaze direction of the second eye at the second time instant (see Russel, fig.8, paragraphs [0086]-[0091]: the second set of light sensors 324 at a second time instant t2, to determine a gaze direction of the right eye at the second time instant t2).
Regarding claim 7, Russel teaches a method implemented by a time-multiplexed eye-tracking system (figs.1-10, abstract, paragraphs [0016]-[0017], a time-multiplexed eye-tracking system; capturing visual data of a left eye and a right eye using alternate sampling) comprising a first set of light emitters, a second set of light emitters (Russel, fig.6, paragraph [0074], light sources 326a and 326b, such as light emitting diodes “LED”s), a first set of light sensors, a second set of light sensors (Russel, fig.6, paragraph [0074], infrared cameras 324 paired with infrared light sources 326), and at least one processor (see Russel, fig.6, CPUs, GPU), wherein the method comprises:
controlling (fig.6, controller 618) the first set of light sensors and the second set of light sensors (Russel, paragraph [0075], Eye tracking module 614 may receive images from eye tracking camera(s) 324 and may analyze the images to extract various pieces of information; paragraph [0077], the render controller 618 may use information on the user's center of perspective to simulate a render camera) to operate in a time-multiplexed manner (figs.7-10, the time-multiplexed; paragraph [0084], one or more eye tracking cameras, such as cameras 324, can capture video, including, for instance, frames of a video, and/or images, sometimes referred to as visual data or gaze vectors of a left eye and right eye at a particular frame rate);
processing sensor data (fig.6, buffer 615; paragraph [0075], Eye tracking module 614 may receive images from eye tracking cameras 324 and may analyze the images to extract various pieces of information), collected by the first set of light sensors at a first time instant, to determine a gaze direction of the first eye at the first time instant (see Russel, fig.8, paragraphs [0086]-[0091]: the first set of light sensors 324 at a first time instant t1, to determine a gaze direction of the left eye at the first time instant t1); and
processing sensor data (described above), collected by the second set of light sensors at a second time instant, to determine a gaze direction of the second eye at the second time instant (see Russel, fig.8, paragraphs [0086]-[0091]: the second set of light sensors 324 at a second time instant t2, to determine a gaze direction of the right eye at the second time instant t2).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUEI-JEN LEE EDENFIELD whose telephone number is (571) 272-3005. The examiner can normally be reached Mon.-Thurs., 8:00 am-5:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thomas Pham, can be reached at 571-272-3689. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Services Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/KUEI-JEN L EDENFIELD/
Examiner, Art Unit 2872
/THOMAS K PHAM/Supervisory Patent Examiner, Art Unit 2872