DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 31-33, 38-41, 43, 45, 49 and 50 have been amended, claim 51 is new, and claim 36 is cancelled.
Response to Arguments
Applicant’s arguments, see page 10, filed 12/09/2025, with respect to the 112 rejections of claims 32-33, 36, 38, 40-41, 43, 45 and 49 have been fully considered and are persuasive. The 112 rejections of claims 32-33, 36, 38, 40-41, 43, 45 and 49 have been withdrawn.
Applicant’s arguments with respect to claim 39 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s arguments with respect to claims 31 and 50 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 31-35, 37-38, 46 and 50 are rejected under 35 U.S.C. 103 as being unpatentable over Horesh et al. (US 20160249800 A1) in view of Trail et al. (US 20180246590 A1), Lin (US 20180335839 A1) and Wilhelm (US 20200342201 A1).
Regarding claim 31, Horesh discloses in at least figure 1, a method for generating data suitable
for determining (eye tracking method paragraph [0054]) at least one eye state variable (gaze vector 110
fig. 1) of at least one eye of a subject (human eye paragraph [0016]), the eye (human eye paragraph
[0016]) comprising an eyeball (fluid filled portion of the eye 120 fig. 1), an iris (iris 102 fig. 1) defining a
pupil (pupil 100 fig. 1), and a cornea (cornea 103 fig. 1), the at least one eye state variable (gaze vector
110 fig. 1) being derivable from at least one image (a plurality of eye images are obtained from an array
camera paragraph [0054] to determine a gaze vector based on pupil center position paragraph [0063])
of the eye (human eye paragraph [0016]) taken with a camera (array camera 200 fig. 2) of known
camera intrinsics (an array camera 200 includes an image sensor having a plurality of lenses paragraph
[0019]), the method comprising:
providing a first 3D eye model modeling corneal refraction (calculating the 3D position of the pupil edge can take into consideration the refractive properties of the cornea paragraph [0024], described as “only a 3D eye model based eye state determination which takes effects of corneal refraction into account is maximally useful for the purpose of precision pupillometry” in the current application paragraph [007]);
using a given algorithm to calculate (the processing unit 700 executes instructions to perform
any of the methodologies paragraph [0046]) the at least one eye state variable (gaze vector 110 fig. 1)
using one or more of the synthetic images (an intermediate image may be synthesized paragraph
[0025]) and a further 3D eye model (a 3D position of the pupil may be determined from the 3D position
of the corneal center of curvature paragraph [0023]) having at least one parameter (pupil edge
paragraph [0023]);
determining a characteristic (glints paragraph [0026]) of the image of the pupil (originally
obtained images paragraph [0027]) within each of the synthetic images (an intermediate image may be
synthesized paragraph [0025]),
Horesh does not explicitly disclose, generating, using a model for the camera of the known camera intrinsics, synthetic images of several model eyes according to the first 3D eye model, for a plurality of given values of the at least one eye state variable;
wherein the characteristic of the image of the pupil is a measure of the circularity of the pupil area outline,
determining one or more hypothetically optimal values of the at least one parameter of the further 3D eye model that minimize the error between the value(s) of the at least one given eye state variable and the value(s) of the corresponding eye state variable obtained when applying the given algorithm and
establishing a relationship between the one or more hypothetically optimal values of the at least
one parameter of the further 3D eye model and the characteristic of the pupil image
wherein the at least one parameter of at least one of the first 3D eye and the further 3D eye model is selected from the list of:
a distance between a center of an eyeball and a center of a pupil, a size measure of the eyeball, a size measure of an iris, a distance pupil center to cornea center, a distance cornea center to eyeball center, a distance pupil center to limbus center, a distance crystalline lens to eyeball center, a distance crystalline lens to cornea center, a distance crystalline lens to corneal apex, a refractive property of a vitreous humor, a refractive property of the crystalline lens, an ellipsoidal shape measure of an eyeball, an ellipsoidal shape measure of the cornea, and a degree of astigmatism.
However Trail discloses in at least figure 3, generating, using a model for the camera (this image generating function, IM(α, β, P), may be implemented by rendering a mathematical model of the structured light pattern (e.g., a ray model) on the 3D model of the eye and the image output by IM(α, β, P) may be an approximation of the image expected to be captured by the camera 320 for the given values of α, β, and P paragraph [0058]) of the known camera intrinsics (the camera 320 has properties to capture an image of the eye that is approximated by the 3D model of the eye paragraph [0058], described as “camera properties (typically including camera intrinsics) of the camera intended to be used in a corresponding device for producing image data of a subject's eye” in current application paragraph [00168]) synthetic images (the image output by IM(α, β, P) is an approximation from the structured light pattern (e.g., a ray model) paragraph [0058], described as “the synthetic images may be achieved by raytracing an arrangement of a camera model” in current application paragraph [00168]), of several model eyes (the model store 410 may contain two models M1 and M2, one for each eye paragraph [0052]) according to the first 3D eye model (M is a 3D model which approximates the surface geometry of the eye paragraph [0053]), for a plurality of given values (the 3D model may incorporate values α and β to correspond to the angular direction of the foveal axis, the roll angle γ, and the pupil diameter d paragraph [0053]).
Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to use the camera model as taught by Trail in the eye tracking method of Horesh. One would have been motivated to do so because the model M of a user's eye is used to estimate the current orientation of the eye (paragraph [0052]).
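For illustration only, and not as part of the claim mapping, the kind of synthetic-image generation Trail describes (rendering a model eye through a camera model of known intrinsics for given eye state values) can be sketched as follows. The pinhole camera model, the function name `synthesize_pupil_outline`, and all numeric values are assumptions of this sketch, not taken from Trail or Horesh:

```python
import numpy as np

def synthesize_pupil_outline(K, pupil_center, gaze_dir, pupil_radius=0.004, n=64):
    """Project a 3D pupil circle (perpendicular to the gaze direction) onto
    the image plane of a pinhole camera with intrinsic matrix K.
    Returns an (n, 2) array of pixel coordinates of the pupil outline.
    All geometry and values here are illustrative assumptions."""
    gaze = gaze_dir / np.linalg.norm(gaze_dir)
    # Build an orthonormal basis spanning the pupil plane.
    helper = np.array([0.0, 1.0, 0.0]) if abs(gaze[1]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(gaze, helper); u /= np.linalg.norm(u)
    v = np.cross(gaze, u)
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    circle3d = (pupil_center[None, :]
                + pupil_radius * (np.cos(angles)[:, None] * u[None, :]
                                  + np.sin(angles)[:, None] * v[None, :]))
    # Pinhole projection: apply K, then divide by depth.
    proj = (K @ circle3d.T).T
    return proj[:, :2] / proj[:, 2:3]

# Hypothetical intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Frontal gaze at 0.5 m: the synthetic pupil outline is a circle of
# radius 800 * 0.004 / 0.5 = 6.4 px around the principal point.
outline = synthesize_pupil_outline(K, np.array([0.0, 0.0, 0.5]),
                                   np.array([0.0, 0.0, -1.0]))
```

Repeating this for a plurality of given eye state values (gaze directions) yields the set of synthetic pupil images referred to in the claim.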
Additionally Lin discloses in at least figure 8, determining one or more hypothetically optimal
values (calculating estimated pupil region S301 fig. 8) of the at least one parameter (ellipse parameters
for the pupil region paragraph [0044]) of the further 3D eye model (3D eye model paragraph [0062])
that minimize the error between the value(s) (the processing circuit 110 may be configured to measure
multiple error values and optimize the eye model paragraph [0065]) of the at least one given eye state
variable (gaze vector from pupil region of interest S4 fig. 8) and the value(s) of the corresponding eye
state variable (viewpoint of the eye according to the gaze vector S5 fig. 8) obtained when applying the
given algorithm (calculated by processing circuit S5 fig. 8) and
establishing a relationship (a polynomial equation may be obtained to indicate the relationship between the ellipse parameters (e.g., major axis and the minor axis) and polar coordinates of the corresponding gazing points paragraph [0045]) between the one or more hypothetically optimal values (calculating estimated pupil region S301 fig. 8) of the at least one parameter (ellipse parameters for the pupil region paragraph [0044]) of the further 3D eye model (3D eye model paragraph [0062]) and the characteristic (gazing points paragraph [0045]) of the pupil image (image of the eye S1 fig. 8).
Lin further teaches (paragraphs [0045]-[0046]): "In operation S14, the processing circuit 110 is
configured to obtain the eye model according to the calibration vectors and calibration viewing directions corresponding to the calibration vectors. Specifically, in some embodiments, a polynomial
equation may be obtained to indicate the relationship between the ellipse parameters (e.g., major axis and the minor axis) and polar coordinates of the corresponding gazing points, which represent the
viewing directions of the eye…
Since the calibration of the eye model is performed in operation S1 to meet one or more users'
specific pupil shape, the accuracy of eye tracking is improved."
Therefore it would be obvious for one skilled in the art before the effective filing date of the
claimed invention to incorporate the steps of making a calibrated eye model as taught by Lin which
establishes a relationship between one or more hypothetically optimal parameter values and the gaze
vector into the eye tracking method of Horesh. One would have been motivated to perform this
calibrating step because Lin teaches that the accuracy of the eye tracking is improved by this calibration
process (Lin paragraphs [0045]-[0046]).
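As a rough illustration of the calibration Lin describes (obtaining a polynomial equation relating ellipse parameters to viewing directions), the following sketch fits such a polynomial on hypothetical calibration data. The tilted-circle assumption (axis ratio equal to the cosine of the gaze angle) and all values are illustrative assumptions, not taken from Lin:

```python
import numpy as np

# Hypothetical calibration data: for each calibration gaze angle (radians),
# the observed minor/major axis ratio of the pupil ellipse.  Under a simple
# tilted-circle model the ratio is cos(angle); this stands in for Lin's
# calibration vectors and corresponding viewing directions.
cal_angles = np.linspace(0.0, 0.8, 9)
cal_ratios = np.cos(cal_angles)

# Fit a polynomial mapping the ellipse axis ratio to the gaze angle,
# analogous to Lin's polynomial relating ellipse parameters to gazing points.
coeffs = np.polyfit(cal_ratios, cal_angles, deg=3)

def gaze_angle_from_ratio(ratio):
    """Evaluate the calibrated polynomial for an observed axis ratio."""
    return np.polyval(coeffs, ratio)
```

At run time, the calibrated polynomial converts an observed ellipse axis ratio directly into an estimated viewing direction, which is the sense in which the calibration improves tracking accuracy.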
Further Wilhelm discloses in at least figure 4a, wherein the characteristic (the outline or contour can be used to determine the calibration of the image paragraph [0047]) of the image of the pupil (image 37 includes pupil 41 fig. 4a) is a measure of the circularity of the pupil area outline (outline or contour 42 of the pupil 41 fig. 4a),
wherein the at least one parameter of at least one of the first 3D eye (3D model of eyeball 31 paragraph [0032] can determine the refractive index of the cornea paragraph [0040]) and the further 3D eye model (calibrated model paragraph [0057]) is selected from the list of:
a distance between a center of an eyeball and a center of a pupil (not required by the claim), a size measure of the eyeball (not required by the claim), a size measure of an iris (features of the 3D models include size and shape of the iris contour paragraph [0007]), a distance pupil center to cornea center (the 3d model includes a distance d fig. 2 between the pupil 33 and a distal point (distal pole) of the cornea surface paragraph [0032]), a distance cornea center to eyeball center (not required by the claim), a distance pupil center to limbus center (not required by the claim), a distance crystalline lens to eyeball center (not required by the claim), a distance crystalline lens to cornea center (not required by the claim), a distance crystalline lens to corneal apex (not required by the claim), a refractive property of a vitreous humor (not required by the claim), a refractive property of the crystalline lens (not required by the claim), an ellipsoidal shape measure of an eyeball (the 3d model includes a radius R fig. 2 of the eyeball 31 paragraph [0033]), an ellipsoidal shape measure of the cornea (the 3d model includes a radius r fig. 2 of an imagined sphere 34 aligned with the cornea 32 paragraph [0034]), and a degree of astigmatism (not required by the claim).
Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to use the features in the 3D models as taught by Wilhelm in the eye tracking method of Horesh. These parameters are used to show the actual detection of an eye pose (paragraph [0041]).
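A circularity measure of a pupil area outline, of the kind attributed to Wilhelm above, can be illustrated by the minor-to-major axis ratio estimated from the outline's covariance matrix. The helper `axis_ratio` below is an assumption of this sketch, not a function from any cited reference:

```python
import numpy as np

def axis_ratio(points):
    """Minor/major axis ratio of a set of 2D points sampled on an ellipse
    outline, estimated from the eigenvalues of the covariance matrix.
    1.0 means a perfectly circular outline (frontal pupil); smaller values
    indicate a more elongated (oblique) pupil image.  Illustrative only."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals = np.linalg.eigvalsh(cov)  # ascending order
    return float(np.sqrt(eigvals[0] / eigvals[1]))

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]            # ratio 1.0
ellipse = np.c_[2.0 * np.cos(theta), np.sin(theta)]     # ratio 0.5
```

Such a scalar characteristic, computed per synthetic image, is what the claimed method relates to the hypothetically optimal model parameter values.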
Regarding Claim 32, the combination of Horesh, Trail, Lin and Wilhelm discloses all the limitations of claim 31.
Horesh does not explicitly disclose, wherein the characteristic of the image of the pupil is
a ratio of minor to major axis length of an ellipse fit to the pupil image area or outline, a measure of variation of the curvature of the pupil outline, a measure of elongation or a measure of the bounding box of the pupil area.
However Lin further discloses, wherein the characteristic (gazing points paragraph [0045]) of the
image of the pupil (image of the eye S1 fig. 8) is a ratio of minor to major axis length of an ellipse fit to the pupil image area or outline (the ellipse parameters include the major and minor axis of the pupil paragraph [0045] and a ratio is used to calculate a viewpoint paragraph [0053]), a measure of variation of the curvature of the pupil outline (not required by claim), a measure of elongation or a measure of the bounding box of the pupil area (not required by claim).
Therefore it would be obvious for one skilled in the art before the effective filing date of the
claimed invention to incorporate the steps of making a calibrated eye model as taught by Lin which
establishes a relationship between one or more hypothetically optimal parameter values and the gaze
vector into the eye tracking method of Horesh. One would have been motivated to perform this
calibrating step because Lin teaches that the accuracy of the eye tracking is improved by this calibration process (Lin paragraphs [0045]-[0046]).
Regarding Claim 33, the combination of Horesh, Trail, Lin and Wilhelm discloses all the limitations of claim 31.
Horesh does not explicitly disclose, wherein the relationship between the hypothetically optimal
values of the at least one further 3D eye model parameter and the characteristic of the pupil image is
chosen from the list of a constant value, a linear relationship, a polynomial relationship, or another non-linear relationship.
However Lin further discloses, wherein the relationship (a polynomial equation may be obtained
to indicate the relationship between the ellipse parameters (e.g., major axis and the minor axis) and
polar coordinates of the corresponding gazing points paragraph [0045]) between the hypothetically
optimal values (calculating estimated pupil region S301 fig. 8) of the at least one further 3D eye model (3D eye model paragraph [0062]) parameter (ellipse parameters for the pupil region paragraph [0044]) and the characteristic (gazing points paragraph [0045]) of the pupil image (image of the eye S1 fig. 8) is chosen from the list of a constant value (not required by claim), a linear relationship (not required by claim), a polynomial relationship (polynomial equation paragraph [0045]), or another non-linear relationship (not required by claim).
Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to incorporate the steps of making a calibrated eye model as taught by Lin which establishes a relationship between one or more hypothetically optimal parameter values and the gaze vector into the eye tracking method of Horesh. One would have been motivated to perform this calibrating step because Lin teaches that the accuracy of the eye tracking is improved by this calibration process (Lin paragraphs [0045]-[0046]).
Regarding Claim 34, the combination of Horesh, Trail, Lin and Wilhelm discloses all the limitations of claim 31 and Horesh further discloses, wherein the further 3D eye model (a 3D position of the pupil may be determined from the 3D position of the corneal center of curvature paragraph [0023]) has at most (the disparity analysis of the pupil edge can be the only parameter used to calculate the distance paragraph [0024]) one parameter (pupil edge paragraph [0024]).
Regarding Claim 35, the combination of Horesh, Trail, Lin and Wilhelm discloses all the limitations of claim 31 and Horesh further discloses, wherein the further 3D eye model (a 3D position of the pupil may be determined from the 3D position of the corneal center of curvature paragraph [0023]) has multiple
parameters (pupil edge and corneal center of curvature paragraph [0023]) and a relationship is
established for more than one of them (the distance to the pupil edge can be estimated by adding the
distance from the pupil plane to the corneal center of curvature paragraph [0023]).
Regarding Claim 37, the combination of Horesh, Trail, Lin and Wilhelm discloses all the limitations of claim 31.
Horesh does not explicitly disclose, wherein said relationship is the same for all eye state
variables, or wherein a different relationship between a parameter of the further 3D eye model and the characteristic of the pupil image is established for each eye state variable or for groups of eye state
variables.
However Lin further discloses, wherein said relationship (a polynomial equation may be
obtained to indicate the relationship between the ellipse parameters (e.g., major axis and the minor
axis) and polar coordinates of the corresponding gazing points paragraph [0045]) is the same (the cost
function shows the relationship between the estimated pupil region and the pupil region of interest to
optimize the model which has a matrix relationship between the viewpoint of the eye and the gaze
vector of the pupil region of interest paragraphs [0066-0068]) for all eye state variables (gaze vector
from pupil region of interest S4 fig. 8), or wherein a different relationship between a parameter of the
further 3D eye model and the characteristic of the pupil image is established for each eye state variable
or for groups of eye state variables.
Therefore it would be obvious for one skilled in the art before the effective filing date of the
claimed invention to incorporate the steps of making a calibrated eye model as taught by Lin which
establishes a relationship between one or more hypothetically optimal parameter values and the gaze
vector into the eye tracking method of Horesh. One would have been motivated to perform this
calibrating step because Lin teaches that the accuracy of the eye tracking is improved by this calibration process (Lin paragraphs [0045]-[0046]).
Regarding Claim 38, the combination of Horesh, Trail, Lin and Wilhelm discloses all the limitations of claim 31.
Horesh does not disclose, wherein the eye state variable is selected from the list of a pose of an eye, a 3D circle center line, a 3D eye intersecting line, and a size measure of a pupil of an eye.
However Wilhelm further discloses, wherein the eye state variable (features of the optical projection paragraph [0041]) is selected from the list of a pose of an eye (each feature is based on the known eye pose paragraph [0048]), a 3D circle center line (not required by claim), a 3D eye intersecting line (not required by claim), and a size measure of a pupil of an eye (the features include an outline or contour of the pupil paragraph [0047]).
Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to use the features in the 3D models as taught by Wilhelm in the eye tracking method of Horesh. These parameters are used to show the actual detection of an eye pose (paragraph [0041]).
Regarding claim 46, the combination of Horesh, Trail, Lin and Wilhelm discloses all the limitations of claim 31.
Horesh does not explicitly disclose, wherein the given algorithm does not take into account a glint from the eye for calculating the at least one eye state variable, wherein the algorithm is glint-free, and/or wherein the algorithm does not require structured light and/or special purpose illumination to derive eye state variables, and/or wherein the given algorithm calculates the at least one eye state variable in a non-iterative way.
However Lin discloses in at least fig. 8, wherein the given algorithm (calculated by processing
circuit S5 fig. 8) does not take into account a glint from the eye for calculating the at least one eye state
variable (glint is not used to calculate the eye state vector fig. 8), wherein the algorithm is glint-free,
and/or wherein the algorithm does not require structured light and/or special purpose illumination to
derive eye state variables, and/or wherein the given algorithm calculates the at least one eye state
variable in a non-iterative way.
Therefore it would be obvious for one skilled in the art before the effective filing date of the
claimed invention to incorporate the steps of making a calibrated eye model as taught by Lin which
establishes a relationship between one or more hypothetically optimal parameter values and the gaze
vector into the eye tracking method of Horesh. One would have been motivated to perform this
calibrating step because Lin teaches that the accuracy of the eye tracking is improved by this calibration process (Lin paragraphs [0045]-[0046]).
Regarding claim 50, Horesh discloses in at least figure 1, a computer program product or a nonvolatile computer-readable storage medium comprising instructions (the machine may be a wearable electronic device, an onboard vehicle system, personal computer (PC), a tablet PC, a hybrid tablet, a personal digital assistant (PDA), a mobile telephone, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine paragraph [0046]) which, when executed by one or more processors of a system (the machine paragraph [0046]), cause the system to carry out the following steps (a set or sequence of instructions may be executed to cause the machine to perform any one of the methodologies discussed herein paragraph [0046]):
providing a first 3D eye model modeling corneal refraction (3D position of corneal curvature paragraph [0022] and refractive index paragraph [0023]);
using a given algorithm to calculate (the processing unit 700 executes instructions to perform
any of the methodologies paragraph [0046]) the at least one eye state variable (gaze vector 110 fig. 1)
using one or more of the synthetic images (an intermediate image may be synthesized paragraph
[0025]) and a further 3D eye model (a 3D position of the pupil may be determined from the 3D position
of the corneal center of curvature paragraph [0023]) having at least one parameter (pupil edge
paragraph [0023]);
determining a characteristic (glints paragraph [0026]) of the image of the pupil (originally
obtained images paragraph [0027]) within each of the synthetic images (an intermediate image may be
synthesized paragraph [0025]),
Horesh does not explicitly disclose, generating, using a model for the camera of the known camera intrinsics, synthetic images of several model eyes according to the first 3D eye model, for a plurality of given values of the at least one eye state variable;
wherein the characteristic of the image of the pupil is a measure of the circularity of the pupil area outline,
determining one or more hypothetically optimal values of the at least one parameter of the further 3D eye model that minimize the error between the value(s) of the at least one given eye state variable and the value(s) of the corresponding eye state variable obtained when applying the given algorithm and
establishing a relationship between the one or more hypothetically optimal values of the at least
one parameter of the further 3D eye model and the characteristic of the pupil image
wherein the at least one parameter of at least one of the first 3D eye and the further 3D eye model is selected from the list of:
a distance between a center of an eyeball and a center of a pupil, a size measure of the eyeball, a size measure of an iris, a distance pupil center to cornea center, a distance cornea center to eyeball center, a distance pupil center to limbus center, a distance crystalline lens to eyeball center, a distance crystalline lens to cornea center, a distance crystalline lens to corneal apex, a refractive property of a vitreous humor, a refractive property of the crystalline lens, an ellipsoidal shape measure of an eyeball, an ellipsoidal shape measure of the cornea, and a degree of astigmatism.
However Trail discloses in at least figure 3, generating, using a model for the camera (this image generating function, IM(α, β, P), may be implemented by rendering a mathematical model of the structured light pattern (e.g., a ray model) on the 3D model of the eye and the image output by IM(α, β, P) may be an approximation of the image expected to be captured by the camera 320 for the given values of α, β, and P paragraph [0058]) of the known camera intrinsics (the camera 320 has properties to capture an image of the eye that is approximated by the 3D model of the eye paragraph [0058], described as “camera properties (typically including camera intrinsics) of the camera intended to be used in a corresponding device for producing image data of a subject's eye” in current application paragraph [00168]) synthetic images (the image output by IM(α, β, P) is an approximation from the structured light pattern (e.g., a ray model) paragraph [0058], described as “the synthetic images may be achieved by raytracing an arrangement of a camera model” in current application paragraph [00168]), of several model eyes (the model store 410 may contain two models M1 and M2, one for each eye paragraph [0052]) according to the first 3D eye model (M is a 3D model which approximates the surface geometry of the eye paragraph [0053]), for a plurality of given values (the 3D model may incorporate values α and β to correspond to the angular direction of the foveal axis, the roll angle γ, and the pupil diameter d paragraph [0053]).
Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to use the camera model as taught by Trail in the eye tracking method of Horesh. One would have been motivated to do so because the model M of a user's eye is used to estimate the current orientation of the eye (paragraph [0052]).
Additionally Lin discloses in at least figure 8, determining one or more hypothetically optimal
values (calculating estimated pupil region S301 fig. 8) of the at least one parameter (ellipse parameters
for the pupil region paragraph [0044]) of the further 3D eye model (3D eye model paragraph [0062])
that minimize the error between the value(s) (the processing circuit 110 may be configured to measure
multiple error values and optimize the eye model paragraph [0065]) of the at least one given eye state
variable (gaze vector from pupil region of interest S4 fig. 8) and the value(s) of the corresponding eye
state variable (viewpoint of the eye according to the gaze vector S5 fig. 8) obtained when applying the
given algorithm (calculated by processing circuit S5 fig. 8) and
establishing a relationship (a polynomial equation may be obtained to indicate the relationship between the ellipse parameters (e.g., major axis and the minor axis) and polar coordinates of the corresponding gazing points paragraph [0045]) between the one or more hypothetically optimal values (calculating estimated pupil region S301 fig. 8) of the at least one parameter (ellipse parameters for the pupil region paragraph [0044]) of the further 3D eye model (3D eye model paragraph [0062]) and the characteristic (gazing points paragraph [0045]) of the pupil image (image of the eye S1 fig. 8).
Lin further teaches (paragraphs [0045]-[0046]): "In operation S14, the processing circuit 110 is
configured to obtain the eye model according to the calibration vectors and calibration viewing directions corresponding to the calibration vectors. Specifically, in some embodiments, a polynomial
equation may be obtained to indicate the relationship between the ellipse parameters (e.g., major axis and the minor axis) and polar coordinates of the corresponding gazing points, which represent the
viewing directions of the eye…
Since the calibration of the eye model is performed in operation S1 to meet one or more users'
specific pupil shape, the accuracy of eye tracking is improved."
Therefore it would be obvious for one skilled in the art before the effective filing date of the
claimed invention to incorporate the steps of making a calibrated eye model as taught by Lin which
establishes a relationship between one or more hypothetically optimal parameter values and the gaze
vector into the eye tracking method of Horesh. One would have been motivated to perform this
calibrating step because Lin teaches that the accuracy of the eye tracking is improved by this calibration
process (Lin paragraphs [0045]-[0046]).
Further Wilhelm discloses in at least figure 4a, wherein the characteristic (the outline or contour can be used to determine the calibration of the image paragraph [0047]) of the image of the pupil (image 37 includes pupil 41 fig. 4a) is a measure of the circularity of the pupil area outline (outline or contour 42 of the pupil 41 fig. 4a),
wherein the at least one parameter of at least one of the first 3D eye (3D model of eyeball 31 paragraph [0032] can determine the refractive index of the cornea paragraph [0040]) and the further 3D eye model (calibrated model paragraph [0057]) is selected from the list of:
a distance between a center of an eyeball and a center of a pupil (not required by the claim), a size measure of the eyeball (not required by the claim), a size measure of an iris (features of the 3D models include size and shape of the iris contour paragraph [0008]), a distance pupil center to cornea center (the 3d model includes a distance d fig. 2 between the pupil 33 and a distal point (distal pole) of the cornea surface paragraph [0032]), a distance cornea center to eyeball center (not required by the claim), a distance pupil center to limbus center (not required by the claim), a distance crystalline lens to eyeball center (not required by the claim), a distance crystalline lens to cornea center (not required by the claim), a distance crystalline lens to corneal apex (not required by the claim), a refractive property of a vitreous humor (not required by the claim), a refractive property of the crystalline lens (not required by the claim), an ellipsoidal shape measure of an eyeball (the 3d model includes a radius R fig. 2 of the eyeball 31 paragraph [0033]), an ellipsoidal shape measure of the cornea (the 3d model includes a radius r fig. 2 of an imagined sphere 34 aligned with the cornea 32 paragraph [0034]), and a degree of astigmatism (not required by the claim).
Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to use the features in the 3D models as taught by Wilhelm in the eye tracking method of Horesh. These parameters are used to show the actual detection of an eye pose (paragraph [0041]).
Claims 39 and 42 are rejected under 35 U.S.C. 103 as being unpatentable over Horesh et al. (US 20160249800 A1) in view of Koehler et al. (US 20120141006 A1).
Regarding Claim 39, Horesh discloses in at least figure 1, a method for generating data suitable
for determining (eye tracking method paragraph [0054]) at least one eye state variable (gaze vector 110
fig. 1) of at least one eye of a subject (human eye paragraph [0016]), the eye (human eye paragraph
[0016]) comprising an eyeball (fluid filled portion of the eye 120 fig. 1), an iris (iris 102 fig. 1) defining a
pupil (pupil 100 fig. 1), and a cornea (cornea 103 fig. 1), the at least one eye state variable (gaze vector
110 fig. 1) being derivable from at least one image (a plurality of eye images are obtained from an array
camera paragraph [0054] to determine a gaze vector based on pupil center position paragraph [0063])
of the eye (human eye paragraph [0016]) taken with a camera (array camera 200 fig. 2) of known
camera intrinsics (an array camera 200 includes an image sensor having a plurality of lenses paragraph [0019]), the method comprising:
receiving image data (the multiple images may be obtained substantially simultaneously from
the array camera paragraph [0021]) of the at least one eye (human eye paragraph [0016]) from a
camera (array camera 200 fig. 2) of known camera intrinsics (an array camera 200 includes an image
sensor having a plurality of lenses paragraph [0019]) and defining an image plane (image sensor
paragraph [0019]);
determining a characteristic (glints paragraph [0026]) of the image of the pupil (originally
obtained images paragraph [0027]) within the image data (the multiple images may be obtained
substantially simultaneously from the array camera paragraph [0021]);
providing a 3D eye model (3D position of corneal curvature paragraph [0022]) having at least
one parameter (pupil edge paragraph [0023]), the at least one parameter (pupil edge paragraph [0023])
depending in a pre-determined relationship on (a three dimensional distance from the array camera to
the pupil edge can be estimated based on the glint positions paragraph [0054]) the characteristic (glints
paragraph [0026]);
Horesh does not disclose, using a given algorithm to non-iteratively calculate the at least one eye state variable.
However Koehler discloses in at least figure 4, using a given algorithm to non-iteratively calculate (the FOV reconstruction is performed 206 using a non-iterative reconstruction algorithm paragraph [0025]) the at least one eye state variable (FOV of the simulated eyes 406 paragraph [0025]).
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use a non-iterative algorithm as taught by Koehler in the gaze tracking method of Horesh. An image of the full FOV reconstruction can be represented with an ROI removed (paragraph [0026]).
Regarding claim 42, the combination of Horesh and Koehler discloses all the limitations of claim 39, and Horesh further discloses, wherein the further 3D eye model (a 3D position of the pupil may be determined from the 3D position of the corneal center of curvature paragraph [0023]) has either only one parameter (pupil edge paragraph [0024]; the disparity analysis of the pupil edge can be the only parameter used to calculate the distance paragraph [0024]), or
wherein the further 3D eye model (a 3D position of the pupil may be determined from the 3D position of the corneal center of curvature paragraph [0023]) has multiple parameters (pupil edge and corneal center of curvature paragraph [0023]) and a pre-determined relationship between any of them (a three dimensional distance from the array camera to the pupil edge can be estimated based on the glint positions paragraph [0054]) and the characteristic (glints paragraph [0026]) is used for at least one of the parameters (pupil edge paragraph [0023]).
Claims 43 and 45 are rejected under 35 U.S.C. 103 as being unpatentable over Horesh et al. (US 20160249800 A1) in view of Koehler et al. (US 20120141006 A1) as applied to claim 39 above, and further in view of Wilhelm (US 20200342201 A1).
Regarding claim 43, the combination of Horesh and Koehler discloses all the limitations of claim 39.
Horesh does not disclose, wherein the respective parameter of the 3D eye model is selected from the list of:
a distance between a center of an eyeball and a center of a pupil, a size measure of an eyeball, a size measure of an iris, a distance pupil center to cornea center, a distance cornea center to eyeball center, a distance pupil center to limbus center, a distance crystalline lens to eyeball center, a distance crystalline lens to cornea center, a distance crystalline lens center to corneal apex, a refractive property of the crystalline lens, an ellipsoidal shape measure of an eyeball, an ellipsoidal shape measure of cornea, and a degree of astigmatism.
However, Wilhelm discloses in at least figure 4a, wherein the respective parameter of the 3D eye model (3D model of eyeball 31 paragraph [0032]) is selected from the list of:
a distance between a center of an eyeball and a center of a pupil (not required by the claim), a size measure of the eyeball (not required by the claim), a size measure of an iris (features of the 3D models include size and shape of the iris contour paragraph [0008]), a distance pupil center to cornea center (the 3D model includes a distance d fig. 2 between the pupil 33 and a distal point (distal pole) of the cornea surface paragraph [0032]), a distance cornea center to eyeball center (not required by the claim), a distance pupil center to limbus center (not required by the claim), a distance crystalline lens to eyeball center (not required by the claim), a distance crystalline lens to cornea center (not required by the claim), a distance crystalline lens to corneal apex (not required by the claim), a refractive property of a vitreous humor (not required by the claim), a refractive property of the crystalline lens (not required by the claim), an ellipsoidal shape measure of an eyeball (the 3D model includes a radius R fig. 2 of the eyeball 31 paragraph [0033]), an ellipsoidal shape measure of the cornea (the 3D model includes a radius r fig. 2 of an imagined sphere 34 aligned with the cornea 32 paragraph [0034]), and a degree of astigmatism (not required by the claim).
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use the features in the 3D models as taught by Wilhelm in the eye tracking method of Horesh. These parameters are used to show the actual detection of an eye pose (paragraph [0041]).
Regarding claim 45, the combination of Horesh and Koehler discloses all the limitations of claim 39.
Horesh does not disclose, wherein the eye state variable is selected from the list of a pose of an eye, a 3D circle center line, a 3D eye intersecting line, and a size measure of a pupil of an eye.
However Wilhelm further discloses, wherein the eye state variable (features of the optical projection paragraph [0041]) is selected from the list of a pose of an eye (each feature is based on the known eye pose paragraph [0048]), a 3D circle center line (not required by claim), a 3D eye intersecting line (not required by claim), and a size measure of a pupil of an eye (the features include an outline or contour of the pupil paragraph [0047]).
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use the features in the 3D models as taught by Wilhelm in the eye tracking method of Horesh. These parameters are used to show the actual detection of an eye pose (paragraph [0041]).
Claims 40-41 and 44 are rejected under 35 U.S.C. 103 as being unpatentable over Horesh et al. (US 20160249800 A1) in view of Koehler et al. (US 20120141006 A1) as applied to claim 39 above, and further in view of Lin (US 20180335839 A1).
Regarding claim 40, the combination of Horesh and Koehler discloses all the limitations of claim 39.
Horesh does not disclose, wherein the characteristic of the image of the pupil is a measure of
a ratio of minor to major axis length of an ellipse fit to the pupil image area or outline, a measure of variation of the curvature of the pupil outline, a measure of elongation or a measure of the bounding box of the pupil area.
However, Lin discloses in at least figure 8, wherein the characteristic (gazing points paragraph
[0045]) of the image of the pupil (image of the eye S1 fig. 8) is a measure of a ratio of minor to major axis length of an ellipse fit to the pupil image area or outline (the ellipse parameters include the major and minor axis of the pupil paragraph [0045] and a ratio is used to calculate a viewpoint paragraph [0053]), a measure of variation of the curvature of the pupil outline, a measure of elongation or a measure of the bounding box of the pupil area.
Lin further teaches (paragraphs [0045]-[0046]): "In operation S14, the processing circuit 110 is
configured to obtain the eye model according to the calibration vectors and calibration viewing
directions corresponding to the calibration vectors .... Specifically, in some embodiments, a polynomial
equation may be obtained to indicate the relationship between the ellipse parameters (e.g., major axis
and the minor axis) and polar coordinates of the corresponding gazing points, which represent the
viewing directions of the eye ....
Since the calibration of the eye model is performed in operation S1 to meet one or more users'
specific pupil shape, the accuracy of eye tracking is improved."
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the steps of making a calibrated eye model as taught by Lin, which establishes a relationship between one or more hypothetically optimal parameter values and the gaze vector, into the eye tracking method of Horesh. One would have been motivated to perform this calibrating step because Lin teaches that the accuracy of the eye tracking is improved by this calibration process (Lin paragraphs [0045]-[0046]).
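For illustration only (this sketch is not part of any cited reference's disclosure), the claimed pupil-image characteristic of claim 40, a ratio of minor to major axis length of an ellipse fit to the pupil image, can be expressed in a few lines; the ellipse-fitting step itself is assumed to have been performed elsewhere, and the weak-perspective interpretation of the ratio is an assumption of this sketch, not a teaching of Lin:

```python
import math

def pupil_axis_ratio(major_axis: float, minor_axis: float) -> float:
    """Ratio of minor to major axis length of an ellipse fit to the
    pupil image outline; 1.0 corresponds to a frontal (circular) pupil
    image, smaller values to a pupil viewed increasingly off-axis."""
    if major_axis <= 0 or not (0 <= minor_axis <= major_axis):
        raise ValueError("expected 0 <= minor_axis <= major_axis")
    return minor_axis / major_axis

def tilt_from_ratio(ratio: float) -> float:
    """Under a weak-perspective assumption, the pupil disc's tilt away
    from the viewing axis satisfies cos(tilt) = minor/major."""
    return math.acos(ratio)
```

A frontal pupil (equal axes) yields a ratio of 1.0 and zero tilt; a halved minor axis corresponds, under this assumption, to a 60-degree tilt.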
Regarding claim 41, the combination of Horesh and Koehler discloses all the limitations of claim 39.
Horesh does not disclose, wherein the pre-determined relationship between the at least one
parameter of the 3D eye model and the characteristic of the pupil image is chosen from the list of a constant value, a linear relationship, a polynomial relationship, or another non-linear relationship.
However, Lin discloses in at least figure 8, wherein the pre-determined relationship (a polynomial equation may be obtained to indicate the relationship between the ellipse parameters (e.g., major axis and the minor axis) and polar coordinates of the corresponding gazing points paragraph [0045]) between the at least one parameter of the 3D eye model (ellipse parameters for the pupil region paragraph [0044]) and the characteristic (gazing points paragraph [0045]) of the pupil image (image of the eye S1 fig. 8) is chosen from the list of a constant value (not required by claim), a linear relationship (not required by claim), a polynomial relationship (polynomial equation paragraph [0045]), or another non-linear relationship (not required by claim).
Lin further teaches (paragraphs [0045]-[0046]): "In operation S14, the processing circuit 110 is
configured to obtain the eye model according to the calibration vectors and calibration viewing
directions corresponding to the calibration vectors .... Specifically, in some embodiments, a polynomial
equation may be obtained to indicate the relationship between the ellipse parameters (e.g., major axis
and the minor axis) and polar coordinates of the corresponding gazing points, which represent the
viewing directions of the eye ....
Since the calibration of the eye model is performed in operation S1 to meet one or more users'
specific pupil shape, the accuracy of eye tracking is improved."
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the steps of making a calibrated eye model as taught by Lin, which establishes a relationship between one or more hypothetically optimal parameter values and the gaze vector, into the eye tracking method of Horesh. One would have been motivated to perform this calibrating step because Lin teaches that the accuracy of the eye tracking is improved by this calibration process (Lin paragraphs [0045]-[0046]).
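The polynomial relationship Lin describes between the ellipse parameters and the gazing-point coordinates can be illustrated with a schematic calibration fit; the data values, the polynomial degree, and the use of a single axis ratio as the input characteristic are all invented here for illustration and are not taken from Lin:

```python
import numpy as np

# Hypothetical calibration data: minor/major axis ratio of the fitted
# pupil ellipse at each calibration point, with the known gaze angle
# (radians) for that point. Synthetic values for illustration only.
ratios = np.array([1.00, 0.98, 0.94, 0.87, 0.77, 0.64])
gaze_angles = np.arccos(ratios)

# Fit a polynomial mapping the measured characteristic to the gaze
# angle, standing in for the calibrated eye model of Lin [0045].
model = np.poly1d(np.polyfit(ratios, gaze_angles, deg=3))

# After calibration, the model predicts gaze for a new measurement.
predicted_angle = model(0.90)
```

The design point being illustrated is simply that a low-degree polynomial, once fit to per-user calibration pairs, replaces any fixed analytic relationship and thereby accommodates user-specific pupil shape.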
Regarding claim 44, the combination of Horesh and Koehler discloses all the limitations of claim 39.
Horesh does not explicitly disclose, wherein said relationship is the same for all eye state variables, or wherein a different pre-determined relationship between a parameter of the 3D eye model and the characteristic of the pupil image is used for each eye state variable or for groups of eye state variables.
However Lin discloses in at least figure 8, wherein said relationship (a polynomial equation may
be obtained to indicate the relationship between the ellipse parameters (e.g., major axis and the minor
axis) and polar coordinates of the corresponding gazing points paragraph [0045]) is the same (the cost
function shows the relationship between the estimated pupil region and the pupil region of interest to
optimize the model which has a matrix relationship between the viewpoint of the eye and the gaze
vector of the pupil region of interest paragraphs [0066-0068]) for all eye state variables (gaze vector
from pupil region of interest S4 fig. 8), or wherein a different relationship between a parameter of the
further 3D eye model and the characteristic of the pupil image is established for each eye state variable
or for groups of eye state variables.
Lin further teaches (paragraphs [0045]-[0046]): "In operation S14, the processing circuit 110 is
configured to obtain the eye model according to the calibration vectors and calibration viewing
directions corresponding to the calibration vectors .... Specifically, in some embodiments, a polynomial
equation may be obtained to indicate the relationship between the ellipse parameters (e.g., major axis
and the minor axis) and polar coordinates of the corresponding gazing points, which represent the
viewing directions of the eye ....
Since the calibration of the eye model is performed in operation S1 to meet one or more users'
specific pupil shape, the accuracy of eye tracking is improved."
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the steps of making a calibrated eye model as taught by Lin, which establishes a relationship between one or more hypothetically optimal parameter values and the gaze vector, into the eye tracking method of Horesh. One would have been motivated to perform this calibrating step because Lin teaches that the accuracy of the eye tracking is improved by this calibration process (Lin paragraphs [0045]-[0046]).
Claims 47-48 are rejected under 35 U.S.C. 103 as being unpatentable over Horesh et al. (US 20160249800 A1) in view of Trail et al. (US 20180246590 A1), Lin (US 20180335839 A1) and Wilhelm (US 20200342201 A1) as applied to claim 31 above, and further in view of Miller et al. (US 20190243448 A1).
Regarding claim 47, the combination of Horesh, Trail, Lin and Wilhelm discloses all the limitations of claim 31.
Horesh does not explicitly disclose, the given algorithm including:
determining a first ellipse in the image data, the first ellipse at least substantially representing a
border of the pupil of the at least one eye at a first time;
using the camera intrinsics and the first ellipse to determine a 3D orientation vector of a first
circle in 3D and a first center line on which a center of the first circle is located in 3D, so that a projection
of the first circle, in a direction parallel to the first center line, onto the image plane is expected to
reproduce the first ellipse; and
determining a first eye intersecting line in 3D expected to intersect a 3D center of the eyeball at
the corresponding time as a line which is, in the direction of the orientation vector, parallel-shifted to
the first center line by an expected distance between the center of the eyeball and a center of the pupil.
However Miller discloses in at least fig. 21 A, the given algorithm (an eye image can be obtained from a video using any appropriate process, for example, using a video processing algorithm that can extract an image from one or more sequential frames paragraph [0559]) including:
determining a first ellipse (first projection ellipse 2100 fig. 21A) in the image data (first ellipse projected onto the image of the user's limbus paragraph [0498]), the first ellipse (first projection ellipse 2100 fig. 21A) at least substantially representing a border of the pupil (a boundary of the eye's limbus may be determined paragraph [0735]; the display system may then determine the eye's CoR based on the pupil in similar ways as described above for determining the CoR based on the limbus paragraph [0747]) of the at least one eye (an image of the user's eye may be obtained paragraph [0735]) at a first time (the first ellipse is based on the first gaze of the user paragraph [0735]);
using the camera intrinsics (rays forming a cone 2104 are illustrated as extending from a camera
point 2102 paragraph [0735]) and the first ellipse (first projection ellipse 2100 fig. 21 A) to determine a
3D orientation vector (vector 2110 fig. 21A) of a first circle in 3D (circular cross-section 2106 fig. 21A)
and a first center line (first center line as shown below in fig. 21A) on which a center (center 2108 fig.
21A) of the first circle is located in 3D (circular cross-section 2106 fig. 21A), so that a projection of the
first circle (circular cross-section 2106 fig. 21A), in a direction parallel (the projection of the circle is
parallel to the first center line fig. 21A) to the first center line (first center line as shown below in fig.
21A), onto the image plane (image plane 2112 fig. 21A) is expected to reproduce (the rays forming circular cross section 2106 travel through points on the first ellipse paragraph [0735]) the first ellipse (first projection ellipse 2100 fig. 21A); and
determining a first eye intersecting line in 3D (line C as shown below in fig. 21) expected to intersect (the line C intersects point C as shown below in fig. 21A) a 3D center of the eyeball (the center
of the eyeball at point C as shown below in fig. 21A) at the corresponding time (the line C is at the same
time as the first center line as shown below in fig. 21A) as a line which is (the line C is parallel to the
vector 2110 as shown below in fig. 21A), in the direction of the orientation vector (vector 2110 fig. 21A), parallel-shifted (the line C can be parallel shifted to the first center line by the distance D as shown below in fig. 21A) to the first center line (first center line as shown below in fig. 21A) by an expected distance between (D is the distance between point C and center 2108 as shown below in fig. 21A) the
center of the eyeball (point C as shown below in fig. 21A) and a center of the pupil (center 2108 fig.
21A).
Miller further teaches (paragraph [0732]): "Each of the images the display system may
determine a vector providing a location of a center of a selected circular cross-section ... , to identify the
varying optical axes of the eye according to eye poses represented in the images."
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the steps of determining a vector based on a circular cross section as taught by Miller into the eye tracking method of Horesh. One would have been motivated to perform this vector determining step because Miller teaches that the display system may identify optical axes of the eye as represented in the images. The display system may utilize the optical axes to determine the CoR (Miller paragraph [0740]).
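The parallel-shift construction recited in claim 47 can be sketched under simplifying assumptions: a pinhole camera at the origin, weak perspective (so the circle's tilt follows from the ellipse axis ratio), and an arbitrarily chosen tilt plane, since the ratio alone leaves the tilt direction ambiguous. This is an illustration of the claimed geometry, not Miller's implementation:

```python
import numpy as np

def eye_intersecting_line(ellipse_center_ray, major, minor, d):
    """Sketch of the claimed construction: recover a 3D orientation
    vector of the pupil circle from the ellipse axis ratio, then
    parallel-shift the center line by the expected pupil-to-eyeball
    distance d along that vector. The camera sits at the origin.

    Returns (point_on_line, direction) for the eye intersecting line,
    which is parallel to the first center line.
    """
    ray = np.asarray(ellipse_center_ray, dtype=float)
    ray /= np.linalg.norm(ray)             # first center line direction

    tilt = np.arccos(minor / major)        # circle tilt from axis ratio
    # Arbitrary tilt-plane choice (degenerate if ray is vertical).
    side = np.cross(ray, [0.0, 1.0, 0.0])
    side /= np.linalg.norm(side)
    normal = np.cos(tilt) * ray + np.sin(tilt) * side  # orientation vector

    # Eye intersecting line: the center line shifted by d along the normal.
    return d * normal, ray
```

For a frontal pupil (equal axes) the orientation vector coincides with the viewing ray, so the intersecting line is the center line pushed straight back by d.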
Regarding claim 48, the combination of Horesh, Trail, Lin, Wilhelm and Miller discloses all the limitations of claim 47.
Horesh does not disclose, further comprising at least one of:
receiving image data of a further eye of the subject at a time, substantially corresponding to the
first times, from a camera of known camera intrinsics and defining an image plane, the further eye
comprising a further eyeball, a further iris defining a further pupil, and a further cornea, the given
algorithm further including:
determining a further ellipse in the image data, the further ellipse at least substantially
representing the border of the further pupil of the further eye at the corresponding time;
using the camera intrinsics and the further ellipse to determine a 3D orientation vector of a
further circle in 3D and a further center line on which a center of the further circle is located in 3D, so
that a projection of the further circle, in a direction parallel to the further center line, onto the image
plane is expected to reproduce the further ellipse;
determine a further eye intersecting line in 3D expected to intersect a 3D center of the further
eyeball at the corresponding time as a line which is, in the direction of the 3D orientation vector of the further circle, parallel-shifted to the further center line by an expected distance between the center of the further eyeball and a center of the further pupil;
receiving second image data of the at least one eye at a second time from the camera;
the given algorithm further including:
determining a second ellipse in the second image data, the second ellipse at least substantially
representing the border of the pupil at the second time;
using the camera intrinsics and the second ellipse to determine an orientation vector of a
second circle and a second center line on which a center of the second circle is located, so that a
projection of the second circle, in a direction parallel to the second center line, onto the image plane is
expected to reproduce the second ellipse; and
determine a second eye intersecting line expected to intersect the center of the eyeball at the
second time as a line which is, in the direction of the orientation vector of the second circle, parallel shifted to the second center line by the expected distance.
Miller further discloses, further comprising at least one of:
receiving image data (an eye image can be obtained from a video using any appropriate process,
for example, using a video processing algorithm that can extract an image from one or more sequential
frames paragraph [0559]) of a further eye of the subject (the display system of any of the Examples
above, wherein said one or more eye tracking camera is configured to image said pupil of said eye
paragraph [0019]) at a time, substantially corresponding to the first times (first gaze direction paragraph [0735]), from a camera of known camera intrinsics and defining an image plane (image plane 2112 fig. 21A), the further eye (the display system of any of the Examples above, wherein said one or more eye tracking camera is configured to image said pupil of said eye paragraph [0019]) comprising a further eyeball (eyeball taught above by Horesh), a further iris (iris taught above by Horesh) defining a further pupil (pupil taught above by Horesh), and a further cornea (cornea taught above by Horesh), the given algorithm (an eye image can be obtained from a video using any appropriate process, for example, using a video processing algorithm that can extract an image from one or more sequential frames paragraph [0559]) further including:
determining a further ellipse (further ellipse shown as first projection ellipse 2100 fig. 21A) in the image data (first ellipse projected onto the image of the user's limbus paragraph [0498]), the further ellipse (further ellipse shown as first projection ellipse 2100 fig. 21A) at least substantially representing
a border of the further pupil (a further pupil is shown by a boundary of the eye's limbus may be
determined paragraph [0735] The display system may then determine the eye's CoR based on the pupil
in similar ways as described above for determining the CoR based on the limbus paragraph [0747]) of
the further eye (further eye shown by an image of the user's eye may be obtained paragraph [0735])
at a corresponding time (the further ellipse shown as the first ellipse is based on the first gaze of the
user paragraph [0735]);
using the camera intrinsics (rays forming a cone 2104 are illustrated as extending from a camera
point 2102 paragraph [0735]) and the further ellipse (further ellipse shown as the first projection ellipse
2100 fig. 21 A) to determine a 3D orientation vector (vector 2110 fig. 21A) of a further circle in 3D
(further circle shown as circular cross-section 2106 fig. 21A) and a further center line (further center line
shown as first center line as shown below in fig. 21A) on which a center (center 2108 fig. 21A) of the
further circle is located in 3D (further circle shown as circular cross-section 2106 fig. 21A), so that a
projection of the further circle (further circle shown as circular cross-section 2106 fig. 21A), in a direction
parallel (the projection of the circle is parallel to the first center line fig. 21A) to the further center line
(further center line shown as first center line as shown below in fig. 21A), onto the image plane (image
plane 2112 fig. 21A) is expected to reproduce (the rays forming circular cross section 2106 travel through
points on the first ellipse paragraph [0735]) the further ellipse (further ellipse shown as first projection
ellipse 2100 fig. 21 A); and
determine a further eye intersecting line in 3D (further eye intersecting line shown as first
intersecting line C as shown below in fig. 21) expected to intersect (the line C intersects point C as
shown below in fig. 21A) a 3D center of the further eyeball (the center of the further eyeball is shown as
the center of the eyeball at point C as shown below in fig. 21A) at the corresponding time (the line C is at the same time as the first center line as shown below in fig. 21A) as a line which is (the line C is parallel to the vector 2110 as shown below in fig. 21A), in the direction of the orientation vector (vector 2110 fig. 21A), parallel-shifted (the line C can be parallel shifted to the first center line by the distance D as shown below in fig. 21A) to the further center line (further center line shown as first center line as
shown below in fig. 21A) by an expected distance between (D is the distance between point C and
center 2108 as shown below in fig. 21A) the center of the further eyeball (the center of the further
eyeball is shown as the center of the first eyeball point C as shown below in fig. 21A) and a center of the
further pupil (the center of the further pupil is shown as the center of the first pupil center 2108 fig.
21A);
receiving second image data (an image subsequent to the image described in FIG. 21A is obtained paragraph [0737]) of the at least one eye (user eye paragraph [0737]) at a second time (second gaze paragraph [0737]) from the camera (camera point 2102 fig. 21 A);
the given algorithm further (an eye image can be obtained from a video using any appropriate process, for example, using a video processing algorithm that can extract an image from one or more
sequential frames paragraph [0559]) including:
determining a second ellipse (second projection ellipse 2114 fig. 21 B) in the second image data (an image subsequent to the image described in FIG. 21A is obtained paragraph [0737]), the second ellipse (second projection ellipse 2114 fig. 21 B) at least substantially representing the border of the
pupil (a boundary of the eye's limbus may be determined paragraph [0735] The display system may then
determine the eye's CoR based on the pupil in similar ways as described above for determining the CoR
based on the limbus paragraph [0747]) at the second time (second gaze paragraph [0737]);
using the camera intrinsics (rays forming a cone 2104 are illustrated as extending from a camera
point 2102 paragraph [0735]) and the second ellipse (second projection ellipse 2114 fig. 21 B) to
determine an orientation vector (vector 2116 fig. 21B) of a second circle (second circle as shown below
in fig. 21 B) and a second center line (second center line as shown below in fig. 21 B) on which a center
of the second circle is located (second center as shown below in fig. 21 B), so that a projection of the
second circle (second circle as shown below in fig. 21B), in a direction parallel to the second center line
(the projection of the second circle is parallel to the second center line as shown below in fig. 21 B), onto
the image plane (image plane 2112 fig. 21B) is expected to reproduce (the second circle is a projection
of points from the second ellipse fig. 21 B) the second ellipse (second projection ellipse 2114 fig. 21 B); and
determine a second eye intersecting line (vector 2116 is the second eye intersecting line fig. 21 B) expected to intersect the center of the eyeball (the center of the eyeball and the pupil are in the same location fig. 21 B) at the second time (second gaze paragraph [0737]) as a line which is (the line is
the vector 2116 fig. 21 B), in the direction of the orientation vector (vector 2116 fig. 21 B) of the second
circle (second circle as shown below in fig. 21B), parallel-shifted to the second center line (the vector 2116 already intersects the second center line fig. 21 B) by the expected distance (the expected distance is zero because it is the same line fig. 21 B).
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the steps of determining a vector based on a circular cross section as taught by Miller into the eye tracking method of Horesh. One would have been motivated to perform this vector determining step because Miller teaches that the display system may identify optical axes of the eye as represented in the images. The display system may utilize the optical axes to determine the CoR (Miller paragraph [0740]).
Claim 49 is rejected under 35 U.S.C. 103 as being unpatentable over Horesh et al. (US 20160249800 A1) in view of Trail et al. (US 20180246590 A1), Lin (US 20180335839 A1), Wilhelm (US 20200342201 A1) and Miller et al. (US 20190243448 A1) as applied to claim 48 above and in further view of Li (CN 110750157 A).
Regarding claim 49, the combination of Horesh, Trail, Lin, Wilhelm and Miller discloses all the limitations of claim 48.
Horesh does not explicitly disclose wherein the given algorithm further includes using the first
eye intersecting line and the second eye intersecting line, respectively the first eye intersecting line and
the further eye intersecting line to determine one or more of co-ordinates of the center of the eyeball of the at least one eye respectively of the at least one eye and the further eye, a gaze direction, an optical axis, an orientation, a visual axis, a size of the pupil and/or a radius of the pupil of the at least one eye and/or of the further eye, wherein the expected distance between the center of the eyeball and the center of the pupil is a parameter of the 3D eye model respectively of the further 3D eye model, depending in the pre-determined relationship on the characteristic of the image of the pupil of the corresponding eye, wherein the respective center line and/or the respective eye intersecting line
is determined using a model of the camera and/or the 3D eye model respectively the further 3D eye
model, wherein the camera is modeled as a pinhole camera, and/or wherein the model of the camera
comprises at least one of a focal length, a shift of a central image pixel, a shear parameter, and a
distortion parameter.
However Miller further discloses, wherein the given algorithm (an eye image can be obtained
from a video using any appropriate process, for example, using a video processing algorithm that can
extract an image from one or more sequential frames paragraph [0559]) further includes using the first eye intersecting line (line C as shown below in fig. 21) and the second eye intersecting line (vector 2116
fig. 21B), respectively the first eye intersecting line (line C as shown below in fig. 21) and the further eye
intersecting line (further eye intersecting line shown as first eye intersecting line C as shown below
in fig. 21) to determine one or more of coordinates of the center of the eyeball of the at least one eye
respectively of the at least one eye and the further eye (the CoR is determined from the vectors
2110, 2116 and 2120 paragraph [0739]), a gaze direction (the vectors represent gaze direction
paragraphs [0736] - [0738], an optical axis (the display system may identify optical axes of the eye
paragraph [0740]), an orientation, a visual axis, a size of the pupil and/or a radius of the pupil of the at
least one eye and/or of the further eye,
wherein the camera is modeled as a pinhole camera (pinhole camera model paragraph [0748]), and/or wherein the model of the camera comprises at least one of a focal length, a shift of a central image pixel, a shear parameter, and a distortion parameter,
wherein the respective center line (center line as shown below in fig. 21 A) and/or the
respective eye intersecting line is determined using a model of the camera (the center line is determined
by the center point 2108 from the image obtained from camera point 2102 as shown below in fig. 21A)
and/or the 3D eye model respectively the further 3D eye model.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the step of determining a vector based on a circular cross section, as taught by Miller, into the eye tracking method of Horesh. One would have been motivated to perform this vector determining step because Miller teaches that the display system may identify optical axes of the eye as represented in the images, and may utilize the optical axes to determine the CoR (Miller paragraph [0740]).
Additionally, Lin discloses in at least figure 3, wherein the expected distance between (k is the
distance between eyeball center C and pupil center P fig. 3) the center of the eyeball (eyeball center C
fig. 3) and the center of the pupil (pupil center P fig. 3) is a parameter of the 3D eye model (3D eye
model paragraph [0063] of translation) respectively of the further 3D eye model (further 3D model taught above by Horesh), depending in the pre-determined relationship (formula for the coordinates of
the iris center paragraphs [0100] - [0103]) on the characteristic (iris center paragraph [0103] of
translation) of the image of the pupil of the corresponding eye (the left eye pupil in the image taken by
the left camera and the image taken by the right camera paragraph [0104] of translation).
Lin further teaches (paragraph [0128] of translation): "This method can determine the direction
of the line of sight and the coordinates of the line of sight and the screen gaze point based on the
human eyeball and eye features, and obtain the content that the user wants to input ... improving the
convenience of user use and the friendliness of human-computer interaction."
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the eye tracking for input steps, as taught by Lin, into the eye tracking method of Horesh. One would have been motivated to perform this step because Lin teaches that eye tracking for input improves convenience for the user (Lin paragraph [0128] of translation).
Claim 51 is rejected under 35 U.S.C. 103 as being unpatentable over Horesh et al. (US 20160249800 A1) in view of Trail et al. (US 20180246590 A1), Lin (US 20180335839 A1) and Wilhelm (US 20200342201 A1) as applied to claim 33 above, and further in view of Jha et al. (US 20190290118 A1).
Regarding claim 51, the combination of Horesh, Trail, Lin and Wilhelm discloses all the limitations of claim 33. Horesh does not disclose wherein the constant value is smaller or larger than the corresponding average parameter of the first 3D eye model, or wherein the relationship is derived via a regression fit.
However, Jha discloses in at least figure 2, wherein the constant value is smaller or larger than the corresponding average parameter of the first 3D eye model (not relied upon, as the limitations are recited in the alternative) or the relationship (pupil size paragraph [0071]) is derived via a regression fit (the pupil size estimation unit 206 uses a pupil size estimation model/hierarchical regression model to estimate the variation in the pupil size of the user paragraph [0071]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the regression model for pupil size, as taught by Jha, in the gaze tracking method of Horesh. One would have been motivated to do so because Jha teaches that the regression model provides a way to estimate the variation in pupil size (Jha paragraph [0071]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sun et al. (US 20210223859 A1) discloses a device for determining gaze placement with a regression model.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW R WRIGHT whose telephone number is (703)756-5822. The examiner can normally be reached Mon-Thurs 7:30-5:00 and Friday 8:00-12:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pinping Sun can be reached at 1-571-270-1284. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW R WRIGHT/Examiner, Art Unit 2872
/PINPING SUN/Supervisory Patent Examiner, Art Unit 2872