Prosecution Insights
Last updated: April 19, 2026
Application No. 18/294,787

Authentication by Habitual Eye Tracking Data

Final Rejection under §§ 102 and 103

Filed: Feb 02, 2024
Examiner: JONES, ANDREW B
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Hewlett-Packard Development Company, L.P.
OA Round: 2 (Final)

Predictions:
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 2m
Grant Probability With Interview: 90%
Examiner Intelligence

Career Allow Rate: 72%, above average (53 granted / 74 resolved; +9.6% vs Tech Center avg)
Interview Lift: strong, +18.9% (allowance rate for resolved cases with an interview vs without)
Typical Timeline: 3y 2m average prosecution; 25 applications currently pending
Career History: 99 total applications across all art units

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§103: 49.3% (+9.3% vs TC avg)
§112: 17.6% (-22.4% vs TC avg)

TC comparisons are Tech Center average estimates; based on career data from 74 resolved cases.
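The headline figures above follow from simple arithmetic on the stated counts. A short Python sketch reproducing them (the function names are ours, for illustration only, not part of any analytics API):

```python
# Reproduce the dashboard's career metrics from the counts shown
# above (53 granted of 74 resolved). Function names are illustrative.

def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gain in allowance when an interview is held."""
    return rate_with - rate_without

print(f"Career allow rate: {allowance_rate(53, 74):.0f}%")  # ≈ 72%
```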

Office Action

Rejections under §§ 102 and 103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed 18 February, 2026 has been entered. The amendment of claims 1, 9, 10, and 12 – 15 has been acknowledged. The cancellation of claim 6 has been acknowledged. The addition of new claims 16 – 21 has been acknowledged.

Response to Arguments

Applicant’s arguments, see page 6, section “Claim Rejections – 35 U.S.C. § 102”, filed 18 February, 2026 with respect to the rejection of claims 1 – 4, 6, 7, and 10 – 12 have been fully considered, but they are not persuasive.

Applicant states on page 7 that Geiss et al (U.S. Patent Publication No. 2015/0084864 A1, hereinafter “Geiss”) fails to teach the newly amended claim limitation of “wherein sensing the habitual eye tracking data comprises detecting a variation of a pupil size of the user in response to the display of a plurality of images in the different areas on the display device of the HMD”. The examiner respectfully disagrees.

Geiss teaches in ¶ 0024: “The eye-tracking system 102 may include hardware such as an infrared camera 116 and at least one infrared light source 118. The infrared camera 116 may be utilized by the eye-tracking system 102 to capture images of an eye of the wearer. The images may include either video images or still images or both. The images obtained by the infrared camera 116 regarding the eye of the wearer may help determine where the wearer may be looking within a field of view of the HMD included in the system 100, for instance, by ascertaining a location of the eye pupil of the wearer.” Additionally, in ¶ 0036 Geiss states: “In one example, an infrared light source or sources integrated into the eye tracking system 230 may illuminate the eye 214 of the wearer, and a reflected infrared light may be collected with an infrared camera to track eye or eye-pupil movement.”
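The cited passages describe capturing infrared images of the eye; the examiner's position is that pupil size can be recovered from such frames. As a rough, hypothetical illustration (not Geiss's disclosed implementation), counting dark pixels in a grayscale IR frame gives a crude pupil-area proxy, since the pupil returns little infrared light to the camera:

```python
# Minimal sketch (not Geiss's implementation): estimate pupil size
# from an infrared eye image by counting dark pixels. In "dark pupil"
# IR imaging the pupil reflects little light back to the camera, so
# its area appears as the darkest region of the frame. The threshold
# value is an illustrative assumption.

def pupil_area(frame, threshold=40):
    """Count pixels darker than `threshold` in a grayscale frame
    (a list of rows of 0-255 intensities); a proxy for pupil area."""
    return sum(1 for row in frame for px in row if px < threshold)

# Two toy 3x3 frames: the second has a dilated (larger) pupil.
small = [[200, 200, 200], [200, 10, 200], [200, 200, 200]]
large = [[200, 10, 200], [10, 10, 10], [200, 10, 200]]
assert pupil_area(large) > pupil_area(small)
```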
The reflected light from the eye would be dependent on the size of the pupil, as the light would not reflect from the area of the eye which comprises the pupil. The larger the pupil is, the less light would reflect back to the infrared camera during the detection steps. Additionally, as the pupil moves within the recording area of the infrared camera, the shape and size of the pupil would change with respect to the static camera. This change in the size of the pupil as it travels between various locations within the area of the eye would correspond to the location of the pupil within the area of the eye (see ¶ 0031: “Calibrated wearer eye pupil positions may include, for instance, information regarding extents or range of an eye pupil movement (right/left and upwards/downwards), and relative position of eyes of the wearer with respect to the HMD.”).

In light of both reasons listed above, the examiner believes under the broadest reasonable interpretation that Geiss does teach the newly amended limitation of “wherein sensing the habitual eye tracking data comprises detecting a variation of a pupil size of the user in response to the display of the plurality of images in the different areas on the display device of the HMD”.

Claim Rejections - 35 USC § 102

The following is a quotation of 35 U.S.C. 102(a)(2) that forms the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 – 4, 7, 10, 11, and 16 – 18 are rejected under 35 U.S.C.
102(a)(2) as being anticipated by Geiss et al (U.S. Patent Publication No. 2015/0084864 A1, hereinafter “Geiss”).

Regarding claim 1, Geiss teaches a method of authorizing a user of a head mountable device (HMD) (¶ 0017: A wearable computing system may include a head mounted display (HMD).), comprising:

maintaining a database indicating habitual eye tracking data for the user (¶ 0031: In addition to instructions that may be executed by the processor 112, the memory 114 may store data that may include a set of calibrated wearer eye pupil positions and a collection of past eye pupil positions. Thus, the memory 114 may function as a database of information related to gaze direction and location.);

displaying a plurality of images in different areas on a display device of the HMD (¶ 0017: To authenticate the user, the wearable computing system may generate a display of a random content on the HMD.; ¶ 0018: The content personalized to the user may include names and pictures associated with the user such as names and pictures of the user or people or objects related to the user (e.g., wife, children, etc.).);

sensing habitual eye tracking data for the user while the user sequentially views a set of images of the plurality of images (¶ 0018: The wearable computing system may determine a responsiveness metric that includes a time period elapsed between generating the display of the random content and determining that the gaze location of the eye of the user substantially matches the predetermined location on the HMD of the content personalized to the user.), wherein sensing the habitual eye tracking data comprises detecting a variation of a pupil size of the user in response to the display of the plurality of images in the different areas on the display device of the HMD (¶ 0031: Calibrated wearer eye pupil positions may include, for instance, information regarding extents or range of an eye pupil movement (right/left and upwards/downwards), and relative position of eyes of
the wearer with respect to the HMD.; ¶ 0036: In one example, an infrared light source or sources integrated into the eye tracking system 230 may illuminate the eye 214 of the wearer, and a reflected infrared light may be collected with an infrared camera to track eye or eye-pupil movement.); and

authenticating the user by comparing the sensed habitual eye tracking data with the indicated habitual eye tracking data for the user (¶ 0018: The responsiveness metric may be determined to be less than a predetermined threshold indicating that the user identified the content personalized to the user within a predetermined time period. Identifying the content personalized to the user within the predetermined time period that may indicate familiarity with the content personalized to the user and the user may be authenticated.).

Regarding claim 2, Geiss teaches the method of claim 1. Additionally, Geiss teaches wherein sensing the habitual eye tracking data comprises detecting a pattern scanning sequence of the user in response to the display of the plurality of images in the different areas on the display device of the HMD (¶ 0056: The processor may also receive information associated with temporal characteristics of eye movement of the user between gaze locations of the sequence of gaze locations. The temporal characteristics may include time periods elapsed between the gaze locations. The processor may determine that the sequence of gaze locations and temporal characteristics of the eye movement between the gaze locations substantially match a predetermined spatial-temporal sequence of locations associated with the content personalized to the user on the HMD, and authenticate the user.).

Regarding claim 3, Geiss teaches the method of claim 1.
Additionally, Geiss teaches wherein sensing the habitual eye tracking data comprises detecting at least one of a speed, velocity, acceleration, and momentum of sight movement of the user in response to the display of the plurality of images in the different areas on the display device of the HMD (¶ 0019: The processor may generate the display of the plurality of moving objects such that speeds associated with motion of the moving objects on the HMD may be less than a predetermined threshold speed. Onset of rapid eye pupil movements may occur if a speed of a moving object tracked by the eye of the wearer is equal to or greater than the predetermined threshold speed. Alternatively, the speed associated with the moving object may be independent of correlation to eye blinks or rapid eye movements.; ¶ 0020: The speed associated with the motion of the moving object may change, i.e., the moving object may accelerate or decelerate. The processor may track the eye movement of the eye of the wearer to detect if the eye movement may indicate that the eye movement may be correlated with changes in the speed associated with the motion of the moving object and may authenticate the user accordingly.).

Regarding claim 4, Geiss teaches the method of claim 1. Additionally, Geiss teaches wherein sensing the habitual eye tracking data comprises detecting a duration of a pause of the user in response to the display of the plurality of images in the different areas on the display device of the HMD (¶ 0056: The processor may also receive information associated with temporal characteristics of eye movement of the user between gaze locations of the sequence of gaze locations. The temporal characteristics may include time periods elapsed between the gaze locations.
The processor may determine that the sequence of gaze locations and temporal characteristics of the eye movement between the gaze locations substantially match a predetermined spatial-temporal sequence of locations associated with the content personalized to the user on the HMD, and authenticate the user.).

Regarding claim 7, Geiss teaches the method of claim 1. Additionally, Geiss teaches wherein a different plurality of images is displayed in different areas on the display device of the HMD each time the user is authorized for the HMD (¶ 0044: The grid of random names or random pictures may include different pictures or names every time the wearable computing system may authenticate the user.; ¶ 0056: For example, the random content may be a grid of nine pictures; three of the nine pictures may be associated with the user. The user may gaze at the three pictures associated with the user in a given sequence.).

Regarding claim 10, Geiss teaches a computing system, comprising:

a display device (¶ 0023: Referring now to the figures, FIG. 1 is a block diagram of an example wearable computing and head-mounted display (HMD) system 100 that may include several different components and subsystems. Components coupled to or included in the system 100 may include an eye-tracking system 102, a HMD-tracking system 104, an optical system 106, peripherals 108, a power supply 110, a processor 112, a memory 114, and a user interface 115.);

a gaze tracking device (¶ 0023: Referring now to the figures, FIG. 1 is a block diagram of an example wearable computing and head-mounted display (HMD) system 100 that may include several different components and subsystems.
Components coupled to or included in the system 100 may include an eye-tracking system 102, a HMD-tracking system 104, an optical system 106, peripherals 108, a power supply 110, a processor 112, a memory 114, and a user interface 115.); and

a processor operatively coupled with a computer readable storage medium storing instructions (¶ 0030: The processor 112 may execute instructions stored in a non-transitory computer readable medium, such as the memory 114, to control functions of the system 100.) that, when read and executed by the processor, direct the processor to:

display, by the display device, a pattern of images to a user (¶ 0017: To authenticate the user, the wearable computing system may generate a display of a random content on the HMD.; ¶ 0018: The content personalized to the user may include names and pictures associated with the user such as names and pictures of the user or people or objects related to the user (e.g., wife, children, etc.).);

capture, by the gaze tracking device, involuntary eye movements of the user viewing the pattern of images on the display device (¶ 0024: The eye tracking system 230 may, for example, track movements of an eye pupil 404 and a gaze axis 406 associated with the eye 214 and eye pupil 404. As the eye 214 or eye pupil 404 moves, the eye tracking system 230 may track a gaze location 408 on the HMD associated with the gaze axis 406.; ¶ 0056: The processor may also receive information associated with temporal characteristics of eye movement of the user between gaze locations of the sequence of gaze locations.
The temporal characteristics may include time periods elapsed between the gaze locations.), wherein sensing the habitual eye tracking data comprises detecting a variation of a pupil size of the user in response to the display of the plurality of images in the different areas on the display device of the HMD (¶ 0031: Calibrated wearer eye pupil positions may include, for instance, information regarding extents or range of an eye pupil movement (right/left and upwards/downwards), and relative position of eyes of the wearer with respect to the HMD.; ¶ 0036: In one example, an infrared light source or sources integrated into the eye tracking system 230 may illuminate the eye 214 of the wearer, and a reflected infrared light may be collected with an infrared camera to track eye or eye-pupil movement.); and

authenticate the user based on the involuntary eye movements of the user matching a stored user preference information (¶ 0018: The responsiveness metric may be determined to be less than a predetermined threshold indicating that the user identified the content personalized to the user within a predetermined time period. Identifying the content personalized to the user within the predetermined time period that may indicate familiarity with the content personalized to the user and the user may be authenticated.; ¶ 0056: The processor may determine that the sequence of gaze locations and temporal characteristics of the eye movement between the gaze locations substantially match a predetermined spatial-temporal sequence of locations associated with the content personalized to the user on the HMD, and authenticate the user.).

Regarding claim 11, Geiss teaches the system of claim 10.
Additionally, Geiss teaches wherein the pattern of images includes images relating to different sceneries, colors, topics, or sizes that are of interest to the user (¶ 0044: One of the pictures in the grid may be associated with the user such as a picture of the user as a child, a picture of a wife, child, relative, or a friend of the user, a picture of a school where the user may have studied, a picture of an intersection close to where the user may have lived, or a picture of logos from institutions associated with the user (university logos, corporate logos, etc.).).

Regarding claim 16, claim 16 has been analyzed with regard to respective claim 11 and is rejected for the same reasons as used above.

Regarding claim 17, claim 17 has been analyzed with regard to respective claim 2 and is rejected for the same reasons as used above.

Regarding claim 18, claim 18 has been analyzed with regard to respective claim 2 and is rejected for the same reasons as used above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Geiss et al (U.S. Patent Publication No.
2015/0084864 A1, hereinafter “Geiss”) in view of foreign patent publication WO 2018/115543 A1 to Espinosa et al (relying on the provided machine translation, hereinafter “Espinosa”).

Regarding claim 5, Geiss teaches the method of claim 1. Geiss does not explicitly teach wherein sensing the habitual eye tracking data comprises detecting a blink count on each image of the user in response to the display of the plurality of images in the different areas on the display device of the HMD.

However, Espinosa does teach wherein sensing the habitual eye tracking data comprises detecting a blink count on each image of the user in response to the display of the plurality of images in the different areas on the display device of the HMD (¶ 29: The energy contained in each region of interest of each frame is obtained by adding the value of gray levels of each pixel of it. The amount of intensity reflected by the eye is almost constant when the eyelid is open. Blinks appear as rapid increases and decreases in intensity: when the eyelid closes, the light diffused by the eyelid changes and the same happens with the intensity recorded by the camera. The peaks in intensity represent the moment when the eyelid is completely closed.; ¶ 31: In FIG. 3, the normalized power curve (9) obtained for a sample blink is shown. The normalized power curve (9) makes it possible to clearly define the beginning at the first moment when it ceases to be zero and the end at the last, which returns to zero. Likewise, it is possible to locate the instants in which local maximums and minima occur (8), as well as the intersections with zero (7). All these, together with the values of the local maximums and minimums (8) provide information about the flicker and are used as characteristics to describe it.).

Geiss and Espinosa are considered to be analogous art as both pertain to user authentication using optical measurements.
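Espinosa's ¶ 29, quoted above, treats blinks as sharp peaks in the gray-level intensity summed over the eye region, one value per frame. A toy sketch of counting blinks from such a signal (the threshold and the sample signal are illustrative assumptions, not values from Espinosa):

```python
# Sketch of the blink-counting idea Espinosa describes: sum the gray
# levels of the eye region per frame, then treat sharp peaks in that
# intensity signal (eyelid fully closed) as blinks. Threshold and
# sample data are illustrative assumptions.

def count_blinks(intensity, threshold):
    """Count rising crossings of `threshold` in a per-frame
    intensity signal (one value per frame)."""
    blinks, above = 0, False
    for v in intensity:
        if v >= threshold and not above:
            blinks += 1
        above = v >= threshold
    return blinks

# Baseline near 100 with two intensity peaks, i.e. two blinks.
signal = [100, 102, 180, 250, 170, 101, 99, 190, 260, 150, 100]
print(count_blinks(signal, 150))  # 2
```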
Therefore, it would have been obvious to one of ordinary skill in the art to combine the input method (as taught by Geiss) and the method for biometric authentication by means of blink recognition (as taught by Espinosa) before the effective filing date of the claimed invention. The motivation for this combination of references would be that the method of Espinosa utilizes multiclass classification during the classification stage, thus improving classification results (see ¶ 14). This motivation for the combination of Geiss and Espinosa is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141 (III).

Regarding claim 12, Geiss teaches the computing system of claim 10. Additionally, Espinosa teaches wherein the involuntary eye movements include a duration of pause of the user or a blink count of the user in response to the display of the pattern of images on the display device (¶ 29: The energy contained in each region of interest of each frame is obtained by adding the value of gray levels of each pixel of it. The amount of intensity reflected by the eye is almost constant when the eyelid is open. Blinks appear as rapid increases and decreases in intensity: when the eyelid closes, the light diffused by the eyelid changes and the same happens with the intensity recorded by the camera. The peaks in intensity represent the moment when the eyelid is completely closed.; ¶ 31: In FIG. 3, the normalized power curve (9) obtained for a sample blink is shown. The normalized power curve (9) makes it possible to clearly define the beginning at the first moment when it ceases to be zero and the end at the last, which returns to zero. Likewise, it is possible to locate the instants in which local maximums and minima occur (8), as well as the intersections with zero (7). All these, together with the values of the local maximums and minimums (8) provide information about the flicker and are used as characteristics to describe it.).

Regarding claim 20, the Geiss and Gordon combination teaches the non-transitory computer-readable storage medium of claim 15. Additionally, Espinosa teaches wherein the involuntary eye tracking data comprises a blink count of the user in response to displaying the sequence of images to the user of the HMD (¶ 29: The energy contained in each region of interest of each frame is obtained by adding the value of gray levels of each pixel of it. The amount of intensity reflected by the eye is almost constant when the eyelid is open. Blinks appear as rapid increases and decreases in intensity: when the eyelid closes, the light diffused by the eyelid changes and the same happens with the intensity recorded by the camera. The peaks in intensity represent the moment when the eyelid is completely closed.; ¶ 31: In FIG. 3, the normalized power curve (9) obtained for a sample blink is shown. The normalized power curve (9) makes it possible to clearly define the beginning at the first moment when it ceases to be zero and the end at the last, which returns to zero. Likewise, it is possible to locate the instants in which local maximums and minima occur (8), as well as the intersections with zero (7). All these, together with the values of the local maximums and minimums (8) provide information about the flicker and are used as characteristics to describe it.).

Claims 8, 9, 13 – 15, 19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Geiss et al (U.S. Patent Publication No. 2015/0084864 A1, hereinafter “Geiss”) in view of Gordon et al (U.S. Patent Publication No. 2017/0346817 A1, hereinafter “Gordon”).

Regarding claim 8, Geiss teaches the method of claim 1.
Geiss does not explicitly teach wherein the database indicating the habitual eye tracking data for the user is stored in a cloud-based data repository to be ingested by a machine learning computing system.

However, Gordon does teach wherein the database indicating the habitual eye tracking data for the user is stored in a cloud-based data repository to be ingested by a machine learning computing system (¶ 0085: The machine learning models may be stored locally, in memory of the computing device 800, or remotely such as in memory of a service provider (e.g., service provider 102). The model repository 818 may include one or more models for one or more users. For instance, in some examples, the model repository 818 may include a separate model for each of multiple different users. Additionally or alternatively, model repository 818 may include multiple different models for multiple different resources.).

Geiss and Gordon are considered to be analogous art as both pertain to user authentication using optical measurements. Therefore, it would have been obvious to one of ordinary skill in the art to combine the input method (as taught by Geiss) and the system for authentication based on gaze (as taught by Gordon) before the effective filing date of the claimed invention. The motivation for this combination of references would be that the method of Gordon reduces the ability for the response of the user to be spoofed by storing neurological or physiological responses that are unique and personal to the user (see ¶ 0005). This motivation for the combination of Geiss and Gordon is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141 (III).
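Gordon's ¶ 0085, quoted above, describes a model repository holding separate models per user and, alternatively, per resource. A minimal sketch of such a keyed store (the class and method names are our illustration, not Gordon's API; the store could equally live locally or with a remote service provider):

```python
# Toy sketch of a per-user model repository in the spirit of Gordon
# ¶ 0085: models keyed by user and, optionally, by resource. Class
# and method names are illustrative assumptions, not Gordon's API.

class ModelRepository:
    def __init__(self):
        self._models = {}  # (user_id, resource) -> model object

    def put(self, user_id, model, resource="default"):
        """Store a model for a user, optionally scoped to a resource."""
        self._models[(user_id, resource)] = model

    def get(self, user_id, resource="default"):
        """Return the stored model, or None if absent."""
        return self._models.get((user_id, resource))

repo = ModelRepository()
repo.put("alice", {"mean_dwell_s": 0.42})
repo.put("alice", {"mean_dwell_s": 0.61}, resource="banking_app")
assert repo.get("alice")["mean_dwell_s"] == 0.42
assert repo.get("bob") is None
```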
Regarding claim 9, Geiss teaches the method of claim 1. Additionally, Gordon teaches further comprising:

collecting habitual eye tracking data for a plurality of users (¶ 0029: In such instances, a generic (i.e., non-user specific) background model may be established in advance based on offline training data of multiple users experiencing a predefined set of stimuli (e.g., looking at a predefined set of images) and recording their gaze tracking data and physiological data in response to the predefined set of stimuli.);

determining a habitual profile for each subset of the plurality of users (¶ 0030: In some examples, during the offline training, users may be instructed to look at one or more predefined locations in the images or other stimuli. However, in other examples, the users need not be instructed to look at predefined locations within the images or other stimuli.); and

identifying a habitual profile for the user based on the maintained habitual eye tracking data for the user (¶ 0032: When the user next attempts to access the resource, a new instance of the user data (i.e., login data) can be compared against the background model and/or the user specific model, or alternatively a new model can be built for comparison to the trained model. The comparison can be done using, for example, a maximum likelihood of the respective models, comparison in model space (e.g., via ivectors), or the like.);

wherein the user is authorized based on the sensed eye tracking data and the identified habitual profile for the user (¶ 0032: A determination of whether or not to authenticate the user to access the resource can then be made based on whether or not a result of the comparison exceeds a threshold of similarity.).

Regarding claim 13, Geiss teaches the computing system of claim 10.
Additionally, Gordon teaches wherein the user is authenticated by querying a machine learning computing system to authorize the user of the computing system based on the involuntary eye movements and the stored user preference information (¶ 0061: The computing device 108(3) obtains login gaze tracking data 306(1)-306(5) (collectively "gaze tracking data 306") corresponding to gaze of the user, and physiological data 308(1)-308(5) (collectively "physiological data 308") including measurements of a physiological condition of the user at times that the user is viewing each image via the authentication interface 302. The gaze tracking data 306 and physiological data 308 comprise at least part of login data 310 collected during the authentication process 300.; ¶ 0062: All or part of the login data 310 may then be compared to the machine learning model 202 of the user.).

Regarding claim 14, Geiss teaches the computing system of claim 10. Additionally, Gordon teaches wherein the stored user preference information is maintained in a cloud-based data repository (¶ 0085: The machine learning models may be stored locally, in memory of the computing device 800, or remotely such as in memory of a service provider (e.g., service provider 102). The model repository 818 may include one or more models for one or more users. For instance, in some examples, the model repository 818 may include a separate model for each of multiple different users. Additionally or alternatively, model repository 818 may include multiple different models for multiple different resources.).
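Gordon's ¶ 0032 and ¶ 0062, quoted above, describe comparing login data against a stored user-specific model and authenticating when the comparison exceeds a similarity threshold. A simplified sketch of that comparison (the feature vector, the cosine-similarity measure, and the threshold are our assumptions; Gordon mentions maximum-likelihood and i-vector comparisons):

```python
# Simplified sketch of comparing login features against a stored
# per-user model, in the spirit of Gordon ¶ 0032 / ¶ 0062. Cosine
# similarity and the 0.95 threshold are illustrative assumptions.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def authenticate(login_features, user_model, threshold=0.95):
    """Authorize only if the login vector is close to the stored model."""
    return cosine_similarity(login_features, user_model) >= threshold

# Hypothetical features: dwell time, saccade speed, blink rate.
stored_model = [0.42, 1.10, 0.75]
assert authenticate([0.40, 1.12, 0.74], stored_model)
assert not authenticate([1.50, 0.10, 0.02], stored_model)
```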
Regarding claim 15, Geiss teaches a non-transitory computer-readable storage medium storing instructions that, when executed by a processor (¶ 0074: Computer storage media can include volatile and nonvolatile, non-transitory, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.), cause the processor to:

maintaining user preference data (¶ 0031: In addition to instructions that may be executed by the processor 112, the memory 114 may store data that may include a set of calibrated wearer eye pupil positions and a collection of past eye pupil positions. Thus, the memory 114 may function as a database of information related to gaze direction and location.);

receive involuntary eye tracking data in response to displaying a sequence of images to a user of a head mountable device (HMD) (¶ 0024: The eye tracking system 230 may, for example, track movements of an eye pupil 404 and a gaze axis 406 associated with the eye 214 and eye pupil 404. As the eye 214 or eye pupil 404 moves, the eye tracking system 230 may track a gaze location 408 on the HMD associated with the gaze axis 406.; ¶ 0056: The processor may also receive information associated with temporal characteristics of eye movement of the user between gaze locations of the sequence of gaze locations.
The temporal characteristics may include time periods elapsed between the gaze locations.), wherein sensing the habitual eye tracking data comprises detecting a variation of a pupil size of the user in response to the display of the plurality of images in the different areas on the display device of the HMD (¶ 0031: Calibrated wearer eye pupil positions may include, for instance, information regarding extents or range of an eye pupil movement (right/left and upwards/downwards), and relative position of eyes of the wearer with respect to the HMD.; ¶ 0036: In one example, an infrared light source or sources integrated into the eye tracking system 230 may illuminate the eye 214 of the wearer, and a reflected infrared light may be collected with an infrared camera to track eye or eye-pupil movement.); and

query the machine learning computing system to authorize the user of the HMD based on the received involuntary eye tracking data and the user preference data (¶ 0018: The responsiveness metric may be determined to be less than a predetermined threshold indicating that the user identified the content personalized to the user within a predetermined time period. Identifying the content personalized to the user within the predetermined time period that may indicate familiarity with the content personalized to the user and the user may be authenticated.).

Additionally, Gordon teaches maintaining user preference data in a cloud-based repository to be ingested by a machine learning computing system (¶ 0085: The machine learning models may be stored locally, in memory of the computing device 800, or remotely such as in memory of a service provider (e.g., service provider 102). The model repository 818 may include one or more models for one or more users. For instance, in some examples, the model repository 818 may include a separate model for each of multiple different users.
Additionally or alternatively, model repository 818 may include multiple different models for multiple different resources.); and

query the machine learning computing system to authorize the user of the HMD based on the received involuntary eye tracking data and the user preference data maintained in the cloud-based data repository (¶ 0061: The computing device 108(3) obtains login gaze tracking data 306(1)-306(5) (collectively "gaze tracking data 306") corresponding to gaze of the user, and physiological data 308(1)-308(5) (collectively "physiological data 308") including measurements of a physiological condition of the user at times that the user is viewing each image via the authentication interface 302. The gaze tracking data 306 and physiological data 308 comprise at least part of login data 310 collected during the authentication process 300… In one example, the gaze tracking data 306 may be obtained from a user-facing camera 312 of the computing device 108(3) and the physiological data may be obtained from a touch surface 314 of the computing device 108(3) and/or a wearable device 108(4) in wireless communication with the computing device 108(3).; ¶ 0062: All or part of the login data 310 may then be compared to the machine learning model 202 of the user.).

Regarding claim 19, the Geiss and Gordon combination teaches the non-transitory computer-readable storage medium of claim 15. Additionally, Geiss teaches wherein the involuntary eye tracking data comprises a duration of a pause of the user in response to displaying the sequence of images to the user of the HMD (¶ 0018: The user may be able to identify the content personalized to the user faster than another person who may not be as familiar as the user with the content personalized to the user.
The wearable computing system may determine a responsiveness metric that includes a time period elapsed between generating the display of the random content and determining that the gaze location of the eye of the user substantially matches the predetermined location on the HMD of the content personalized to the user. The responsiveness metric may be determined to be less than a predetermined threshold indicating that the user identified the content personalized to the user within a predetermined time period. (emphasis added)).

Regarding claim 21, claim 21 has been analyzed with regard to claim 11 and is rejected for the same reasons of obviousness as set forth above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bradski et al (U.S. Patent Publication No. 2016/0358181 A1) teaches a head-mounted device which performs user authentication based on capturing biometric data of the user. The device is capable of eye tracking of the user, including eye movement patterns, blinking patterns, eye vergence, eye color, iris patterns, retinal patterns, etc. This information is used to verify the identity of the user.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW JONES, whose telephone number is (703) 756-4573. The examiner can normally be reached Monday - Friday, 8:00-5:00 EST, off every other Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW B. JONES/
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667
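The responsiveness-based authentication the examiner cites from Geiss ¶ 0018 reduces to a timing check: display content personalized to the user at a predetermined location, measure how long the wearer's gaze takes to land on it, and authenticate if that elapsed time is under a threshold. A minimal sketch of that logic, purely for illustration (the function name, data shapes, and threshold value are assumptions, not Geiss's actual implementation):

```python
# Hypothetical threshold: how quickly a familiar user is expected to
# locate their personalized content among random content (seconds).
RESPONSE_THRESHOLD_S = 1.5

def authenticate_by_responsiveness(display_time_s, gaze_samples,
                                   target_region,
                                   threshold_s=RESPONSE_THRESHOLD_S):
    """Return True if the wearer's gaze reached the personalized
    content's predetermined location within the threshold.

    gaze_samples: iterable of (timestamp_s, (x, y)) gaze fixations.
    target_region: ((x_min, y_min), (x_max, y_max)) bounding box of
        the personalized content on the HMD display.
    """
    (x_min, y_min), (x_max, y_max) = target_region
    for t, (x, y) in gaze_samples:
        if x_min <= x <= x_max and y_min <= y <= y_max:
            # Responsiveness metric: elapsed time from display to match.
            responsiveness = t - display_time_s
            return responsiveness < threshold_s
    return False  # gaze never matched the predetermined location
```

Under this reading, an impostor unfamiliar with the personalized content produces a responsiveness metric above the threshold and is not authenticated.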

Prosecution Timeline

Feb 02, 2024
Application Filed
Dec 02, 2025
Non-Final Rejection — §102, §103
Feb 18, 2026
Response Filed
Mar 25, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599285
ANALYSIS OF IN-VIVO IMAGES USING CONNECTED GRAPH COMPONENTS
2y 5m to grant · Granted Apr 14, 2026
Patent 12587607
CORRECTION OF COLOR TINTED PIXELS CAPTURED IN LOW-LIGHT CONDITIONS
2y 5m to grant · Granted Mar 24, 2026
Patent 12586201
ORAL IMAGE PROCESSING DEVICE AND ORAL IMAGE PROCESSING METHOD
2y 5m to grant · Granted Mar 24, 2026
Patent 12586213
METHOD AND SYSTEM FOR SUPPORTING MOVEMENT OF MOBILE OBJECT
2y 5m to grant · Granted Mar 24, 2026
Patent 12573222
DETECTING RELIABILITY USING AUGMENTED REALITY
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
90%
With Interview (+18.9%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 74 resolved cases by this examiner. Grant probability derived from career allow rate.
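The projection figures appear to follow directly from the examiner statistics shown above. A hedged reconstruction (the additive lift model and the rounding convention are assumptions inferred from the displayed numbers):

```python
# Reconstructing the projection figures from the examiner's career data
# (53 granted out of 74 resolved cases, stated +18.9% interview lift).
granted, resolved = 53, 74

allow_rate = granted / resolved        # ~0.716, displayed as 72%
interview_lift = 0.189                 # the stated +18.9% interview lift

# One plausible model: the lift adds on top of the base allow rate.
with_interview = allow_rate + interview_lift   # ~0.905, displayed as ~90%

print(f"{allow_rate:.1%}")       # 71.6%
print(f"{with_interview:.1%}")   # 90.5%
```

Note that the tool could instead be measuring the with-interview cohort's allow rate directly and reporting the lift as the difference; the displayed rounding does not distinguish the two.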
