Prosecution Insights
Last updated: April 19, 2026
Application No. 17/290,740

PASSWORDLESS AUTHENTICATION SYSTEMS AND METHODS

Status: Final Rejection (§103)
Filed: Apr 30, 2021
Examiner: RASHID, HARUNUR
Art Unit: 2497
Tech Center: 2400 — Computer Networks
Assignee: Orchid Authentication Systems Inc.
OA Round: 4 (Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (473 granted / 620 resolved; +18.3% vs TC avg)
Interview Lift: +36.9% — strong (allow rate among resolved cases with an interview vs. without)
Typical Timeline: 3y 4m average prosecution; 25 applications currently pending
Career History: 645 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§102: 5.0% (-35.0% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 620 resolved cases
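The deltas pin down the Tech Center baseline by simple subtraction, and every statute resolves to the same 40.0% estimate. A minimal sketch of that arithmetic, assuming each figure is the share of this examiner's rejections citing that statute (the page does not define the metric):

```python
# Recover the Tech Center average implied by each displayed delta.
# Assumption: each percentage is the share of this examiner's rejections
# citing that statute; the dashboard does not say so explicitly.

examiner = {"101": 12.3, "102": 5.0, "103": 59.2, "112": 8.0}   # percent
delta    = {"101": -27.7, "102": -35.0, "103": 19.2, "112": -32.0}

for statute, share in examiner.items():
    tc_avg = share - delta[statute]  # delta = examiner share - TC average
    print(f"§{statute}: examiner {share:.1f}% vs TC avg {tc_avg:.1f}% "
          f"({delta[statute]:+.1f} pts)")
# Every statute yields tc_avg = 40.0, so the deltas share one baseline.
```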

Office Action

§103
DETAILED ACTION

1. Claims 22-25 are pending in this examination.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

4.1. Applicant's arguments filed 12/31/2025 have been fully considered but they are not persuasive.

4.2. Applicant argues, in substance, that "The Office Action does not articulate a sufficient motivation, with rational underpinning, to modify and combine the cited references in the particular manner required by independent claim" ... "An obviousness rejection must be supported by articulated reasoning with rational underpinning and may not be premised on hindsight reconstruction or a retrospective assembly of features drawn from multiple references..." ... "This reflects a post hoc mapping of claim elements to disparate disclosures, rather than a reasoned analysis of what the prior art as a whole would have taught or suggested to a skilled artisan. Accordingly, the rejection is deficient under § 103" ... "[The] Action does not identify any teaching or suggestion in the art that would motivate a skilled artisan to combine Dewan's device-level facial recognition flow, Alikhani's platform-centric concurrency, and Karmarkar's eye-tracking behavioral history into the precise device-level architecture as claimed. Instead, the rejection assumes the claimed architecture and then retrofits pieces of it to disparate disclosures in the prior art, clearly hindsight reconstruction" (remarks, pages 9-13). The Examiner respectfully disagrees: sufficient motivation has been provided that one of ordinary skill in the art would find it obvious to combine the teachings of Dewan, Alikhani and Karmarkar.

4.3. In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). The rejections relied on a combination of references where the sole motivation to combine came from the references, not from Applicant's specification (please see below).

4.4. Dewan, Alikhani and Karmarkar are analogous art; it has been held that a prior art reference must either be in the field of applicant's endeavor or, if not, then be reasonably pertinent to the particular problem with which the applicant was concerned, in order to be relied upon as a basis for rejection of the claimed invention. See In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992).
Furthermore, the Supreme Court has determined that the conclusion of obviousness can be based on the interrelated teachings of multiple patents, the effects of demands known to the design community or present in the marketplace, and the background knowledge possessed by a person having ordinary skill in the art. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007). The skilled artisan would "be able to fit the teachings of multiple patents together like pieces of a puzzle" since the skilled artisan is "a person of ordinary creativity, not an automaton." Id. at 420-21. Combining the arts was not "uniquely challenging or difficult for one of ordinary skill in the art." Leapfrog Enters., Inc. v. Fisher-Price, Inc., 485 F.3d 1157, 1162 (Fed. Cir. 2007) (citing KSR at 418). The Examiner's proffered combination of familiar prior art elements according to their established functions (see below) would have conveyed a reasonable expectation of success to a person of ordinary skill having common sense at the time of the application filing.

Dewan discloses in paragraph [0020] (also see [0027] and [0035]): "The device 105 can include an image capture section 205 for capturing one or more images of the user's face 120 using the camera 110."; para [0027]: "After matching the face of the user 120 with the previously collected face template 230, the face of the user can be continually tracked within the purview 130 of the camera 110."

Alikhani discloses (6:20-30): "The live verification process shown in FIG. 3 involves voice verification, liveliness verification and face verification. The voice verification, liveliness verification and face verification are performed at the same time. When live verification is initiated, the electronic device begins tracking the user's eye movement in step 304. Concurrently, the user speaks the displayed phrases/digits in step 306. This audio is captured by a verification agent running on the user device. The verification agent running on the user device also captures one or more images of the user's face in step 308. Verification is performed in steps 310, 312 and 314." (also see 4:5-15, claim 1).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Dewan with the teaching of Alikhani by including the feature of concurrent verification, in order for Dewan's system to grant or deny a verification request by the verification platform based at least in part on the verification input. "A method comprises receiving, from a verification platform, a notification regarding a verification request and providing, to the verification platform, verification input responsive to the notification. The notification comprises one or more patterns for display at one or more specified locations on a screen of a device and the verification input comprises one or more captured images, captured audio and tracked eye movement. The verification request is granted or denied by the verification platform based at least in part on the verification input" (Alikhani, 1:60-67).

Karmarkar discloses ([0014], [0067]): "The eye-tracking data can be obtained with an eye-tracking system. In step 606, a user attribute is determined based on the eye-tracking data. Example user attributes that can be determined with eye-tracking data include, inter alia: whether the user is a person (e.g. not an internet bot); ... a demographic/cultural characteristic of a user can be determined by presenting an image(s) to a user and then comparing the user's eye-tracking data while viewing the image with pre-obtained eye-tracking data sets of various demographic/cultural groups. In step 608, the user can be enabled to access a digital resource when the user attribute is associated with a permission to access the digital resource. In one example, associations can be implemented with tables that match user attributes (e.g. a user's identity, a user's authenticated state, etc.) with a particular digital resource. It is noted that in some embodiments, eye-tracking data can be combined with other bioresponse data (e.g. galvanic skin response (GSR), heart rate, etc.) to determine an attribute of a user. For example, both eye-tracking data and a user's heart rate can be utilized to determine a user attribute. Various types of bioresponse sensors can be utilized to obtain the bioresponse data (e.g. digital imaging processes that provide information as to user's body temperature and/or heart rate, heart-rate monitors, body temperature sensors, GSR sensors, brain-computer interfaces such as an Emotiv®, a NeuroSky BCI® and/or another electroencephalographic system, ascertaining a user's bioimpedance value, iris scanners, fingerprint scanners, other biometric sensors and the like)." (also see [0066]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Dewan and Alikhani with the teaching of Karmarkar by including verification against stored eye-tracking baselines, in order for Dewan's system to use eye-tracking data in a challenge/response test to authenticate a human user. "A user is instructed to answer a query about the digital image. A user's eye-tracking data is received for a period that the user views the digital image. The user's eye-tracking data is compared with one or more baseline datasets. A file or a service is provided to the user when the user's eye-tracking data substantially matches the one or more baseline datasets. Optionally, a user's bioresponse data can be received for the period that the user views the digital image. The user's eye-tracking data and the user's bioresponse data can be compared with the one or more baseline datasets. The file or the service can be provided to the user when the user's eye-tracking data and the user's bioresponse data substantially matches the one or more baseline datasets" (Karmarkar, [0014]). By this rationale, the Examiner has provided sufficient motivation for the combination of Dewan, Alikhani and Karmarkar.
4.5. Applicant argues, "Dewan does not disclose 'imaging... face of a user to authenticate the user... based upon the face; and concurrently... verifying... tracked facial movement is consistent with (a) the challenge command and (b) a user record...' as required by independent claim 22. Equally important, Dewan does not disclose any persistent 'user record containing behavioral biometric characteristics of the user'. Any head/face movement in Dewan is evaluated only to determine immediate challenge success or failure; Dewan does not teach storing such movement as a 'user record containing behavioral biometric characteristics of the user' or 'verifying the tracked facial movement is consistent with... user record'. Dewan therefore teaches neither the concurrent facial-movement tracking required by independent claim 22, nor verification of that movement against a stored behavioral biometric record, as also required by independent claim 22." (remarks, pages 10-11).

4.6. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). The Examiner disagrees; the combination of Dewan, Alikhani and Karmarkar discloses the above features. Dewan discloses imaging, at a mobile device, a face of a user to authenticate the user to the mobile device based upon the face, and, [with] the step of imaging, tracking facial movement of the user in response to a challenge command ([0020]: "The device 105 can include an image capture section 205 for capturing one or more images of the user's face 120 using the camera 110."; para [0027]: "After matching the face of the user 120 with the previously collected face template 230, the face of the user can be continually tracked within the purview 130 of the camera 110. In other words, the device can implement face tracking to keep the association with the particular user 120 as they move freely within the camera's view."; para [0035]: "The challenge section 235 displays the secret image on the PAVP enabled display 115 and moves the secret image 135 toward the modified version 140 of the secret image, as shown by arrow 170, in response to movement of the user's face 120. For example, if the captured images show that the user's face 120 is moving, then the secret image 135 will move in accordance with those movements."; para [0038]: "the user 120 has pre-knowledge of the secret image and must also have a real human face capable of directing the secret image across the display 115 to the correctly designated location.").

4.7. Applicant argues, in substance, that "Alikhani does not disclose storing eye-movement trajectories as 'a user record containing behavioral biometric characteristics' for subsequent sessions or treating eye movement as an ongoing behavioral template." ... "it does not teach authentication 'to the mobile device based upon the face,' as recited in independent claim 22, nor 'verifying the tracked facial movement is consistent with... user record containing behavioral biometric characteristics of the user'" (remarks, pages 11-12).

4.8. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). The Examiner respectfully disagrees; the combination of Dewan, Alikhani and Karmarkar discloses the above features. Alikhani discloses, concurrently with the step of imaging, tracking facial movement of the user in response by the mobile device to a challenge command communicated to the mobile device (6:20-30: "The live verification process shown in FIG. 3 involves voice verification, liveliness verification and face verification. The voice verification, liveliness verification and face verification are performed at the same time. When live verification is initiated, the electronic device begins tracking the user's eye movement in step 304. Concurrently, the user speaks the displayed phrases/digits in step 306. This audio is captured by a verification agent running on the user device. The verification agent running on the user device also captures one or more images of the user's face in step 308. Verification is performed in steps 310, 312 and 314."; also see 4:5-15, claim 1).

4.9. Applicant argues, "Karmarkar does not teach 'imaging, at a mobile device, face of a user to authenticate the user to the mobile device based upon the face,' as required by independent claim 22, or a layered architecture in which such authentication is performed 'concurrently with... tracking facial movement,' and then 'verifying the tracked facial movement is consistent with... the challenge command... and... a user record containing behavioral biometric characteristics of the user', as recited in independent claim 22. Karmarkar's authentication is built around eye-tracking behavior; it does not use 'imaging... face of a user to authenticate' as a prerequisite, and it does not broaden behavioral metrics to encompass the full range of facial movements expressly recited in the present claims." (remarks, pages 11-12).

4.10. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

4.11. The Examiner respectfully disagrees; the combination of Dewan, Alikhani and Karmarkar discloses the above features. Karmarkar discloses wherein the authenticating the user further comprises verifying the tracked facial movement is consistent with (a) the challenge command and (b) a user record containing behavioral biometric characteristics of the user ([0014], [0067]: "The eye-tracking data can be obtained with an eye-tracking system. In step 606, a user attribute is determined based on the eye-tracking data. Example user attributes that can be determined with eye-tracking data include, inter alia: whether the user is a person (e.g. not an internet bot); ... a demographic/cultural characteristic of a user can be determined by presenting an image(s) to a user and then comparing the user's eye-tracking data while viewing the image with pre-obtained eye-tracking data sets of various demographic/cultural groups. In step 608, the user can be enabled to access a digital resource when the user attribute is associated with a permission to access the digital resource. In one example, associations can be implemented with tables that match user attributes (e.g. a user's identity, a user's authenticated state, etc.) with a particular digital resource. It is noted that in some embodiments, eye-tracking data can be combined with other bioresponse data (e.g. galvanic skin response (GSR), heart rate, etc.) to determine an attribute of a user. For example, both eye-tracking data and a user's heart rate can be utilized to determine a user attribute. Various types of bioresponse sensors can be utilized to obtain the bioresponse data (e.g. digital imaging processes that provide information as to user's body temperature and/or heart rate, heart-rate monitors, body temperature sensors, GSR sensors, brain-computer interfaces such as an Emotiv®, a NeuroSky BCI® and/or another electroencephalographic system, ascertaining a user's bioimpedance value, iris scanners, fingerprint scanners, other biometric sensors and the like)."; also see [0066]).

4.12. Applicant argues, "Dewan does not teach gaze-direction tracking in this manner; Alikhani's transient eye-movement data is confined to application-level liveness confirmation during a verification session; and Karmarkar's eye-tracking baselines are not combined with device-level face authentication and concurrent tracking as in independent claim 22." (remarks, pages 13-14).

4.13. The Examiner disagrees; claim 23 recites "communicating, to an authenticator, (a) an indication of an identity obtained in the step of imaging and (b) a recording of facial movement obtained in the step of tracking," which is disclosed by Dewan in paragraphs [0022] and [0025]-[0027].

4.14. Applicant argues, in substance, that "The cited art does not disclose or suggest using 'facial movement' (including those recited in claim 25) as 'a behavioral biometric', that is captured concurrently with authenticating 'user to the mobile device based upon the face', 'in response... to a challenge command...' and 'verifying the tracked facial movement is consistent with (a) the challenge command and (b) a user record containing behavioral biometric characteristics of the user', as required by independent claim 22" (remarks, pages 9-13).

4.15. The Examiner respectfully disagrees; claim 25 recites the facial movement comprising one or more of blinking, smiling, speaking, mouthing, head tilting, head shaking, nodding, and yawning, which is disclosed by Alikhani in column 4, lines 5-40: "In some embodiments, the input includes one or more captured images, captured audio, and tracked eye movement. The captured images may be of a face of the user providing the verification input. The captured audio may be a recording of the user speaking the one or more patterns displayed on the screen, while the tracked eye movement includes data indicating a location on the screen that the user was viewing while speaking the one or more patterns. The primary or secondary device encrypts and sends the verification input to the verification platform API, which relays the information to the verification platform server. (20) The verification platform server compares the verification input to determine whether to grant or deny a verification request. The captured images are analyzed to determine if the captured images match a facial profile of the user. It is to be appreciated that the verification input sent from the primary or secondary device need not contain the actual captured images. Instead, the verification input may comprise some data or other information derived from such images, such as one or more points of comparison sufficient to determine if the captured images match a facial profile of a user. (21) The captured audio is analyzed to determine if the captured audio matches a voice profile for the user. The captured audio is further analyzed to determine whether the user spoke the phrases or digits of the one or more patterns. Similar to the captured images, the verification input need not contain the actual captured audio.
Instead, the verification input may comprise some data or other information derived from the captured audio such as one or more points of comparison sufficient to determine if the captured audio matches a voice profile of a user. (22) The tracked eye movement is analyzed to determine what location or locations of a screen of the primary or secondary device that a user was viewing while speaking the phrases or digits of the one or more patterns. The one or more patterns sent to the primary or secondary device are for display at specified locations on a screen. Thus, the tracked eye movement can be compared to see if the locations viewed by the user match the specified locations." Therefore, in view of the above reasons, the rejections are maintained.

Claim Rejections - 35 U.S.C. § 103

5.1. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

5.2. Claims 22-25 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application No. 20140230046 to Dewan et al. ("Dewan") in view of US Patent No. 9294474 issued to Alikhani et al. ("Alikhani") and further in view of US Patent Application No. 20130044055 to Karmarkar et al. ("Karmarkar").

As per claim 22, Dewan discloses a facial-movement tracking method, comprising: imaging, at a mobile device, a face of a user to authenticate the user to the mobile device based upon the face ([0020]: "The device 105 can include an image capture section 205 for capturing one or more images of the user's face 120 using the camera 110."; para [0027]: "After matching the face of the user 120 with the previously collected face template 230, the face of the user can be continually tracked within the purview 130 of the camera 110."); and, [with] the step of imaging, tracking facial movement of the user in response to a challenge command ([0027]: "After matching the face of the user 120 with the previously collected face template 230, the face of the user can be continually tracked within the purview 130 of the camera 110. In other words, the device can implement face tracking to keep the association with the particular user 120 as they move freely within the camera's view."; para [0035]: "The challenge section 235 displays the secret image on the PAVP enabled display 115 and moves the secret image 135 toward the modified version 140 of the secret image, as shown by arrow 170, in response to movement of the user's face 120. For example, if the captured images show that the user's face 120 is moving, then the secret image 135 will move in accordance with those movements."; para [0038]: "the user 120 has pre-knowledge of the secret image and must also have a real human face capable of directing the secret image across the display 115 to the correctly designated location.").

Dewan discloses that captured images show that the user's face 120 is moving, but Dewan does not explicitly disclose the following; however, in the same field of endeavor, Alikhani discloses, concurrently with the step of imaging, tracking facial movement of the user in response by the mobile device to a challenge command communicated to the mobile device (6:20-30: "The live verification process shown in FIG. 3 involves voice verification, liveliness verification and face verification. The voice verification, liveliness verification and face verification are performed at the same time. When live verification is initiated, the electronic device begins tracking the user's eye movement in step 304. Concurrently, the user speaks the displayed phrases/digits in step 306. This audio is captured by a verification agent running on the user device. The verification agent running on the user device also captures one or more images of the user's face in step 308. Verification is performed in steps 310, 312 and 314."; also see 4:5-15, claim 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Dewan with the teaching of Alikhani by including the feature of concurrent verification, in order for Dewan's system to grant or deny a verification request by the verification platform based at least in part on the verification input. "A method comprises receiving, from a verification platform, a notification regarding a verification request and providing, to the verification platform, verification input responsive to the notification. The notification comprises one or more patterns for display at one or more specified locations on a screen of a device and the verification input comprises one or more captured images, captured audio and tracked eye movement. The verification request is granted or denied by the verification platform based at least in part on the verification input" (Alikhani, 1:60-67).

Dewan and Alikhani do not explicitly disclose the following; however, in the same field of endeavor, Karmarkar discloses wherein the authenticating the user further comprises verifying the tracked facial movement is consistent with (a) the challenge command and (b) a user record containing behavioral biometric characteristics of the user ([0014], [0067]: "The eye-tracking data can be obtained with an eye-tracking system. In step 606, a user attribute is determined based on the eye-tracking data. Example user attributes that can be determined with eye-tracking data include, inter alia: whether the user is a person (e.g. not an internet bot); ... a demographic/cultural characteristic of a user can be determined by presenting an image(s) to a user and then comparing the user's eye-tracking data while viewing the image with pre-obtained eye-tracking data sets of various demographic/cultural groups. In step 608, the user can be enabled to access a digital resource when the user attribute is associated with a permission to access the digital resource. In one example, associations can be implemented with tables that match user attributes (e.g. a user's identity, a user's authenticated state, etc.) with a particular digital resource. It is noted that in some embodiments, eye-tracking data can be combined with other bioresponse data (e.g. galvanic skin response (GSR), heart rate, etc.) to determine an attribute of a user. For example, both eye-tracking data and a user's heart rate can be utilized to determine a user attribute. Various types of bioresponse sensors can be utilized to obtain the bioresponse data (e.g. digital imaging processes that provide information as to user's body temperature and/or heart rate, heart-rate monitors, body temperature sensors, GSR sensors, brain-computer interfaces such as an Emotiv®, a NeuroSky BCI® and/or another electroencephalographic system, ascertaining a user's bioimpedance value, iris scanners, fingerprint scanners, other biometric sensors and the like)."; also see [0066]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Dewan and Alikhani with the teaching of Karmarkar by including verification against stored eye-tracking baselines, in order for Dewan's system to use eye-tracking data in a challenge/response test to authenticate a human user. "A user is instructed to answer a query about the digital image. A user's eye-tracking data is received for a period that the user views the digital image. The user's eye-tracking data is compared with one or more baseline datasets. A file or a service is provided to the user when the user's eye-tracking data substantially matches the one or more baseline datasets. Optionally, a user's bioresponse data can be received for the period that the user views the digital image. The user's eye-tracking data and the user's bioresponse data can be compared with the one or more baseline datasets. The file or the service can be provided to the user when the user's eye-tracking data and the user's bioresponse data substantially matches the one or more baseline datasets" (Karmarkar, [0014]).

As per claim 23, the combination of Dewan, Alikhani and Karmarkar discloses the facial-movement tracking method of claim 22, further comprising: communicating, to an authenticator, (a) an indication of an identity obtained in the step of imaging and (b) a recording of facial movement obtained in the step of tracking (Dewan, [0022], [0025]-[0027]).

As per claim 24, the combination of Dewan, Alikhani and Karmarkar discloses the facial-movement tracking method of claim 23, further comprising: displaying a plurality of visual elements in a respective plurality of different local regions of a screen of the mobile device (Alikhani, 3:23-55); and, in the step of tracking, tracking gaze direction, of the user, at the different local regions of the screen, the recording of facial movement comprising eye movements (Alikhani, 4:35-25; also see 4:1-24). The motivation regarding the obviousness of claim 22 is also applied to claim 24.

As per claim 25, the combination of Dewan, Alikhani and Karmarkar discloses the facial-movement tracking method of claim 24, the facial movement comprising one or more of blinking, smiling, speaking, mouthing, head tilting, head shaking, nodding, and yawning (Alikhani, 4:5-40). The motivation regarding the obviousness of claim 22 is also applied to claim 25.

6.1. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure, as the prior art discloses many of the claim features (see form PTO-892).

6.2. a) US Patent Application No. 20160127359 to Minter et al. discloses in paragraph [0035]: "FIG. 3 is a drawing showing the method used in this invention to ensure that the on-line user 301 is a 'living breathing' person. The methodology requires the online user 301 to face the camera on the pervasive device 100 and be prompted to verbally utter concurrently a combination of randomly selected alphanumeric characters while facing a live video camera in the Pervasive Mobile Device 100, allowing the capture of the user's lip and eye movements. This 'live' data is processed by the authentication engine 101 aliveness test algorithm 303 to determine whether the introduced biometric sample is coming from a 'live source or person' and not a pervasive mobile device 100 generated (fake) biometric sample such as a still picture or a fake video facial image.
The captured live video frame sequence is instantaneously submitted to the authentication engine 101 for 'aliveness' determinations whereby the head, lip and eye movements are algorithmically analyzed to establish probabilistically that the user is without any doubt a living person. This makes the method virtually tamper-proof even by sophisticated and financially-motivated hackers."

b) US Patent Application No. 20130267204 to Schultz et al. discloses in paragraph [0089]: "In addition to facing the camera, the user 401 is also presented with an instruction 411 to recite all the digits ranging from 0 to 9, as shown in FIG. 4B. Alternatively, as an additional enrollment requirement and/or as a randomly generated authentication challenge, the instruction may be for the user to recite various random digits, as shown in FIG. 4C. The user recites the digits while simultaneously facing the camera of the device 403. In FIG. 4A, the recitation is depicted as transmission of an audio signal 413/sound from the user's mouth. As a result, biometric data is captured accordingly, and the user interface renders a status message 415 for indicating enrollment is underway. A microphone icon 417 is also presented to the user interface 405 for indicating an audio signal 413 is currently being detected and recorded concurrent with the capture of the video data 407 per the enrollment procedure. This video data may be of the user's face 409, but in other use cases, may include other images or parts of a user's body (e.g. mouth, eyes, hand, etc.). For example, a user's mouth movements may be recorded in conjunction with the user's audio speech as they say the numbers 0-9. This information may be parsed and used later in the authentication of the user similar to use under FIG. 4G below. By parsing the information the random user command of saying '3-7' may be used to authenticate the user by verifying the mouth movements in conjunction with the audio speech. Alternatively, the icon 417 may be a video camera icon for indicating video footage of the user is being captured per the enrollment process. While not shown, additional instructions and/or commands relative to the authentication procedure may be presented to the user interface accordingly, including those instructions pertaining to the capture of retinal, iris or vein characteristics."

c) US Patent No. 9600069 issued to Publicover et al. discloses: "Apparatus, systems, and methods are provided for substantially continuous biometric identification (CBID) of an individual using eye signals in real time. The apparatus is included within a wearable computing device with identification of the device wearer based on iris recognition within one or more cameras directed at one or both eyes, and/or other physiological, anatomical and/or behavioral measures. Verification of device user identity can be used to enable or disable the display of secure information. Identity verification can also be included within information that is transmitted from the device in order to determine appropriate security measures by remote processing units. The apparatus may be incorporated within wearable computing that performs other functions including vision correction, head-mounted display, viewing the surrounding environment using scene camera(s), recording audio data via a microphone, and/or other sensing equipment."

Conclusion

7. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARUNUR RASHID, whose telephone number is (571) 270-7195. The examiner can normally be reached 9 AM to 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Eleni A. Shiferaw, can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HARUNUR RASHID/
Primary Examiner, Art Unit 2497
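For orientation, the method at the center of this dispute (independent claim 22) combines three steps: device-level face authentication, concurrent tracking of facial movement elicited by a challenge command, and verification of that movement against both the challenge and a stored behavioral-biometric user record. A minimal sketch of that flow follows, assuming a hypothetical device API (capture_frame, capture_facial_movement, match_face_template, and the challenge/user_record methods are all placeholders); it illustrates the claim's structure, not the applicant's or any cited reference's actual implementation:

```python
import threading
import time

def authenticate(device, challenge, user_record, window_s=3.0) -> bool:
    """Sketch of a claim-22-style flow: face authentication plus concurrent,
    challenge-driven facial-movement verification. All device and record
    methods are hypothetical placeholders, not a real API."""
    movements = []
    stop = threading.Event()

    def track_movement():
        # Concurrently with imaging: record facial movement (blinks, nods,
        # gaze shifts, ...) while the challenge is on screen.
        while not stop.is_set():
            movements.append(device.capture_facial_movement())

    device.display(challenge)  # challenge command communicated to the device
    tracker = threading.Thread(target=track_movement)
    tracker.start()

    # Imaging step: authenticate the user to the device based upon the face.
    face_ok = device.match_face_template(device.capture_frame())

    time.sleep(window_s)       # let tracking cover the challenge window
    stop.set()
    tracker.join()

    # Verify the tracked movement against (a) the challenge command and
    # (b) the stored behavioral-biometric user record -- the element the
    # applicant argues no single cited reference supplies.
    movement_ok = (challenge.is_satisfied_by(movements)
                   and user_record.is_consistent_with(movements))
    return face_ok and movement_ok
```

The examiner's combination maps the three steps to Dewan (imaging and face tracking), Alikhani (concurrency with the challenge), and Karmarkar (comparison against stored baselines); the applicant's argument is that no reference ties them together at the device level.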

Prosecution Timeline

Apr 30, 2021: Application Filed
Mar 22, 2024: Non-Final Rejection — §103
Sep 26, 2024: Response Filed
Dec 14, 2024: Final Rejection — §103
Jun 18, 2025: Request for Continued Examination
Jun 22, 2025: Response after Non-Final Action
Jun 27, 2025: Non-Final Rejection — §103
Dec 31, 2025: Response Filed
Jan 24, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603869
PRIVACY SOLUTION FOR IMAGES LOCALLY GENERATED AND STORED IN EDGE SERVERS
2y 5m to grant • Granted Apr 14, 2026

Patent 12603758
METHOD, APPARATUS, AND COMPUTER PROGRAM FOR SETTING ENCRYPTION KEY IN WIRELESS COMMUNICATION SYSTEM, AND RECORDING MEDIUM FOR SAME
2y 5m to grant • Granted Apr 14, 2026

Patent 12593211
SELECTIVE VEHICLE SECURITY LOG DATA COMMUNICATION CONTROL
2y 5m to grant • Granted Mar 31, 2026

Patent 12592952
GRAPHICS PROCESSING UNIT OPTIMIZATION
2y 5m to grant • Granted Mar 31, 2026

Patent 12578927
METHOD FOR CALCULATING A TRANSITION FROM A BOOLEAN MASKING TO AN ARITHMETIC MASKING
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 76%
With Interview: 99% (+36.9%)
Median Time to Grant: 3y 4m
PTA Risk: High

Based on 620 resolved cases by this examiner. Grant probability derived from career allow rate.
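The note says grant probability is derived from the career allow rate, and that part of the arithmetic checks out directly (473/620 ≈ 76%). A minimal sketch, assuming the interview lift is expressed in percentage points over the without-interview subgroup (the page does not say, and the without-interview rate is implied rather than displayed):

```python
# Reproduce the projection figures from the examiner statistics shown above.
# Only 473/620 is given as raw counts; the interview-subgroup rate is an
# assumption backed out from the displayed 99% and +36.9% figures.

granted, resolved = 473, 620
career_allow = granted / resolved             # 0.763 -> displayed as 76%

with_interview = 0.99                         # displayed "With Interview"
lift_pts = 0.369                              # displayed "+36.9% Interview Lift"
without_interview = with_interview - lift_pts # ~0.621 (implied, not shown)

print(f"career allow rate: {career_allow:.1%}")
print(f"implied allow rate without interview: {without_interview:.1%}")
```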
