Prosecution Insights
Last updated: April 19, 2026
Application No. 17/940,205

Method and System for Authentication of a Subject by a Trusted Contact

Status: Non-Final OA (§103)
Filed: Sep 08, 2022
Examiner: ALMEIDA, DEVIN E
Art Unit: 2492
Tech Center: 2400 (Computer Networks)
Assignee: Lever Dynamics LLC
OA Round: 3 (Non-Final)

Predictions:
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 9m
Grant Probability With Interview: 82%

Examiner Intelligence

Career Allow Rate: 71% (above average; 421 granted / 592 resolved; +13.1% vs TC avg)
Interview Lift: +11.4% (moderate lift; among resolved cases with interview)
Avg Prosecution: 3y 9m (typical timeline)
Currently Pending: 35
Total Applications: 627 (career history, across all art units)

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 592 resolved cases.
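The statistics above can be reproduced from the raw counts the page reports; a quick sketch, assuming the percentages are simple ratios of the 421 granted / 592 resolved career totals and that each per-statute delta is measured against the Tech Center average:

```python
# Sanity-check the dashboard's headline figures from its raw counts.
# Assumes the rates are simple ratios of the reported counts.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

career = allow_rate(421, 592)                 # dashboard shows 71%
print(f"Career allow rate: {career:.1f}%")    # 71.1%

# Per-statute rates and their reported deltas vs the Tech Center average.
# The implied TC baseline is rate - delta.
statutes = {
    "§101": (7.7, -32.3),
    "§102": (24.6, -15.4),
    "§103": (53.4, +13.4),
    "§112": (8.1, -31.9),
}
for statute, (rate, delta) in statutes.items():
    print(f"{statute}: {rate}% examiner vs ~{rate - delta:.1f}% TC avg")
```

Notably, every statute line implies the same ~40.0% Tech Center baseline, consistent with the single "Tech Center average estimate" the chart note describes.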

Office Action

§103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/21/2025 has been entered. Claims 1-20 are pending, with claims 1 and 11 having been amended.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been received.

Response to Arguments

Applicant's arguments filed 11/21/2025 have been fully considered. Applicant's arguments with respect to the §103 rejection of claims 1 and 11, that Headley in view of Steelberg does not teach "that the trusted contact authenticate the subject to allow the subject or another authorized individual access to protected material," have been fully considered but are not persuasive. Headley teaches this limitation in column 5 lines 52-63, i.e. Within the context of a social network, the invention can also allow one user to authenticate a certified user to help the first user assess the certified user's trustworthiness before accepting messages, communications, content, or the like. For example, stored biometric templates or subsequent biometric responses to prompts made by the certified user can be viewed, in whole or in part, by other users before accepting contact or content form the certified user. For instance, when a user (certified or not) receives an invitation to join a group or share photographs; and in column 20 lines 7-15, i.e. An optional step 740 further comprises restricting some users to communicate only with certified users. This can comprise, for example, restricting those users to communicate only with certified users that match a criterion like a gender or an age or age range. Step 740 can be implemented, for instance, in the context of parental controls so that a child is restricted to communicating with, and exchanging content with, only those other users that are certified to be children below a certain age or within a specified range of ages. This clearly teaches the claim limitation, since certified users (i.e., users that have been authenticated by another) are given access to join a group or share photographs (i.e., protected material).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 4, 11, 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Headley (US 9,712,526) in view of Steelberg et al (US 2022/0269761).
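For orientation before the limitation-by-limitation mapping, the claim 1 flow at issue (server prompts the subject, stores video recordings of the prompted actions, and determines an authentication result from a trusted contact's per-recording evaluations) can be sketched in code. All identifiers below are illustrative assumptions, not the applicant's implementation:

```python
# Hypothetical sketch of the claim 1 flow: the server prompts the subject,
# stores the resulting recordings, and determines an authentication result
# from the trusted contact's evaluation responses. All names are invented.
from dataclasses import dataclass, field

@dataclass
class AuthenticationSession:
    actions: list[str]                      # e.g. "state the current date"
    recordings: dict[str, bytes] = field(default_factory=dict)
    evaluations: dict[str, bool] = field(default_factory=dict)

    def receive_recording(self, action: str, video: bytes) -> None:
        """Store a video recording of the subject performing one action."""
        self.recordings[action] = video

    def receive_evaluation(self, action: str, authentic: bool) -> None:
        """Record the trusted contact's response to one evaluation request."""
        self.evaluations[action] = authentic

    def authentication_result(self) -> bool:
        """True only if every recording was evaluated and judged authentic."""
        return (set(self.evaluations) == set(self.actions)
                and all(self.evaluations.values()))

session = AuthenticationSession(actions=["state the current date",
                                         "perform a hand gesture"])
session.receive_recording("state the current date", b"<video>")
session.receive_recording("perform a hand gesture", b"<video>")
session.receive_evaluation("state the current date", True)
session.receive_evaluation("perform a hand gesture", True)
print(session.authentication_result())  # True
```

The all-or-nothing result logic here is just one possible reading of "determining ... an authentication result based on the set of authentication evaluation responses"; claims 2 and 12 instead recite a confidence score.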
With respect to claim 1, Headley teaches a computer-implemented method for authentication of a subject by a trusted contact, the method including processes carried out by a server system, the processes comprising:

causing, by the server system, prompts to the subject to perform a set of actions, each action in the set of actions selected from the group consisting of stating a current date, stating a current time, stating a name of the subject, stating a current location of the subject, stating information known to the subject and the trusted contact, stating a phrase, performing a hand gesture, performing a head or body movement, and combinations thereof (see Headley figure 2 and column 9 line 49 - column 10 line 18 i.e. The user ID is further associated with a plurality of prompts in step 230. The prompts can include common prompts such as "Say your mother's maiden name," and "Sign your name on the signature pad." In some embodiments, the user selects some or all of the plurality of prompts from a list of predefined prompts such as the common prompts noted above. The prompts selected by the user are then associated with the user ID. In other embodiments, a plurality of predefined prompts is automatically assigned to the user. In some embodiments, still other prompts that can be associated with the user ID are personalized prompts);

receiving, by the server system, data, the data including a set of video recordings of the subject performing the set of actions (see Headley figure 2 and column 10 lines 19-34 i.e. In step 240 each of the plurality of prompts is associated with a biometric template of the enrollee user. For example, where the prompt is an instruction to say some word or phrase, the biometric template can be a voice template derived from the user saying the word or phrase. Here, associating the prompt with the biometric template can include providing the prompt to the user and receiving audio data (e.g., a .wav file) of the user's response);

storing, by the server system, the set of recordings in a database (see Headley column 10 lines 35-47 i.e. Both biometric templates and prompts can be stored in association with the user ID in a database, for example);

transmitting, by the server system to a computing device of the trusted contact, an authentication request (see Headley column 19 lines 11-25 i.e. Method 600 also comprises a step 620 of providing a prompt to the first user and storing a biometric response of the first user thereto in association with the user ID. Step 620 can comprise the steps 340 and 350 of method 300 (FIG. 3), in some embodiments. It will be appreciated that various embodiments of method 600 can include some or all of the other steps of methods 200 and 300. Each user of the social network that follows the steps 210, 610, and 620 provides the social network with a user ID associated with two biometric samples, one recorded as a template, the other provided in response to a prompt, for example, while logging into the social network to access an account) that the trusted contact authenticate the subject to allow the subject or another authorized individual access to protected material (see Headley column 5 lines 52-63 i.e. Within the context of a social network, the invention can also allow one user to authenticate a certified user to help the first user assess the certified user's trustworthiness before accepting messages, communications, content, or the like. For example, stored biometric templates or subsequent biometric responses to prompts made by the certified user can be viewed, in whole or in part, by other users before accepting contact or content form the certified user. For instance, when a user (certified or not) receives an invitation to join a group or share photographs and column 20 lines 7-15 i.e. An optional step 740 further comprises restricting some users to communicate only with certified users. This can comprise, for example, restricting those users to communicate only with certified users that match a criterion like a gender or an age or age range. Step 740 can be implemented, for instance, in the context of parental controls so that a child is restricted to communicating with, and exchanging content with, only those other users that are certified to be children below a certain age or within a specified range of ages);

in response to the server system receiving an acceptance of the authentication request from the computing device of the trusted contact: causing display, by the server system on the computing device of the trusted contact, of the set of video recordings retrieved from the database together with a set of authentication evaluation requests to evaluate an authenticity of each one of the set of video recordings; receiving, by the server system from the computing device of the trusted contact, a set of authentication evaluation responses, each one of the set of authentication evaluation responses responsive to an authentication evaluation request of the set of authentication evaluation requests; and determining, by the server system, an authentication result based on the set of authentication evaluation responses (see Headley column 19 lines 37-51 i.e. Method 600 also comprises a step 640 of sending to the second user at least a portion of the biometric response of the first user, or at least a portion of the biometric template of the first user. In those embodiments in which only the biometric template or the biometric response is associated with the user ID, step 640 reduces to sending at least a portion of whichever biometric sample was associated with the user ID. It will be appreciated that for certain purposes either of the biometric response or the biometric template may be more relevant. For example, to verify that a user is not an imposter, the biometric response from the most recent login event would be more relevant than a biometric template recorded when the an account was first established. Steps 630 and 640 can be performed by the inter-user authentication logic 430 (FIG. 4) in some embodiments).

Headley does not disclose receiving, by the server system, video data, the video data including a set of video recordings of the subject performing the set of actions. Steelberg teaches receiving, by the server system, video data, the video data including a set of video recordings of the subject performing the set of actions (see paragraphs 0033-0034 i.e. At subprocess 115, once the user face is verified, the CMFA system can request the user to perform an action via instructions delivered aurally or visually. The instructions can request the user to repeat a sentence being aurally or visually presented. The instructions can also request the user to perform an action such as, but not limited to, holding up an object, making a certain facial expression, doing something with part of the user's body (e.g., wink, smile, look to the left) while the live video stream is active … At subprocess 120, the CMFA system can analyze, using an image or object classification neural network, the video data portion of the live multi-media stream (or video only data stream) to determine whether the user has performed the requested action such as to smile, look to the left, pick up an object, etc. The CMFA system can also analyze, using an audio classification neural network, the audio data portion of the live multi-media stream (or audio only data stream) to determine whether the user has read the requested words or sentence. For example, subprocess 120 can display one or more words on the user's display and instruct the user to read the one or more words. Subprocess 120 can also instruct the user by playing an audio through the user's device. Alternatively, subprocess 120 can instruct the user using both aural and visual presentation methods. For instance, subprocess 120 can instruct the user aurally to repeat the sentence "hello word, my name is Joe Smith" and/or display the sentence on the user's screen and instruct the user to read it out loud into the microphone).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Steelberg to have used video data when performing a set of actions as a way to authenticate the user (see Steelberg paragraph 0031). Therefore one would have been motivated to have a live data stream such as a multi-media stream, a video only stream, or an audio only stream as a way to authenticate the user.

With respect to claim 3, Headley teaches a computer-implemented method for authentication according to claim 1, but does not disclose wherein the receiving video data including the set of video recordings of the subject and the causing display of the set of video recordings on the device of the trusted contact is performed substantially in real-time. Steelberg teaches wherein the receiving video data including the set of video recordings of the subject and the causing display of the set of video recordings on the device of the trusted contact is performed substantially in real-time (see Steelberg paragraph 0014 i.e. [t]he user to enable a real-time stream of data from the user's device; analyze the real-time stream of data from the user's device to verify the user's identity). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Steelberg to have used video data when performing a set of actions as a way to authenticate the user (see Steelberg paragraph 0031).
Therefore one would have been motivated to have a live data stream such as a multi-media stream, a video only stream, or an audio only stream as a way to authenticate the user.

With respect to claim 4, Headley teaches a computer-implemented method for authentication according to claim 1, wherein the server system causes the prompts to the subject to perform a set of actions to be displayed on a computing device of the subject (see Headley figure 2 and column 9 line 49 - column 10 line 18 i.e. The user ID is further associated with a plurality of prompts in step 230. The prompts can include common prompts such as "Say your mother's maiden name," and "Sign your name on the signature pad." In some embodiments, the user selects some or all of the plurality of prompts from a list of predefined prompts such as the common prompts noted above. The prompts selected by the user are then associated with the user ID. In other embodiments, a plurality of predefined prompts is automatically assigned to the user. In some embodiments, still other prompts that can be associated with the user ID are personalized prompts).

With respect to claim 11, Headley teaches a system for authentication of a subject by a trusted contact, the system comprising: a server system including a processor, the server system coupled to a database, the server system further coupled to a computing device of the trusted contact over a network; wherein the processor is configured to:

cause prompts to the subject to perform a set of actions, each action in the set of actions selected from the group consisting of stating a current date, stating a current time, stating a name of the subject, stating a current location of the subject, stating information known to the subject and the trusted contact, stating a phrase, performing a hand gesture, performing a head or body movement, and combinations thereof (see Headley figure 2 and column 9 line 49 - column 10 line 18 i.e. The user ID is further associated with a plurality of prompts in step 230. The prompts can include common prompts such as "Say your mother's maiden name," and "Sign your name on the signature pad." In some embodiments, the user selects some or all of the plurality of prompts from a list of predefined prompts such as the common prompts noted above. The prompts selected by the user are then associated with the user ID. In other embodiments, a plurality of predefined prompts is automatically assigned to the user. In some embodiments, still other prompts that can be associated with the user ID are personalized prompts);

receive data, the data including a set of video recordings of the subject performing the set of actions (see Headley figure 2 and column 10 lines 19-34 i.e. In step 240 each of the plurality of prompts is associated with a biometric template of the enrollee user. For example, where the prompt is an instruction to say some word or phrase, the biometric template can be a voice template derived from the user saying the word or phrase. Here, associating the prompt with the biometric template can include providing the prompt to the user and receiving audio data (e.g., a .wav file) of the user's response);

store the set of recordings in a database (see Headley column 10 lines 35-47 i.e. Both biometric templates and prompts can be stored in association with the user ID in a database, for example);

transmit to a computing device of the trusted contact, an authentication request (see Headley column 19 lines 11-25 i.e. Method 600 also comprises a step 620 of providing a prompt to the first user and storing a biometric response of the first user thereto in association with the user ID. Step 620 can comprise the steps 340 and 350 of method 300 (FIG. 3), in some embodiments. It will be appreciated that various embodiments of method 600 can include some or all of the other steps of methods 200 and 300. Each user of the social network that follows the steps 210, 610, and 620 provides the social network with a user ID associated with two biometric samples, one recorded as a template, the other provided in response to a prompt, for example, while logging into the social network to access an account) that the trusted contact authenticate the subject to allow the subject or another authorized individual access to protected material (see Headley column 5 lines 52-63 i.e. Within the context of a social network, the invention can also allow one user to authenticate a certified user to help the first user assess the certified user's trustworthiness before accepting messages, communications, content, or the like. For example, stored biometric templates or subsequent biometric responses to prompts made by the certified user can be viewed, in whole or in part, by other users before accepting contact or content form the certified user. For instance, when a user (certified or not) receives an invitation to join a group or share photographs and column 20 lines 7-15 i.e. An optional step 740 further comprises restricting some users to communicate only with certified users. This can comprise, for example, restricting those users to communicate only with certified users that match a criterion like a gender or an age or age range. Step 740 can be implemented, for instance, in the context of parental controls so that a child is restricted to communicating with, and exchanging content with, only those other users that are certified to be children below a certain age or within a specified range of ages);

in response to the server system receiving an acceptance of the authentication request from the computing device of the trusted contact: cause display, on the computing device of the trusted contact, of the set of video recordings retrieved from the database together with a set of authentication evaluation requests to evaluate an authenticity of each one of the set of video recordings; receive from the computing device of the trusted contact, a set of authentication evaluation responses, each one of the set of authentication evaluation responses responsive to an authentication evaluation request of the set of authentication evaluation requests; and determine an authentication result based on the set of authentication evaluation responses (see Headley column 19 lines 37-51 i.e. Method 600 also comprises a step 640 of sending to the second user at least a portion of the biometric response of the first user, or at least a portion of the biometric template of the first user. In those embodiments in which only the biometric template or the biometric response is associated with the user ID, step 640 reduces to sending at least a portion of whichever biometric sample was associated with the user ID. It will be appreciated that for certain purposes either of the biometric response or the biometric template may be more relevant. For example, to verify that a user is not an imposter, the biometric response from the most recent login event would be more relevant than a biometric template recorded when the an account was first established. Steps 630 and 640 can be performed by the inter-user authentication logic 430 (FIG. 4) in some embodiments).
Headley does not disclose receiving, by the server system, video data, the video data including a set of video recordings of the subject performing the set of actions. Steelberg teaches receiving, by the server system, video data, the video data including a set of video recordings of the subject performing the set of actions (see paragraphs 0033-0034 i.e. At subprocess 115, once the user face is verified, the CMFA system can request the user to perform an action via instructions delivered aurally or visually. The instructions can request the user to repeat a sentence being aurally or visually presented. The instructions can also request the user to perform an action such as, but not limited to, holding up an object, making a certain facial expression, doing something with part of the user's body (e.g., wink, smile, look to the left) while the live video stream is active … At subprocess 120, the CMFA system can analyze, using an image or object classification neural network, the video data portion of the live multi-media stream (or video only data stream) to determine whether the user has performed the requested action such as to smile, look to the left, pick up an object, etc. The CMFA system can also analyze, using an audio classification neural network, the audio data portion of the live multi-media stream (or audio only data stream) to determine whether the user has read the requested words or sentence. For example, subprocess 120 can display one or more words on the user's display and instruct the user to read the one or more words. Subprocess 120 can also instruct the user by playing an audio through the user's device. Alternatively, subprocess 120 can instruct the user using both aural and visual presentation methods. For instance, subprocess 120 can instruct the user aurally to repeat the sentence "hello word, my name is Joe Smith" and/or display the sentence on the user's screen and instruct the user to read it out loud into the microphone).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Steelberg to have used video data when performing a set of actions as a way to authenticate the user (see Steelberg paragraph 0031). Therefore one would have been motivated to have a live data stream such as a multi-media stream, a video only stream, or an audio only stream as a way to authenticate the user.

With respect to claim 13, Headley teaches a system for authentication according to claim 11, but does not disclose wherein the processor is configured to receive video data including the set of video recordings of the subject and to cause display of the set of video recordings on the device of the trusted contact substantially in real-time. Steelberg teaches wherein the receiving video data including the set of video recordings of the subject and the causing display of the set of video recordings on the device of the trusted contact is performed substantially in real-time (see Steelberg paragraph 0014 i.e. [t]he user to enable a real-time stream of data from the user's device; analyze the real-time stream of data from the user's device to verify the user's identity). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Steelberg to have used video data when performing a set of actions as a way to authenticate the user (see Steelberg paragraph 0031). Therefore one would have been motivated to have a live data stream such as a multi-media stream, a video only stream, or an audio only stream as a way to authenticate the user.
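The "substantially in real-time" limitation of claims 3 and 13 amounts to relaying the subject's video to the trusted contact as it is received, rather than after a complete recording is uploaded. A minimal, hypothetical sketch of such a relay (names invented for illustration):

```python
# Hypothetical sketch of the claims 3/13 limitation: the server forwards
# each video chunk to the trusted contact's device as it arrives, while
# also retaining a copy in the database (claim 1's storing step).
from typing import Iterable, Iterator

def relay_stream(subject_chunks: Iterable[bytes],
                 database: list) -> Iterator[bytes]:
    """Yield each chunk for display as soon as it is received."""
    for chunk in subject_chunks:
        database.append(chunk)   # stored copy for later review
        yield chunk              # forwarded immediately to the viewer

stored: list = []
viewed = list(relay_stream([b"frame-1", b"frame-2"], stored))
print(viewed == stored)  # True: same chunks, delivered in arrival order
```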
With respect to claim 14, Headley teaches a system for authentication according to claim 11, further comprising a computing device of the subject coupled to the server system over the network, wherein the processor is configured to cause the prompts to the subject to perform a set of actions to be displayed on the computing device of the subject (see Headley figure 2 and column 9 line 49 - column 10 line 18 i.e. The user ID is further associated with a plurality of prompts in step 230. The prompts can include common prompts such as "Say your mother's maiden name," and "Sign your name on the signature pad." In some embodiments, the user selects some or all of the plurality of prompts from a list of predefined prompts such as the common prompts noted above. The prompts selected by the user are then associated with the user ID. In other embodiments, a plurality of predefined prompts is automatically assigned to the user. In some embodiments, still other prompts that can be associated with the user ID are personalized prompts).

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Headley (US 9,712,526) in view of Steelberg et al (US 2022/0269761) in view of Wyss (US 2021/0320801).

With respect to claim 2, Headley teaches a computer-implemented method for authentication according to claim 1, but does not disclose wherein determining an authentication result includes calculating, by the server system, a confidence score based on the set of authentication evaluation responses. Wyss teaches wherein determining an authentication result includes calculating, by the server system, a confidence score based on the set of authentication evaluation responses (see Wyss paragraphs 0063-0066 i.e. the method may include the step of determining a confidence score for verifying the identity of the individual. The confidence score may be based on the similarity scores associated with one or more biometric proof. The confidence score also may be based on whether the recognized value of the authentication sequence was found to match the expected value of the authentication sequence). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Wyss to have determined a confidence score for verifying the identity of the individual and granting the individual access to the computer system if the confidence score is above a predetermined threshold (see Wyss paragraph 0066). Therefore one would have been motivated to have determined a confidence score.

With respect to claim 12, Headley teaches a system for authentication according to claim 11, but does not disclose wherein determining an authentication result includes calculating a confidence score based on the set of authentication evaluation responses. Wyss teaches wherein determining an authentication result includes calculating a confidence score based on the set of authentication evaluation responses (see Wyss paragraphs 0063-0066 i.e. the method may include the step of determining a confidence score for verifying the identity of the individual. The confidence score may be based on the similarity scores associated with one or more biometric proof. The confidence score also may be based on whether the recognized value of the authentication sequence was found to match the expected value of the authentication sequence). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Wyss to have determined a confidence score for verifying the identity of the individual and granting the individual access to the computer system if the confidence score is above a predetermined threshold (see Wyss paragraph 0066). Therefore one would have been motivated to have determined a confidence score.

Claims 5-10 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Headley (US 9,712,526) in view of Steelberg et al (US 2022/0269761) in view of Irwin et al (US 2023/0350999).

With respect to claim 5, Headley teaches a computer-implemented method for authentication according to claim 4, but does not disclose wherein the server system causes the display of the prompts by use of a virtual assistant executing on the computing device of the subject. Irwin teaches wherein the server system causes the display of the prompts by use of a virtual assistant executing on the computing device of the subject (see Irwin paragraph 0062 i.e. In another example of interactive task 230, the text is not displayed, but rather the instructions are output as audio 236 (e.g. read by Siri or other virtual assistant) from one or both of root and witness client devices 108(1) and 108(2). Both of root and witness client devices 108(1) and 108(2) capture user 102 movements (e.g. facial movements) as user 102 performs interactive task 230. Root client device 108(1) captures movement data 238 that defines only movements (e.g. facial movements and/or facial expressions) detected by root client device 108(1). Using the same hardware and/or software that facially authenticates user 102, movement tracker 314 (in FIG. 3) can capture movements (e.g. head movements and facial expressions) made by user 102, such as through use of the IR projector/scanner 218 and/or camera 216). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to output the instructions for the interactive task as one of many ways the instruction could be presented to the user (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.
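The confidence-score determination recited in claims 2 and 12, with the threshold comparison the examiner cites from Wyss paragraph 0066, might be sketched as follows. The equal weighting of the trusted contact's responses is an assumption for illustration, not taken from the claims or either reference:

```python
# Illustrative confidence score over the trusted contact's evaluation
# responses (claims 2 and 12), compared against a predetermined threshold
# as in the cited Wyss passage. Equal weighting is an assumption.
def confidence_score(responses: list) -> float:
    """Fraction of the video recordings judged authentic."""
    return sum(responses) / len(responses) if responses else 0.0

def grant_access(responses: list, threshold: float = 0.8) -> bool:
    """Grant access only when the confidence score meets the threshold."""
    return confidence_score(responses) >= threshold

print(grant_access([True, True, True, False]))  # False (score 0.75)
print(grant_access([True, True, True, True]))   # True (score 1.0)
```

A weighted variant (e.g. weighting harder-to-forge actions more heavily) would fit the same claim language, since the claims only require that the score be "based on the set of authentication evaluation responses."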
With respect to claim 6 Headley teaches a computer-implemented method for authentication according to claim 1, but does not disclose wherein the server system transmits the authentication request to a virtual assistant executing on the computing device of the trusted contact. Irwin teaches wherein the server system transmits the authentication request to a virtual assistant executing on the computing device of the trusted contact (see Irwin paragraph 0061-0062 i.e. Root and witness client devices 108(1) and 108(2) can then cooperate to interact with user 102 and provide witnessed authentication to authentication server 104. In a first step, one or both of root and witness client devices 108(1) and 108(2) can generate interactive task 230 based upon task code 232 received in messages 226 and 228 and paragraph 0058). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to output the instructions for the interactive task as one of many ways the instruction could be presented to the witness and user device (see paragraph 0062). Therefore one would have been motivated to have used an virtual assistant to output the instructions for the interactive task. With respect to claim 7 Headley teaches a computer-implemented method for authentication according to claim 6, but does not disclose further comprising prompting, by the virtual assistant executing on the computing device of the trusted contact, the trusted contact to accept the evaluation request. Irwin teaches further comprising prompting, by the virtual assistant executing on the computing device of the trusted contact, the trusted contact to accept the evaluation request (see Irwin paragraph 0062 i.e. In another example of interactive task 230, the text is not displayed, but rather the instructions are output as audio 236 (e.g. 
read by Siri or other virtual assistant) from one or both of root and witness client devices 108(1) and 108(2). Both of root and witness client devices 108(1) and 108(2) capture user 102 movements (e.g. facial movements) as user 102 performs interactive task 230. Root client device 108(1) captures movement data 238 that defines only movements (e.g. facial movements and/or facial expressions) detected by root client device 108(1). Using the same hardware and/or software that facially authenticates user 102, movement tracker 314 (in FIG. 3) can capture movements (e.g. head movements and facial expressions) made by user 102, such as through use of the IR projector/scanner 218 and/or camera 216). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to output the instructions for the interactive task as one of many ways the instructions could be presented to the witness and user device (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.

With respect to claim 8, Headley teaches a computer-implemented method for authentication according to claim 1, but does not disclose wherein the server system causes the display of the video recordings retrieved from the database on the computing device of the trusted contact by use of a virtual assistant executing on the computing device of the trusted contact. Irwin teaches wherein the server system causes the display of the video recordings retrieved from the database on the computing device of the trusted contact by use of a virtual assistant executing on the computing device of the trusted contact (see Irwin paragraph 0062 i.e. In another example of interactive task 230, the text is not displayed, but rather the instructions are output as audio 236 (e.g.
read by Siri or other virtual assistant) from one or both of root and witness client devices 108(1) and 108(2). Both of root and witness client devices 108(1) and 108(2) capture user 102 movements (e.g. facial movements) as user 102 performs interactive task 230. Root client device 108(1) captures movement data 238 that defines only movements (e.g. facial movements and/or facial expressions) detected by root client device 108(1). Using the same hardware and/or software that facially authenticates user 102, movement tracker 314 (in FIG. 3) can capture movements (e.g. head movements and facial expressions) made by user 102, such as through use of the IR projector/scanner 218 and/or camera 216). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to input/output the instructions/results for the interactive task as one of many ways the instructions could be presented to the witness and user device (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.

With respect to claim 9, Headley teaches a computer-implemented method for authentication according to claim 1, but does not disclose wherein the server system causes the display of the set of authentication evaluation requests by use of a virtual assistant executing on the computing device of the trusted contact. Irwin teaches wherein the server system causes the display of the set of authentication evaluation requests by use of a virtual assistant executing on the computing device of the trusted contact (see Irwin paragraph 0062 i.e. In another example of interactive task 230, the text is not displayed, but rather the instructions are output as audio 236 (e.g. read by Siri or other virtual assistant) from one or both of root and witness client devices 108(1) and 108(2).
Both of root and witness client devices 108(1) and 108(2) capture user 102 movements (e.g. facial movements) as user 102 performs interactive task 230. Root client device 108(1) captures movement data 238 that defines only movements (e.g. facial movements and/or facial expressions) detected by root client device 108(1). Using the same hardware and/or software that facially authenticates user 102, movement tracker 314 (in FIG. 3) can capture movements (e.g. head movements and facial expressions) made by user 102, such as through use of the IR projector/scanner 218 and/or camera 216). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to input/output the instructions/results for the interactive task as one of many ways the instructions could be presented to the witness and user device (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.

With respect to claim 10, Headley teaches a computer-implemented method for authentication according to claim 1, but does not disclose wherein the server system receives the set of authentication evaluation responses from a virtual assistant executing on the computing device of the trusted contact. Irwin teaches wherein the server system receives the set of authentication evaluation responses from a virtual assistant executing on the computing device of the trusted contact (see Irwin paragraph 0063 i.e. When interactive task 230 is complete, root client device 108(1) can be configured to send a message 242 to authentication server 104 containing results of the one or more authentications (e.g.
user recognition routines) performed by root client device 108(1) during interactive task 230 and movement data 238, and witness client device 108(2) can be configured to send a message 244 to authentication server 104 containing movement data 240. Authentication software 212 can process messages 242 and 244 to determine authentication results 246 that indicate whether access to website 106 (or the protected resource, transaction, transfer, document, and the like to be performed and/or delivered) is granted for user 102). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to input/output the instructions/results for the interactive task as one of many ways the instructions could be presented to the witness and user device (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.

With respect to claim 15, Headley teaches a system for authentication according to claim 14, but does not disclose further comprising a virtual assistant executing on the computing device of the subject, wherein the processor is configured to cause the display of the prompts by use of the virtual assistant executing on the computing device of the subject. Irwin teaches further comprising a virtual assistant executing on the computing device of the subject, wherein the processor is configured to cause the display of the prompts by use of the virtual assistant executing on the computing device of the subject (see Irwin paragraph 0062 i.e. In another example of interactive task 230, the text is not displayed, but rather the instructions are output as audio 236 (e.g. read by Siri or other virtual assistant) from one or both of root and witness client devices 108(1) and 108(2). Both of root and witness client devices 108(1) and 108(2) capture user 102 movements (e.g.
facial movements) as user 102 performs interactive task 230. Root client device 108(1) captures movement data 238 that defines only movements (e.g. facial movements and/or facial expressions) detected by root client device 108(1). Using the same hardware and/or software that facially authenticates user 102, movement tracker 314 (in FIG. 3) can capture movements (e.g. head movements and facial expressions) made by user 102, such as through use of the IR projector/scanner 218 and/or camera 216). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to output the instructions for the interactive task as one of many ways the instructions could be presented to the user (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.

With respect to claim 16, Headley teaches a system for authentication according to claim 11, but does not disclose further comprising a virtual assistant executing on the computing device of the trusted contact, wherein the processor is configured to transmit the authentication request to the virtual assistant executing on the computing device of the trusted contact. Irwin teaches further comprising a virtual assistant executing on the computing device of the trusted contact, wherein the processor is configured to transmit the authentication request to the virtual assistant executing on the computing device of the trusted contact (see Irwin paragraphs 0061-0062 i.e. Root and witness client devices 108(1) and 108(2) can then cooperate to interact with user 102 and provide witnessed authentication to authentication server 104. In a first step, one or both of root and witness client devices 108(1) and 108(2) can generate interactive task 230 based upon task code 232 received in messages 226 and 228 and paragraph 0058).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to output the instructions for the interactive task as one of many ways the instructions could be presented to the witness and user device (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.

With respect to claim 17, Headley teaches a system for authentication according to claim 16, but does not disclose wherein the processor is further configured to cause a prompt, by the virtual assistant executing on the computing device of the trusted contact, to the trusted contact to accept the evaluation request. Irwin teaches wherein the processor is further configured to cause a prompt, by the virtual assistant executing on the computing device of the trusted contact, to the trusted contact to accept the evaluation request (see Irwin paragraph 0062 i.e. In another example of interactive task 230, the text is not displayed, but rather the instructions are output as audio 236 (e.g. read by Siri or other virtual assistant) from one or both of root and witness client devices 108(1) and 108(2). Both of root and witness client devices 108(1) and 108(2) capture user 102 movements (e.g. facial movements) as user 102 performs interactive task 230. Root client device 108(1) captures movement data 238 that defines only movements (e.g. facial movements and/or facial expressions) detected by root client device 108(1). Using the same hardware and/or software that facially authenticates user 102, movement tracker 314 (in FIG. 3) can capture movements (e.g. head movements and facial expressions) made by user 102, such as through use of the IR projector/scanner 218 and/or camera 216).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to output the instructions for the interactive task as one of many ways the instructions could be presented to the witness and user device (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.

With respect to claim 18, Headley teaches a system for authentication according to claim 11, but does not disclose further comprising a virtual assistant executing on the computing device of the trusted contact, wherein the processor is configured to cause the display of the video recordings retrieved from the database by use of the virtual assistant executing on the computing device of the trusted contact. Irwin teaches further comprising a virtual assistant executing on the computing device of the trusted contact, wherein the processor is configured to cause the display of the video recordings retrieved from the database by use of the virtual assistant executing on the computing device of the trusted contact (see Irwin paragraph 0062 i.e. In another example of interactive task 230, the text is not displayed, but rather the instructions are output as audio 236 (e.g. read by Siri or other virtual assistant) from one or both of root and witness client devices 108(1) and 108(2). Both of root and witness client devices 108(1) and 108(2) capture user 102 movements (e.g. facial movements) as user 102 performs interactive task 230. Root client device 108(1) captures movement data 238 that defines only movements (e.g. facial movements and/or facial expressions) detected by root client device 108(1). Using the same hardware and/or software that facially authenticates user 102, movement tracker 314 (in FIG. 3) can capture movements (e.g.
head movements and facial expressions) made by user 102, such as through use of the IR projector/scanner 218 and/or camera 216). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to input/output the instructions/results for the interactive task as one of many ways the instructions could be presented to the witness and user device (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.

With respect to claim 19, Headley teaches a system for authentication according to claim 11, but does not disclose further comprising a virtual assistant executing on the computing device of the trusted contact, wherein the processor is configured to cause the display of the set of authentication evaluation requests by use of the virtual assistant executing on the computing device of the trusted contact. Irwin teaches further comprising a virtual assistant executing on the computing device of the trusted contact, wherein the processor is configured to cause the display of the set of authentication evaluation requests by use of the virtual assistant executing on the computing device of the trusted contact (see Irwin paragraph 0062 i.e. In another example of interactive task 230, the text is not displayed, but rather the instructions are output as audio 236 (e.g. read by Siri or other virtual assistant) from one or both of root and witness client devices 108(1) and 108(2). Both of root and witness client devices 108(1) and 108(2) capture user 102 movements (e.g. facial movements) as user 102 performs interactive task 230. Root client device 108(1) captures movement data 238 that defines only movements (e.g. facial movements and/or facial expressions) detected by root client device 108(1).
Using the same hardware and/or software that facially authenticates user 102, movement tracker 314 (in FIG. 3) can capture movements (e.g. head movements and facial expressions) made by user 102, such as through use of the IR projector/scanner 218 and/or camera 216). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to input/output the instructions/results for the interactive task as one of many ways the instructions could be presented to the witness and user device (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.

With respect to claim 20, Headley teaches a system for authentication according to claim 11, but does not disclose further comprising a virtual assistant executing on the computing device of the trusted contact, wherein the processor is configured to receive the set of authentication evaluation responses from the virtual assistant executing on the computing device of the trusted contact. Irwin teaches further comprising a virtual assistant executing on the computing device of the trusted contact, wherein the processor is configured to receive the set of authentication evaluation responses from the virtual assistant executing on the computing device of the trusted contact (see Irwin paragraph 0062 i.e. In another example of interactive task 230, the text is not displayed, but rather the instructions are output as audio 236 (e.g. read by Siri or other virtual assistant) from one or both of root and witness client devices 108(1) and 108(2). Both of root and witness client devices 108(1) and 108(2) capture user 102 movements (e.g. facial movements) as user 102 performs interactive task 230. Root client device 108(1) captures movement data 238 that defines only movements (e.g.
facial movements and/or facial expressions) detected by root client device 108(1). Using the same hardware and/or software that facially authenticates user 102, movement tracker 314 (in FIG. 3) can capture movements (e.g. head movements and facial expressions) made by user 102, such as through use of the IR projector/scanner 218 and/or camera 216). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Headley in view of Irwin to have used a virtual assistant to input/output the instructions/results for the interactive task as one of many ways the instructions could be presented to the witness and user device (see paragraph 0062). Therefore one would have been motivated to have used a virtual assistant to output the instructions for the interactive task.

Prior Art

Gibson et al (US 8,832,788) titled "Automated Human Assisted Authentication".

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN E ALMEIDA whose telephone number is (571)270-1018. The examiner can normally be reached on Monday-Thursday from 7:30 A.M. to 5:00 P.M. The examiner can also be reached on alternate Fridays from 7:30 A.M. to 4:00 P.M. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rupal Dharia, can be reached at 571-272-3880. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/DEVIN E ALMEIDA/
Examiner, Art Unit 2492

Prosecution Timeline

Sep 08, 2022
Application Filed
Feb 19, 2025
Non-Final Rejection — §103
May 20, 2025
Response Filed
Aug 21, 2025
Final Rejection — §103
Nov 21, 2025
Request for Continued Examination
Dec 05, 2025
Response after Non-Final Action
Dec 31, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580763
USE OF TENSILE SPHERES FOR EXTENDED SYMMETRIC CRYPTOGRAPHY
2y 5m to grant Granted Mar 17, 2026
Patent 12562886
Fast Polynomial Evaluation Under Fully Homomorphic Encryption by Products of Differences from Roots Using Rotations
2y 5m to grant Granted Feb 24, 2026
Patent 12556512
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR AUTOMATIC CATEGORY 1 MESSAGE FILTERING RULES CONFIGURATION BY LEARNING TOPOLOGY INFORMATION FROM NETWORK FUNCTION (NF) REPOSITORY FUNCTION (NRF)
2y 5m to grant Granted Feb 17, 2026
Patent 12556393
SYSTEMS AND METHODS FOR REAL-TIME TRACEABILITY USING AN OBFUSCATION ARCHITECTURE
2y 5m to grant Granted Feb 17, 2026
Patent 12542682
AUTHENTICATING PACKAGED PRODUCTS
2y 5m to grant Granted Feb 03, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
71%
Grant Probability
82%
With Interview (+11.4%)
3y 9m
Median Time to Grant
High
PTA Risk
Based on 592 resolved cases by this examiner. Grant probability derived from career allow rate.
