Prosecution Insights
Last updated: April 19, 2026
Application No. 17/575,163

USER IDENTITY AUTHENTICATION USING VIRTUAL REALITY

Non-Final OA §103
Filed: Jan 13, 2022
Examiner: LI, MENG
Art Unit: 2437
Tech Center: 2400 — Computer Networks
Assignee: Advanced New Technologies Co. Ltd.
OA Round: 8 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 8-9
Time to Grant: 2y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career allow rate: 87% (490 granted / 561 resolved), +29.3% vs TC avg, above average
Interview lift: +17.8% higher allowance among resolved cases with an interview
Typical timeline: 2y 4m average prosecution; 25 applications currently pending
Career history: 586 total applications across all art units

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 561 resolved cases
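As a quick sanity check (editor's sketch, assuming the "vs TC avg" figures are simple percentage-point differences between this examiner's rate and the Tech Center average), the headline numbers above are mutually consistent: 490 grants out of 561 resolved cases yields the 87% career allow rate, and every statute row implies the same 40.0% Tech Center baseline.

```python
# Recompute the dashboard's headline figures from its raw counts.
granted, resolved = 490, 561
career_allow_rate = round(100 * granted / resolved, 1)  # percent

# Statute-specific rates and their stated deltas vs. the Tech Center average.
statute_rows = {
    "101": (11.5, -28.5),
    "102": (6.5, -33.5),
    "103": (47.9, +7.9),
    "112": (20.1, -19.9),
}
# Assuming each delta is (examiner rate - TC average), recover the implied baseline.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in statute_rows.items()}

print(career_allow_rate)  # 87.3, displayed as 87%
print(implied_tc_avg)     # every statute implies the same 40.0% baseline
```

That all four deltas point at one 40.0% baseline suggests the deltas really are simple differences against a single Tech Center figure.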

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/11/2025 has been entered.

Response to Amendment

The Amendment filed on 11/19/2025 has been entered. Claims 21, 28 and 35 are amended. Claims 21-40 are pending, of which claims 21, 28 and 35 are independent claims.

Response to Arguments

The applicant's arguments filed on 11/19/2025 have been fully considered, but the arguments are essentially directed towards the newly introduced limitations, and they are addressed in this Office Action, below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-24, 26-31, 33-38 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over US Pub. No. US 2014/0125574 A1 to Scavezze (hereinafter, “Scavezze”) in view of Anand (Pub. No.: US 2017/0364920) and US Pub. No. US 2016/0342782 A1 to Mullins (hereinafter, “Mullins”), as disclosed in the IDS submitted on 01/20/2022, in further view of US Pub. No. US 2015/0352437 A1 to Koseki (hereinafter, “Koseki”).

As per claims 21, 28 and 35, Scavezze teaches a computer-implemented method, a system and one or more non-transitory computer storage media encoded with computer program instructions that when executed by one or more computers cause the one or more computers to perform operations (Scavezze, para. [0056] “The logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic subsystem may be single-core or multi-core, and the programs executed thereon may be configured for sequential, parallel or distributed processing. The logic subsystem may optionally include individual components that are distributed among two or more devices, which can be remotely located and/or configured for coordinated processing.
Aspects of the logic subsystem may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.” See Figure 7.), respectively, comprising: performing a bioassay process to determine whether the user requesting access to the service interface is a real person (Scavezze, para. [0017] “Selection of an augmented reality feature may be detected and determined based on user 106 performing a movement or command associated with that augmented reality feature. The movement may include the user simply looking at or moving toward the augmented reality feature. Further, in some embodiments, the movement may also include the user looking at the augmented reality feature for a predetermined amount of time, the user looking at the augmented reality feature while performing a specific movement or issuing a specific audio command, and/or other suitable mechanisms for indicating selection of the augmented reality feature.” AND para. [0048] “user authentication inputs may also be entered using blink detection, object recognition (e.g., recognition of a specific object, such as a poster, building, piece of art, etc.), retinal scan, fingerprint detection, and/or other suitable input mechanisms.”);

Scavezze teaches all the limitations of claims 21, 28 and 35 above; however, Scavezze fails to explicitly teach, but Anand teaches: receiving a request by a user to access a service interface by performing interactive operations with one or more virtual elements in a virtual reality (VR) scenario of a VR application; determining that the interactive operations with the one or more virtual elements match one or more predetermined gestures (Anand - [0055]: the user 102 may gesture or instruct the avatar through various inputs to provide a unique identifier, such as a barcode, within the virtual reality environment 108 to a representation of an access point or point of sale device 122 associated with purchasing the movie ticket 106 to
initiate a transaction … the user 102 may initiate the transaction for the movie ticket 106 by directing or gesturing 118 their avatar within the virtual reality environment 108 to access the movie ticket 106); in response to determining that the interactive operations with the one or more virtual elements match the one or more predetermined gestures, determining to trigger a biometric authentication process (Anand - [0055]: the virtual reality hardware 104 may obtain a biometric sample of the user 102 in response to receiving an indication that an avatar of the user 102 has initiated a transaction within the virtual reality environment 108. [0056]: an avatar of the user 102 may be prompted to provide personal authentication information within the virtual reality environment in response to initiating the transaction for movie ticket 106); after triggering the biometric authentication process, performing the biometric authentication process (Anand - [0056]: The virtual reality hardware 104 may provide the partial biometric template and personal authentication information to the authentication computer 114 via the communication networks 112).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Anand into Scavezze, in which a user selection gesture is matched to determine the item to grab, and to trigger a biometric authentication afterwards.

However, the combination of Scavezze and Anand does not explicitly teach, but Mullins discloses, initiating a biometric authentication process comprising: presenting a virtual guidance in the VR scenario for guiding the user to perform biometric authentication using a biometric sensor that identifies one or more biometric characteristics of the user, wherein the one or more biometric characteristics comprise one or more of a fingerprint, iris, or sclera, and (Mullins, para.
[0083] “The HMD 101 generates instructions that are displayed in the display 204 of the HMD 101. The instructions may include requesting the user to stare at different virtual objects in the display 204. The instructions may be provided via audio or visual methods. For example, the user of the HMD 101 may see virtual written instructions in the display 204 or hear audio cues that instruct the user to look at different virtual objects in the display 204. In another example, when a user of the HMD 101 walks towards a restricted area, the HMD 101 may generate an audio alert notifying the user that the user be authenticated prior to entering the restricted area by looking at different virtual objects in the HMD 101.” And para. [0084] “The display 204 may be divided into different regions or portions (e.g., top, bottom, left, right, center of the display 204) so that a virtual object is displayed in each region.” And para. [0086] At operation 608, the HMD 101 identifies and authenticates a user based on the pictures of the iris or retina of the user by comparing the structure of the iris or the blood vessels in the retina with a database of images of iris structures or retina blood vessels. In one example embodiment, operation 608 the HMD 101 may be implemented with the biometric authentication application 216.”); and determining that the biometric authentication process is successful (Mullins, para. 
[0027] “Once the biometric data are recorded for the different locations, the HMD compares the biometric data of the user for each location of the virtual objects against reference biometric data of the user for the corresponding locations of the virtual objects to authenticate the user … The user of the HMD is authenticated if at least one of the first and second reference biometric data matches the recorded first and second set of biometric data”); and in response to determining that the biometric authentication process is successful, presenting the service interface within the VR application (Mullins, para. [0101] “Once the HMD 101 receives confirmation of the authentication of the user … The HMD 101 provides the authenticated user with access to AR content corresponding to the access privilege of the user at operation 814”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Mullins’ biometric authentication into Scavezze and Anand’s user authentication on a display device, with a motivation to determine biometric data unique to a user and provide access to the virtual content based on an authentication of the user (Mullins, para. [0066]).

The combination of Scavezze, Anand and Mullins teaches all the limitations of claims 21, 28 and 35 above; however, the combination fails to explicitly teach, but Koseki teaches: the virtual guidance indicates a location of the biometric sensor for biometric authentication of the user's identity (Koseki, para. [0061] “causing the computer to display an object controller as the first guide display…” [=virtual guidance], para. [0064] “…causing the computer to display an object that represents a direction from the virtual position of the HMD toward the virtual position of the controller…” The virtual guidance is the graphical representation of the real-world object; in the case of Koseki, it is the controller.
One of ordinary skill in the art can substitute the controller for a biometric sensor or any other switch), wherein the virtual guidance comprises a dynamic mark pointing to a direction of the biometric sensor, wherein presenting the virtual guidance comprises dynamically adjusting a pointing direction of the dynamic mark until the user successfully performs the biometric authentication using the biometric sensor (Koseki, para. [0072]: the direction of the first guide display may be changed corresponding to the distance. Therefore, the user can more easily determine the direction and the degree of stretch required to reach the controller. para. [0218]: The game device main body 1002 changes the display state of the direction guide object 13 corresponding to the distance to the left hand object 14 or the right hand object 16, whichever is closer to the virtual stereo camera 10. For example, the game device main body 1002 changes the display color of the direction guide object 13, or changes the dimensional ratio (e.g., the length of the arrow) of the direction guide object 13. 
When the direction guide object 13 is an arrow, the number of arrows may be increased as the distance increases, and the arrows may be arranged in series toward the controller object 12);

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of representing a real-world object (such as the well-known biometric sensor for biometric authentication disclosed in the teachings of Scavezze in view of Anand and Mullins) within a virtual environment by graphically rendering the real-world object within the virtual reality display, as taught in Koseki. This yields the predictable result of dynamically guiding the user to the real-world location of the biometric sensor while the user is wearing a virtual reality headset and lacks a clear line of sight to the biometric sensor, enabling the user to use the biometric sensor, and to remain immersed in the virtual reality display, without the discomfort of having to remove the headset.

As per claims 22, 29 and 36, the combination of Scavezze, Anand, Mullins and Koseki teaches the method of claim 22, the system of claim 29, and the one or more computer storage media of claim 36, respectively, wherein performing the bioassay process comprises detecting, by a VR device worn by the user, eye-blinking by the user (Scavezze, para. [0040] “The set of authentication inputs may be selected from a plurality of recognized authentication inputs for the user, as indicated at 604, wherein the plurality of recognized inputs correspond to inputs made via different sensors. As a more specific example, the plurality of recognized inputs may include eye-tracking inputs, head tracking inputs, arm gesture inputs, and/or user biometric information (e.g.
a user's interpupil distance (IPD), gait, height, etc.), and the set received may include eye-tracking input data followed by head motion input data.” And para. [0048] “user authentication inputs may also be entered using blink detection, object recognition (e.g., recognition of a specific object, such as a poster, building, piece of art, etc.), retinal scan, fingerprint detection, and/or other suitable input mechanisms.”).

As per claims 23, 30 and 37, the combination of Scavezze, Anand, Mullins and Koseki teaches the method of claim 23, the system of claim 30, and the one or more computer storage media of claim 37, respectively, wherein performing the bioassay process comprises detecting, by a VR device worn by the user, a heartbeat of the user (Mullins, para. [0059] “The ECG sensor 310 includes, for example, electrodes that measure a heart rate of the user 102. In particular, the ECG sensor 310 measures the cardiac rhythm of the user 102. A biometric algorithm is applied to the user 102 to identify and authenticate the user 102.” And para. [0094] “EEG signals from the user 102 may be recorded using the EEG sensor 308. ECG signals from the user 102 may be recorded using the ECG sensor 310. In one example embodiment, the EEG sensor 308 and the ECG sensor 310 may be implemented using a set of electrodes in contact with the head of the user 102 wearing the HMD 101. EEG/ECG signals (e.g., brain activity or heart beat) may be captured at operation 706. In one example embodiment, operation 706 may be implemented with the electrode-based module 404. The electrode-based module 404 captures EEG/ECG signals while the user 102 watches a virtual object in the display 204. Therefore, the electrode-based module 404 captures a set of EEG/ECG signals corresponding to each virtual object.”). The rationale for the combination is the same as discussed for claim 21.
As per claims 24, 31 and 38, the combination of Scavezze, Anand, Mullins and Koseki teaches the method of claim 24, the system of claim 31, and the one or more computer storage media of claim 38, respectively, wherein the virtual guidance prompts the user to enter biometric information (Mullins, para. [0083] “The HMD 101 generates instructions that are displayed in the display 204 of the HMD 101. The instructions may include requesting the user to stare at different virtual objects in the display 204. The instructions may be provided via audio or visual methods. For example, the user of the HMD 101 may see virtual written instructions in the display 204 or hear audio cues that instruct the user to look at different virtual objects in the display 204. In another example, when a user of the HMD 101 walks towards a restricted area, the HMD 101 may generate an audio alert notifying the user that the user be authenticated prior to entering the restricted area by looking at different virtual objects in the HMD 101.” And para. [0084] “At operation 604, the HMD 101 renders virtual objects (one at a time) in different locations of the display 204 and requests that the user (e.g., user 102) stare at the virtual objects for a predefined period of time (e.g., 2 seconds). For example, the virtual objects may include arrows, numbers, letters, symbols, and animated two-dimensional or three-dimensional models. The display 204 may be divided into different regions or portions (e.g., top, bottom, left, right, center of the display 204) so that a virtual object is displayed in each region.”); and the biometric authentication process further comprises authenticating the user based on the biometric information (Mullins, para. [0087] “The user 102 of the HMD 101 is identified and authenticated if the biometric data generated with the biometric authentication application 216 matches the reference biometric data in the storage device 208 or in the biometric dataset 512. 
In another example embodiment, the biometric authentication application 216 compares composite biometric data with reference composite biometric data in the storage device 208 or in the biometric dataset 512.” And para. [0088] “At operation 610, the HMD 101 provides the user 102 with access to AR content that is based on the user authentication. For example, the biometric authentication application 216 identifies and authenticates the user 102 of the HMD 101 based on biometric data related to the iris or retina of the user 102.”). The rationale for the combination is the same as discussed for claim 21.

As per claims 26, 33 and 40, the combination of Scavezze, Anand, Mullins and Koseki teaches the method of claim 26, the system of claim 33, and the one or more computer storage media of claim 40, respectively, wherein the biometric information comprises an iris scan or a voiceprint of the user (Mullins, para. [0056] “the ocular camera 306 may be a camera configured to capture an image of an iris in the eye of the user 102. In response to the amount of light entering the eye, muscles attached to the iris expand or contract the aperture at the center of the iris, known as the pupil. The expansion and contraction of the pupil depends on the amount of ambient light. The ocular camera 306 may use iris recognition as a method for biometric identification. The complex pattern on the iris of the eye of the user 102 is unique and can be used to identify the user 102. The ocular camera 306 may cast infrared light to acquire images of detailed structures of the iris of the eye of the user 102. Biometric algorithms may be applied to the image of the detailed structures of the iris to identify the user 102.”).
As per claims 27 and 34, the combination of Scavezze, Anand, Mullins and Koseki teaches wherein dynamically adjusting the pointing direction of the dynamic mark until the user successfully performs the biometric authentication using the biometric sensor comprises: sensing, using a motion sensor, a movement of the user (Koseki - para. [0145]: The left hand object 14 is moved as the player 2 stretches the left arm forward. When the left hand object 14 has reached the left selection target 24, or is situated at a given distance from the left selection target 24, the display state of the left hand object 14 is changed (see the stereoscopic image (game screen) W4 illustrated in FIG. 7A)); and dynamically adjusting the pointing direction of the dynamic mark based on the movement of the user until the user successfully performs the biometric authentication using the biometric sensor (Koseki - para. [0177]: The direction guide object control section 224 disposes the direction guide object 13 in the virtual space, and controls the position (movement) and the posture of the direction guide object 13. Specifically, the direction guide object control section 224 controls the position of the direction guide object 13 so that the direction guide object 13 is always situated at a given position within the field of view (game screen) of the HMD 1310, and controls the posture of the direction guide object 13 so that the direction guide object 13 faces in the direction of the game controller 1200 with respect to (when viewed from) the HMD 1310. See also [0218]). The rationale for the combination is the same as discussed for claim 21.

Claims 25, 32 and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Scavezze in view of Anand, Mullins and Koseki, as disclosed above, in further view of US Pub. No. US, (hereinafter, “Lundblade”).
As per claims 25, 32 and 39, the combination of Scavezze, Anand, Mullins and Koseki teaches wherein the virtual guidance is a virtual mark indicating a mounting location of a fingerprint sensor (Koseki, para. [0061] “causing the computer to display an object controller as the first guide display…” [=virtual guidance], para. [0064] “…causing the computer to display an object that represents a direction from the virtual position of the HMD toward the virtual position of the controller…” and para. [0203] “The virtual space control data 630 is data that represents the virtual three-dimensional space (virtual space) that forms the game space, and includes data that manages the placement of the player assistance display object. As illustrated in FIG. 11, the virtual space control data 630 includes virtual stereo camera control data 602, controller object control data 604, direction guide object control data 606, left hand object control data 608, right hand object control data 610, guide character control data 612, guide display panel control data 614, and selection target control data 616” The virtual guidance is the graphical representation of the real-world object; in the case of Koseki, it is the controller. One of ordinary skill in the art can substitute the controller for a biometric or fingerprint sensor).

However, Scavezze as modified fails to explicitly teach, but Lundblade discloses: wherein the biometric information comprises a fingerprint authentication and wherein the virtual guidance comprises a virtual mark indicating a mounting location of a fingerprint sensor for fingerprint authentication of the user's identity (Lundblade, para. [0041] “an explicit user input authentication to “unlock” the mobile device may occur by the mobile device 100 prompting the user to enter a password, scan a finger via a fingerprint sensor 152, or speak a voice print to be heard by microphone 165.
Based upon an authorization by the policy engine 210 of the user input, the mobile device 100 may be “unlocked”.” And para. [0052] “In the suspicious state 530, an indicator may be displayed on the display (e.g., a status bar icon) so that the user knows that a suspicious state has been entered and the mobile device may lock soon. Based upon this, a user may be encouraged to touch the touch screen display 120 or a dedicated fingerprint sensor to provide a fingerprint, or to look into camera 170 for a facial recognition or iris recognition process, to ensure quick authentication out of the suspicious state 530. This of course is easier for the user than getting out of a locked state 510 (e.g., the device is locked) and these types of facial and finger recognition do not have to be of such high quality as would be typically used to get out of a locked state 510 scenario.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Scavezze, Anand, Mullins and Koseki with Lundblade by substituting Koseki’s virtual guidance with one that represents a virtual mark showing the mounting location of a fingerprint sensor, to obtain the predictable result of assisting the user with the correct placement of the fingerprint on the fingerprint sensor, as the user is otherwise unable to see the correct location on the biometric sensor.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MENG LI whose telephone number is (571)272-8729. The examiner can normally be reached on M-F 8:30-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s acting supervisor, Alexander Lagor, can be reached on (571) 270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MENG LI/
Primary Examiner, Art Unit 2437
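For orientation only, the dynamic-guidance limitation at the heart of the Koseki combination (a mark that is re-aimed from the headset toward the real-world biometric sensor on each sensed user movement, until the scan succeeds) can be sketched roughly as below. This is the editor's illustration, not code from the application or any cited reference; every name (`guidance_direction`, `authenticate_with_guidance`, the commented-out `render_arrow` call) is hypothetical.

```python
import math

def guidance_direction(hmd_pos, sensor_pos):
    """Unit vector from the headset to the sensor (the mark's pointing direction)."""
    dx, dy, dz = (s - h for s, h in zip(sensor_pos, hmd_pos))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist == 0.0:
        return (0.0, 0.0, 0.0), 0.0
    return (dx / dist, dy / dist, dz / dist), dist

def authenticate_with_guidance(hmd_poses, sensor_pos, within_reach=0.1):
    """Re-aim the mark on each tracked head movement; succeed once the user
    is close enough to the sensor to complete the scan (simplified stand-in
    for an actual fingerprint/iris read)."""
    for pos in hmd_poses:  # each new pose = a sensed user movement
        direction, dist = guidance_direction(pos, sensor_pos)
        # render_arrow(direction, emphasis=dist)  # e.g. longer arrow when farther away
        if dist <= within_reach:
            return True
    return False
```

The per-distance emphasis in the commented render call mirrors Koseki's teaching that the guide's color or length changes with distance to the target.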

Prosecution Timeline

Jan 13, 2022: Application Filed
Mar 01, 2022: Response after Non-Final Action
Jun 13, 2023: Non-Final Rejection — §103
Sep 19, 2023: Examiner Interview Summary
Sep 19, 2023: Applicant Interview (Telephonic)
Sep 21, 2023: Response Filed
Oct 06, 2023: Final Rejection — §103
Dec 11, 2023: Examiner Interview Summary
Dec 11, 2023: Applicant Interview (Telephonic)
Dec 13, 2023: Response after Non-Final Action
Dec 20, 2023: Response after Non-Final Action
Jan 05, 2024: Request for Continued Examination
Jan 16, 2024: Response after Non-Final Action
Jan 18, 2024: Non-Final Rejection — §103
Apr 24, 2024: Response Filed
Sep 09, 2024: Non-Final Rejection — §103
Dec 23, 2024: Response Filed
Mar 28, 2025: Final Rejection — §103
Jun 04, 2025: Response after Non-Final Action
Jun 18, 2025: Request for Continued Examination
Jun 22, 2025: Response after Non-Final Action
Jun 30, 2025: Non-Final Rejection — §103
Sep 03, 2025: Response Filed
Sep 17, 2025: Final Rejection — §103
Nov 19, 2025: Response after Non-Final Action
Dec 11, 2025: Request for Continued Examination
Dec 22, 2025: Response after Non-Final Action
Jan 19, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603944: Location Aware Authorization System
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598082: CRYPTOGRAPHIC METHOD TO CERTIFY RETENTION LOCK STATUS FOR OPAQUE DATA IN A BACKUP SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591691: AGENTLESS RUNTIME CYBERSECURITY ANALYSIS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585547: CRYPTOGRAPHIC METHOD TO CERTIFY RETENTION LOCK STATUS WITH AN EMBEDDED VERIFICATION LOG IN A BACKUP SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585792: CRYPTOGRAPHIC METHOD TO CERTIFY RETENTION LOCK STATUS FOR AUDITING IN A BACKUP SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 8-9
Grant Probability: 87%
With Interview: 99% (+17.8%)
Median Time to Grant: 2y 4m
PTA Risk: High
Based on 561 resolved cases by this examiner. Grant probability derived from career allow rate.
