Prosecution Insights
Last updated: April 19, 2026
Application No. 18/417,834

BIOMETRIC IMAGE LIVENESS DETECTION

Status: Non-Final OA (§103)
Filed: Jan 19, 2024
Examiner: HANSEN, CONNOR LEVI
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Secure Identity LLC
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (above average; 21 granted / 28 resolved; +13.0% vs TC avg)
Interview Lift: +29.2% among resolved cases with an interview
Typical Timeline: 2y 10m average prosecution; 32 applications currently pending
Career History: 60 total applications across all art units

Statute-Specific Performance

§101: 19.1% (-20.9% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)
Tech Center averages are estimates; based on career data from 28 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 8-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Moore et al. (US 20170264608 A1) (hereinafter Moore) in view of Annamalai et al. (US 20160078335 A1) (hereinafter Annamalai).

Regarding claim 1, Moore teaches a system for biometric image liveness detection, comprising: a server (Moore, see Fig. 6, remote or centralized authentication system 605) that: generates a series of time generated codes; and records the series of time generated codes and time windows associated with the series of time generated codes (Moore, "FIG. 2 illustrates an exemplary 2FA system 200 employing visual biometric authentication along with QR code authentication where a smartphone of the user displays a time-limited QR identifier code generated for the user based on a security key stored within the smartphone.", paragraph 0032, lines 1-5; "FIG. 6 illustrates a second exemplary operating environment including a smartphone 602 or other portable device, which is employed by the user to gain access to a local secure system or facility via a local authentication system 604 that is operating in conjunction with a remote or centralized authentication system 605… A QR code generator application (app) 616 generates a time-limited QR identifier code based on previously-stored user specific key(s) stored in a database 618 of the smartphone and based on the current date/time as tracked by a date/time tracker 620.", pg. 4, paragraph 0042, lines 1-18; "Assuming the remote system 605 recognizes the user based on the biometric indicia/markers, a QR code generator 630 (which may be similar to the corresponding QR code generator of the smartphone) generates a QR code for comparison with the one received from the local system 604.", pg. 4, paragraph 0043, lines 13-18. Both the smartphone and remote server generate corresponding time-based QR codes, which are recorded along with the current date/time and compared for user authentication.); and

a user device that includes an image sensor (Moore, "The signal is sent to the local system where an access controller 640 responds by granting access to the user, such as by presenting suitable menus on an ATM or other local access device controlled by the local authentication system 604.", paragraph 0043, lines 32-36; see Fig. 6, local authentication system 604. Note the local authentication system, including camera 622, is being interpreted as covering "user device," as embodiments describe the inclusion of a user interface, such as an ATM.) and:

receives the series of time generated codes (Moore, "At 718, the local system 704 receives or captures the QR code", paragraph 0044, lines 26-27; see Fig. 7, step 718. The local authentication system receives QR codes to facilitate communication between the smartphone and the remote server.);

captures a first image, the first image corresponding to one of the series of time generated codes whose associated time window corresponds to a current time; captures a second image of a biometric using the image sensor within a time period of the first image (Moore, "An image capture system 624 of the local authentication system 604 captures image(s) of the QR identifier code and the face of the user (or other biometric indicia).", pg. 4, paragraph 0043, lines 1-3; see Fig. 6, 718. The local authentication system captures an image of the QR code provided by the user and a biometric image of the user, such as a face image. Note the claim does not require an order between capturing the first and second images; instead, the capturing of the second image must occur within a time period of the first image. Here, both images are captured during the user authentication process, and thus the limitation is satisfied.); and

provides the first image and the second image to the server (Moore, "The image(s), the QR code and the current date/time (as tracked by date/time tracker 634) are sent via any suitable transmission connection line or media 635 to the remote authentication system 605.", pg. 4, paragraph 0043, lines 4-7; see Fig. 7, 719);

wherein the server determines that the first image and the second image were live captured upon determining that the watermark corresponds to the associated time window (Moore, "Assuming the QR code presented by the user was generated within a predetermined acceptable time window, the QR code generated by the remote authentication system 605 should match the QR code presented by the user, as verified by a QR code verifier 636. If verification is achieved, then a biometric/QR code authorization controller 638 generates a signal for authorizing the user to access the local secure system or facility controlled by the local authentication system.", pg. 4, paragraph 0043, lines 25-33; see Fig. 7, 719, 720, and 722. The QR code and biometric images are received by the remote server, which verifies that the QR code provided by the user corresponds to a current, valid time window. Because the QR code is time-limited and can only be generated during that window, and because the biometric image is associated with the same authentication attempt, the server determines that the QR code and biometric images were captured during a live/current authentication session at the local server. Based on this determination, the remote server authenticates the user and grants access.).

Moore does not teach captures a first image of a biometric using the image sensor, the first image overlaid via filter with a watermark corresponding to one of the series of time generated codes whose associated time window corresponds to a current time.
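The time-limited code mechanism the rejection maps onto the claimed "series of time generated codes" — the device and the server each independently deriving a matching code from a shared user-specific key and the current time window — can be sketched as follows. This is a minimal TOTP-style illustration; Moore does not disclose a particular algorithm, so the HMAC construction, function names, and window/skew parameters here are all assumptions:

```python
import hashlib
import hmac
import struct
import time

def time_window_code(secret, window_seconds=30, at=None):
    """Derive a short-lived code from a shared secret and the index of the
    current time window (TOTP-style sketch; construction assumed)."""
    t = int((time.time() if at is None else at) // window_seconds)
    digest = hmac.new(secret, struct.pack(">Q", t), hashlib.sha256).hexdigest()
    return digest[:8]   # truncated, e.g. for encoding as a QR payload

def verify_code(secret, presented, window_seconds=30, skew_windows=1, at=None):
    """Server side: regenerate the expected code for the current window
    (plus a small clock-skew tolerance) and compare."""
    now = time.time() if at is None else at
    return any(
        hmac.compare_digest(
            time_window_code(secret, window_seconds, now + k * window_seconds),
            presented,
        )
        for k in range(-skew_windows, skew_windows + 1)
    )
```

Within a window the two sides derive identical codes with no communication, which is why the server can treat a matching code as evidence of a current, live session rather than a replayed one.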
However, Annamalai teaches captures a first image of a biometric using the image sensor, the first image overlaid via filter with a watermark corresponding to one of the series of time generated codes whose associated time window corresponds to a current time (Annamalai, "A QR code is combined with a base image, preserving useful functionalities of both the base image and the QR code.", see abstract, lines 1-2; "At process 603, method 600 may include operations for overlaying a generated QR code image on a base image, including: inputting a base image—the image that will become the background of the resulting QR code-overlaid image; inputting the generated QR code image resulting from processes 601, 602; inputting a reference position, defined to be the pixel coordinates in the base image at which to place the upper left corner of the generated QR code image and which may default to pixel coordinates (0, 0); and inputting a size of a bounding square which has the reference position as the pixel coordinates of the bounding square's upper left corner. The generated QR code image may be placed in the bounding square as foreground.", pg. 4, paragraph 0050; see Fig. 1C).

Moore teaches capturing an image of a time-based QR code provided by a user and a biometric image (e.g., facial image) using a user device to authenticate a user (Moore, pg. 4, paragraphs 0042-0045). Moore does not teach that the QR code image contains a biometric or that the QR code is overlaid on a biometric image. Annamalai teaches combining QR codes with biometric images by overlaying QR codes on facial images to preserve both QR code readability and face recognition (see above).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the presentation of the QR code of Moore to be visually combined with a biometric image as taught by Annamalai (Annamalai, pg. 4, paragraph 0050, see Fig. 1C). The motivation for doing so would have been to further validate that the time-based QR code presented for authentication corresponds to the intended user, thereby increasing the security of the system. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Moore with Annamalai to obtain the invention as specified in claim 1.

Regarding claim 2, Moore in view of Annamalai teaches the system of claim 1, wherein the series of time generated codes specify a position of at least part of the watermark in the first image, a color of the at least part of the watermark in the first image, or a transparency of the at least part of the watermark in the first image (Annamalai, "The QR code image may be generated with the highest error correction level possible for the chosen version, with squares in the finder and alignment patterns getting rounded corners, and with regions (a), (b), (c), and (d) getting their respective transparency values.", pg. 4, paragraph 0049, lines 11-15; "…inputting a reference position, defined to be the pixel coordinates in the base image at which to place the upper left corner of the generated QR code image and which may default to pixel coordinates (0, 0); and inputting a size of a bounding square which has the reference position as the pixel coordinates of the bounding square's upper left corner. The generated QR code image may be placed in the bounding square as foreground.", pg. 4, paragraph 0050. Each QR code is specified at a particular position on the biometric image. The QR codes are further divided to specify different transparency values per region.).
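The overlay mechanics the rejection draws from Annamalai — a code image placed at a reference pixel position and alpha-composited onto a base image with a transparency value — can be sketched on plain grayscale matrices. This is a minimal illustration only; the function and parameter names are ours, not Annamalai's, and real implementations would composite per-region alpha values rather than a single one:

```python
def overlay_code(base, code, ref_row=0, ref_col=0, alpha=0.6):
    """Alpha-composite a binary code matrix onto a grayscale base image
    at a reference pixel position. alpha=1.0 makes the code fully opaque;
    alpha=0.0 leaves the base image untouched."""
    out = [row[:] for row in base]          # copy; the base image is preserved
    for r, code_row in enumerate(code):
        for c, module in enumerate(code_row):
            rr, cc = ref_row + r, ref_col + c
            if 0 <= rr < len(out) and 0 <= cc < len(out[0]):
                fg = 0 if module else 255   # dark module = 1, light module = 0
                out[rr][cc] = round(alpha * fg + (1 - alpha) * out[rr][cc])
    return out
```

Because the code modules are only partially opaque, both layers remain recoverable: the code can still be decoded while the underlying face stays usable for recognition, which is the dual-functionality point Annamalai is cited for.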
Regarding claim 3, Moore in view of Annamalai teaches the system of claim 2, wherein at least one of the position, the color, a number of dots, or the transparency changes between time generated codes of the series of time generated codes (Moore, "A QR code generator application (app) 616 generates a time-limited QR identifier code based on previously-stored user specific key(s) stored in a database 618 of the smartphone and based on the current date/time as tracked by a date/time tracker 620.", pg. 4, paragraph 0042, lines 15-19. The QR code is updated for each new time window, presenting a change in position and/or configuration of the code.).

Regarding claim 4, Moore in view of Annamalai teaches the system of claim 3, wherein the position is specified as a pixel position (Annamalai, "…inputting a reference position, defined to be the pixel coordinates in the base image at which to place the upper left corner of the generated QR code image and which may default to pixel coordinates (0, 0); and inputting a size of a bounding square which has the reference position as the pixel coordinates of the bounding square's upper left corner. The generated QR code image may be placed in the bounding square as foreground.", pg. 4, paragraph 0050).

Regarding claim 6, Moore in view of Annamalai teaches the system of claim 1, wherein the time windows are between 2 seconds and 10 minutes (Moore, "As noted above, the time window may be as short as ten to twenty seconds but it might be set to longer values in the range of minutes or hours.", pg. 7, paragraph 0055, lines 36-39).

Claim 8 corresponds to claim 1, with the addition of a non-transitory storage medium that stores instructions and a processor that executes the instructions to perform the functions according to claim 1. Moore in view of Annamalai teaches the addition of a non-transitory storage medium that stores instructions and a processor that executes the instructions (Moore, "The software may reside on machine-readable medium 806. The machine-readable medium 806 may be a non-transitory machine-readable medium.", pg. 5, paragraph 0049, lines 1-3) to perform the functions according to claim 1. As indicated in the analysis of claim 1, Moore in view of Annamalai teaches all the limitations according to claim 1. Therefore, claim 8 is rejected for the same reasons of obviousness as claim 1.

Regarding claim 9, Moore in view of Annamalai teaches the user device of claim 8, wherein the series of time generated codes are unique to the user device and a user associated with the biometric (Moore, "A QR code generator application (app) 616 generates a time-limited QR identifier code based on previously-stored user specific key(s) stored in a database 618 of the smartphone and based on the current date/time as tracked by a date/time tracker 620.", pg. 4, paragraph 0042, lines 15-19. The system updates user-specific QR codes for each new time window to generate unique codes for authentication.).

Regarding claim 10, Moore in view of Annamalai teaches the user device of claim 8, wherein the watermark comprises a number of dots (Moore, "Thus, various examples have been described with reference to FIGS. 2-7 that use QR codes as a visual identifier code.", pg. 5, paragraph 0045, lines 1-3; see Fig. 3, QR code 312. The QR code overlaid on the biometric image includes a series of dots in a square grid¹.).
Regarding claim 11, Moore in view of Annamalai teaches the user device of claim 9, wherein the series of time generated codes specify a first number of dots in the watermark that are fully transparent and a second number of dots in the watermark that are partially transparent (Annamalai, "The QR code may be generated with several different transparency values (which may reflect either or both of opacity and intensity values used with alpha compositing) for different regions of the QR code… In a second stage, process 502, the generated QR code image (e.g., QR code 101 or QR code 201, see FIGS. 1C, 2C) from process 501 may be combined with a given base image such as a face or company logo (e.g., facial image 102 or company logo 202, see FIGS. 1C, 2C) using the transparency values of the QR code image from process 501 to make the two images appear as overlaid on one another with varying degrees of transparency.", pg. 3, paragraphs 0041 and 0042, lines 1-9 and 1-8, respectively. The QR code is overlaid on the biometric image using varying degrees of transparency for different regions of the code to maintain readability.).

Regarding claim 12, Moore in view of Annamalai teaches the user device of claim 11, wherein at least one of the first number of dots or the second number of dots changes between time generated codes of the series of time generated codes (Moore, "A QR code generator application (app) 616 generates a time-limited QR identifier code based on previously-stored user specific key(s) stored in a database 618 of the smartphone and based on the current date/time as tracked by a date/time tracker 620.", pg. 4, paragraph 0042, lines 15-19. The QR code is updated for each new time window. This would include a change in the configuration of the dots which make up the code.).

Regarding claim 13, Moore in view of Annamalai teaches the user device of claim 8, wherein: the series of time generated codes are received by an application implemented by the processor; and the application isolates the series of time generated codes from other applications implemented by the processor (Moore, "A QR code generator application (app) 616 generates a time-limited QR identifier code based on previously-stored user specific key(s) stored in a database 618 of the smartphone and based on the current date/time as tracked by a date/time tracker 620.", pg. 4, paragraph 0042, lines 15-19. An application generates and supplies QR codes for the users. This application represents a dedicated program for QR code generation, separate/isolated from other applications of the system.).

Regarding claim 14, Moore in view of Annamalai teaches the user device of claim 8, wherein the biometric comprises an image of at least a face of a person (Moore, "An image capture system 424 of the authentication system 404 concurrently captures an image of the QR code presented on the smartphone and the face of the user (or other presented biometric indicia).", pg. 3, paragraph 0038, lines 1-4).

Claim 15 corresponds to claim 1, with the addition of a non-transitory storage medium that stores instructions and a processor that executes the instructions to perform the functions according to claim 1. Moore in view of Annamalai teaches the addition of a non-transitory storage medium that stores instructions and a processor that executes the instructions (Moore, "The software may reside on machine-readable medium 806. The machine-readable medium 806 may be a non-transitory machine-readable medium.", pg. 5, paragraph 0049, lines 1-3) to perform the functions according to claim 1. As indicated in the analysis of claim 1, Moore in view of Annamalai teaches all the limitations according to claim 1. Therefore, claim 15 is rejected for the same reasons of obviousness as claim 1.
Regarding claim 18, Moore in view of Annamalai teaches the server of claim 15, wherein the processor compares the biometric in the first image to the biometric in the second image prior to determining that the first image and the second image were live captured (Annamalai, "the combined QR code-base image face (e.g., FIG. 1C) can also be used by face recognition programs.", pg. 2, paragraph 0022. Note the combination of Moore in view of Annamalai is motivated by further validating that the QR code corresponds to an intended user (see analysis of claim 1). The combination would use facial recognition to compare a live-captured facial image of the user with the provided combined QR code facial image to further validate the user.).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Moore et al. (US 20170264608 A1) in view of Annamalai et al. (US 20160078335 A1) and further in view of Gopalakrishna et al. (US 20160098698 A1) (hereinafter Gopalakrishna).

Regarding claim 5, Moore in view of Annamalai teaches the system of claim 3. Moore in view of Annamalai does not teach wherein the color is specified as an RGB color code. However, Gopalakrishna teaches wherein the color is specified as an RGB color code (Gopalakrishna, "In some cases, the image that encodes checkout information (or other sensitive information) may comprise a color image, a black and white image, an image including text, a color QR code, a black and white QR code, a two dimensional color bar code, or a one-dimensional color bar code that includes lines of various colors. In one example, a color image or a color QR code may use eight different colors, wherein each color of the eight different colors may represent three bits of data.", pg. 2, paragraph 0023, lines 1-9).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the QR codes of Moore in view of Annamalai to include color QR codes as taught by Gopalakrishna (Gopalakrishna, pg. 2, paragraph 0023, lines 1-9). The motivation for doing so would have been to increase the number of possible QR code combinations, thereby reducing vulnerability to spoofing or hacking. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Moore in view of Annamalai with Gopalakrishna to obtain the invention as specified in claim 5.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Moore et al. (US 20170264608 A1) in view of Annamalai et al. (US 20160078335 A1) and further in view of Wu (US 20200387700 A1).

Regarding claim 7, Moore in view of Annamalai teaches the system of claim 1. Moore in view of Annamalai does not teach wherein the time period is less than 100 microseconds. However, Wu teaches wherein the time period is less than 100 microseconds (Wu, "System 100 can include techniques that use an image pattern tracking (or frequency) algorithm for time-resolved measurements of mini- and/or micro-scale attributes of complex object features. For example, this algorithm can work in conjunction with a digital imaging system (e.g., a high-speed imaging system) of client device 102. The imaging system can generate image data 112 by capturing or recording thousands of successive image frames in a short time period (e.g., in seconds, milliseconds, microseconds, or less). In some implementations, pixel-level data for various image patterns of an observed object can be tracked among successively recorded image frames.", pg. 5, paragraph 0052, lines 1-12).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Moore in view of Annamalai to instead capture QR code and biometric images within a time period of less than 100 microseconds from each other as taught by Wu (Wu, pg. 5, paragraph 0052, lines 1-12). The motivation for doing so would have been to capture images separately but in close temporal proximity, thereby reducing the computational requirement of the image sensor. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Moore in view of Annamalai with Wu to obtain the invention as specified in claim 7.

Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Moore et al. (US 20170264608 A1) in view of Annamalai et al. (US 20160078335 A1) and further in view of Boic et al. (US 20220309837 A1) (hereinafter Boic).

Regarding claim 16, Moore in view of Annamalai teaches the server of claim 15. Moore in view of Annamalai does not teach wherein the processor verifies that the first image and the second image are unlikely to have been modified, tampered with, or physically faked prior to determining that the first image and the second image were live captured. However, Boic teaches wherein the processor verifies that the first image and the second image are unlikely to have been modified, tampered with, or physically faked prior to determining that the first image and the second image were live captured (Boic, "In some embodiments, a neural network is trained to detect differences between real or live faces from fake faces displayed or presented via an image projection device, which can be used to determine the broad-face-context probability. The neural network may also be trained to detect live faces from fake faces in printed materials that include visible edges or frames (e.g., a picture frame). In at least one embodiment, the result or output of the neural network is a probability that the face is that of a live person.", pg. 3, paragraph 0036, lines 1-9).

Moore in view of Annamalai teaches capturing a QR code image and a biometric image of a user for user authentication (Moore, pg. 4, paragraph 0043). Moore in view of Annamalai does not teach verifying that the images have not been modified, tampered with, or faked. Boic teaches a neural network trained to verify whether images have been modified, tampered with, or faked (see above). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the image capturing of Moore in view of Annamalai to include a step for verifying images as taught by Boic (Boic, pg. 3, paragraph 0036, lines 1-9). The motivation for doing so would have been to detect and filter out fake images prior to performing user authentication, thereby reducing the risk of spoofing (as suggested by Boic, "By more accurately determining the liveness of the subject's face, additional processing or user authentication can be performed if the face is determined to be that of a live person. In this way, the risks of a person trying to spoof or fake the face of an authorized user is reduced.", pg. 1, paragraph 0003, lines 5-10). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Moore in view of Annamalai with Boic to obtain the invention as specified in claim 16.
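The ordering claim 16 requires — an integrity/liveness check that runs before the server treats the images as live captured — reduces to a simple gate over the kind of probability output Boic's trained network produces. The sketch below is purely illustrative; the function name, the two probability inputs, and the threshold values are hypothetical, not taken from any of the cited references:

```python
def live_capture_decision(window_match, tamper_prob, live_prob,
                          tamper_threshold=0.2, live_threshold=0.8):
    """Gate the live-capture determination on image-integrity checks.
    tamper_prob / live_prob stand in for outputs of a Boic-style trained
    network; the threshold values are illustrative only."""
    if tamper_prob > tamper_threshold:   # image likely modified or faked
        return False
    if live_prob < live_threshold:       # liveness model not confident
        return False
    return window_match                  # watermark matched its time window
```

The point of the ordering is that a spoofed or tampered image is rejected before the time-window determination is ever reached, matching the "prior to determining" language of the claim.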
Regarding claim 17, Moore in view of Annamalai and further in view of Boic teaches the server of claim 16, wherein the processor verifies that the first image and the second image are unlikely to have been modified, tampered with, or physically faked using machine learning pixel detection (Boic, "In at least one embodiment, the result or output of the neural network is a probability that the face image within the bounding box is that of a live person.", pg. 3, paragraph 0032, lines 13-15).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Moore et al. (US 20170264608 A1) in view of Annamalai et al. (US 20160078335 A1) and further in view of Geosimonian (US 7308581 B1).

Regarding claim 19, Moore in view of Annamalai teaches the server of claim 15. Moore in view of Annamalai does not teach wherein the processor uses the second image for biometric identification upon determining that the first image and the second image were live captured. However, Geosimonian teaches wherein the processor uses the second image for biometric identification upon determining that the first image and the second image were live captured (Geosimonian, "A first set of biometric data representative of one or more physical characteristics of the user is obtained using the user's computer at 320. For example, the first set of biometric data could represent an image of the user's face… The next step 340 is to grant access to the study course materials. Thus, a next web page containing course materials will be served to the user's computer. Next, at 345, while the user is accessing study course materials, the biometric reader is used to obtain a second set of biometric data from the user. The second set can be stored in the memory of the user's computer, and it can also be stored in the memory of the central server, if desired. A biometric identification program then compares the first set of biometric data with the second set of biometric data at 347. If there is an identification match at 350, a 'yes' outcome at 350, and the user wants further access 355 to course materials, a 'yes' outcome at 355, then further access is granted at 340.", column 11, lines 5-48. Initial biometric data (e.g., face images) is collected and used as a reference to enable access to the system. Once access is provided, additional biometric data is collected and compared to the initial biometric to maintain access for the user or to deny access.).

Moore in view of Annamalai teaches capturing a biometric image of a user for biometric identification and determining, based on the biometric image, whether the images are live captured to authenticate the user and grant access to a system (Moore, pg. 4, paragraph 0043; see analysis of claim 1). Moore in view of Annamalai does not teach continuing biometric identification after access has been granted. Geosimonian teaches continuous biometric identification, in which biometric data captured during an initial authentication is compared with newly captured biometric data to maintain authentication over time (see above). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the system of Moore in view of Annamalai to reuse the biometric image captured during an initial authentication as reference biometric data for continuous biometric identification as taught by Geosimonian (Geosimonian, column 11, lines 5-48). The motivation for doing so would have been to provide additional security after access is granted, thereby reducing the risk of unauthorized user replacements after initial authentication. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine the teachings of Moore in view of Annamalai with Geosimonian to obtain the invention as specified in claim 19.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Moore et al. (US 20170264608 A1) in view of Annamalai et al. (US 20160078335 A1) and further in view of Tussy (US 20180181737 A1).

Regarding claim 20, Moore in view of Annamalai teaches the server of claim 15. Moore in view of Annamalai does not teach wherein the processor uses the second image for biometric identification system enrollment upon determining that the first image and the second image were live captured.

However, Tussy teaches wherein the processor uses the second image for biometric identification system enrollment upon determining that the first image and the second image were live captured (Tussy, "Accordingly, in one embodiment the system performs a biometrics "handoff" to update the enrollment information with a new facial recognition algorithm based on an application or software update. For example, when the software or application is updated with a new facial recognition algorithm, the application retains the prior facial recognition algorithm. During the next login attempt the images captured are used to authenticate the user along with any and all liveness checks using the older facial recognition algorithm. If the person is authenticated, the images are then authorized to be used by the new facial recognition algorithm to generate new enrollment information with the new biometrics algorithm. The new enrollment biometric information is considered trustworthy because it is based on a successful login attempt using the prior biometrics algorithm. This process may be done a certain number of times (login with old algorithm creating enrollment information with new algorithm) until a sufficient biometric profile on the new facial recognition algorithm is created.", pg. 23, paragraph 0232; biometric images that are successfully authenticated are used to continuously update biometric enrollment information).

Moore in view of Annamalai teaches capturing a biometric image of a user for biometric identification and determining, based on the biometric image, whether the image is live captured, to authenticate the user and grant access to a system (Moore, pg. 4, paragraph 0043; see analysis of claim 1). Moore in view of Annamalai does not teach using biometric images for enrollment after access has been granted. Tussy teaches updating biometric enrollment information using authenticated biometric images obtained from authenticated users (see above).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the system of Moore in view of Annamalai to reuse the biometric image after authentication to update biometric enrollment information as taught by Tussy (Tussy, pg. 23, paragraph 0232). The motivation for doing so would have been to account for changes in a user's biometrics over time, thereby improving the accuracy of subsequent biometric authentications. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Moore in view of Annamalai with Tussy to obtain the invention as specified in claim 20.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CONNOR LEVI HANSEN, whose telephone number is (703) 756-5533. The examiner can normally be reached Monday-Friday, 9:00-5:00 (ET).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CONNOR L HANSEN/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672

1 "The data in a QR code is a series of dots in a square grid. Each dot represents a one and each blank a zero in binary code, and the patterns encode sets of numbers, letters or both, including URLs.", Ruoti, Scott, "How QR codes work and what makes them dangerous – a computer scientist explains", The Conversation, 22 Sept. 2025, https://theconversation.com/how-qr-codes-work-and-what-makes-them-dangerous-a-computer-scientist-explains-177217
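The Tussy "handoff" quoted in the claim 20 rejection is, at bottom, a small algorithm: authenticate with the old recognizer (including liveness checks), and only on success reuse the same captured images to build enrollment data for the new recognizer, repeating until enough new-algorithm samples exist. A minimal sketch of that flow, using hypothetical `StubRecognizer` and `Profile` types and a `handoff_login` helper (all names are illustrative, not from Tussy or any cited reference):

```python
from dataclasses import dataclass, field

class StubRecognizer:
    """Stand-in for a facial-recognition algorithm (illustrative only)."""
    def __init__(self, name):
        self.name = name
    def match(self, images, template):
        # A real system would compare feature vectors; here we use
        # trivial membership so the sketch is runnable end to end.
        return template is not None and all(img in template for img in images)
    def enroll(self, images):
        # Return opaque enrollment data derived from the images.
        return (self.name, tuple(images))

@dataclass
class Profile:
    old_template: tuple
    new_samples: list = field(default_factory=list)
    handoff_complete: bool = False

def handoff_login(images, old_algo, new_algo, profile,
                  liveness_check, required_logins=3):
    """One login attempt during an algorithm 'handoff':
    authenticate against the OLD algorithm's template, and only after a
    successful, live-captured login trust the same images as enrollment
    data for the NEW algorithm."""
    if not liveness_check(images):
        return False          # non-live capture: reject outright
    if not old_algo.match(images, profile.old_template):
        return False          # failed login: never update enrollment
    profile.new_samples.append(new_algo.enroll(images))
    if len(profile.new_samples) >= required_logins:
        profile.handoff_complete = True
    return True
```

After `required_logins` successful live logins the profile carries enough new-algorithm samples to retire the old template, mirroring the quoted "certain number of times ... until a sufficient biometric profile on the new facial recognition algorithm is created."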

Prosecution Timeline

Jan 19, 2024
Application Filed
Jan 27, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530785
TRACKING DEVICE, TRACKING METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Jan 20, 2026
Patent 12524984
HISTOGRAM OF GRADIENT GENERATION
2y 5m to grant Granted Jan 13, 2026
Patent 12518363
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, AND STORAGE MEDIUM WITH PIECEWISE LINEAR FUNCTION FOR TONE CONVERSION ON IMAGE
2y 5m to grant Granted Jan 06, 2026
Patent 12499648
IMAGE PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM FOR DETECTING SUBJECT IN CAPTURED IMAGE
2y 5m to grant Granted Dec 16, 2025
Patent 12482257
REDUCING ENVIRONMENTAL INTERFERENCE FROM IMAGES
2y 5m to grant Granted Nov 25, 2025
Study what changed in these applications to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+29.2%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
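The headline projections are simple ratios over the examiner's 28 resolved cases. A quick sanity check, assuming the 75% grant probability is just granted/resolved (21/28, as stated in the examiner profile) and that the Tech Center average implied by the quoted +13.0% delta is about 62% (an inference for illustration, not a figure published on this page):

```python
# Figures quoted on this page, reproduced as a sanity check.
granted, resolved = 21, 28        # "21 granted / 28 resolved"
allow_rate = granted / resolved   # the 75% career allow rate
tc_avg_estimate = 0.62            # assumed: inferred from the +13.0% delta
delta = allow_rate - tc_avg_estimate
print(f"allow rate {allow_rate:.0%}, {delta:+.1%} vs TC average")
```

The "with interview" figure is not a simple sum of these numbers; per the profile it comes from the subset of resolved cases that had an examiner interview.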
