DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In an RCE filed on 11/21/2025 and amendments filed on 10/27/2025, applicant(s) amended claims 1, 12, and 13. Claims 1 – 20 remain pending in this application.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/21/2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1 - 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 09/09/2025, 10/01/2025, 11/21/2025, 12/11/2025, and 12/31/2025 were filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statements are being considered by the examiner.
Applicants have not provided an explanation of the relevance of the cited documents discussed below.
Cohen et al. (U.S PreGrant Publication No. 2022/0253135 A1) provides a display system that can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system, possibly comprising a plurality of cameras, that images the user's eye and glints thereon, and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of rotation of the user's eye using cornea data derived from the glint images. The display system may render virtual image content with a render camera positioned at the determined position of the center of rotation of said eye.
Agrawal et al. (U.S PreGrant Publication No. 2018/0365490 A1) teaches that a first image set that includes a plurality of 2D images of an eye of a user is collected. Two or more sets of corresponding pixels are generated from the plurality of 2D images. One or more 3D features of an iris of the user are extracted based on repeatable differences in reflectance among each set of corresponding pixels. A test set of 3D features for a submitted eye is generated based on a second image set, the second image set including a plurality of 2D images of the submitted eye. Based on a comparison of the one or more extracted 3D features and the test set of 3D features, an indication is made as to whether the second image set is representative of the user's eye.
Kaehler (U.S PreGrant Publication No. 2017/0206412 A1) teaches a head-mounted display system that comprises an image capture device configured to capture multiple eye images of an eye. A processor is programmed to receive an eye image from among the multiple eye images, where the eye image is received from the image capture device. An image quality metric associated with the eye image is determined, and the determined image quality metric is compared with an image quality threshold to determine whether the eye image passes the image quality threshold. The image quality threshold corresponds to an image quality level for generating an iris code.
Castañeda et al. (U.S PreGrant Publication No. 2021/0365535 A1) teaches a system comprises an eyewear device that includes a frame, a temple connected to a lateral side of the frame, an infrared emitter, and an infrared camera. The infrared emitter and the infrared camera are connected to the frame or the temple to emit a pattern of infrared light. The system includes a processor coupled to the eyewear device, a memory accessible to the processor, and programming in the memory. Execution of the programming by the processor configures the system to perform functions, including functions to emit, via the infrared emitter, a pattern of infrared light on an eye of a user of the eyewear device; capture, via the camera, reflection variations in the pattern of infrared light on the eye of the user; and identify a user of the eyewear device based on the reflection variations of the emitted pattern of infrared light on the eye of the user.
Liu et al. (U.S PreGrant Publication No. 2020/0012105 A1) teaches an eye tracking device and a head-mounted display device. The eye tracking device includes at least one infrared camera, at least one infrared light group, and a control circuit. The control circuit is electrically connected with the at least one infrared light group. Each of the at least one infrared light group includes at least two infrared lights arranged at different locations. Each of the at least one infrared camera is configured to collect an eye image of a user when the at least two infrared lights are turned on. The control circuit is configured to respectively control the number of infrared lights achieving effective operating brightness in each infrared light group when the eye tracking device performs iris recognition and eye tracking, where the effective operating brightness refers to a brightness that is not less than a threshold brightness.
Krichen et al. (U.S PreGrant Publication No. 2019/0347483 A1) teaches a method for verifying the authenticity of the iris of an eye of a biometric recognition candidate, comprising the step of verifying the flatness of the iris from two images of the iris in different orientations with respect to the image sensor.
Smith (U.S PreGrant Publication No. 2019/0387168 A1) provides a head-mounted display system that can process images by assessing relative motion between the head-mounted display and one or more features in a user's environment. The assessment of relative motion can include determining whether the head-mounted display has moved, is moving, and/or is expected to move with respect to one or more features in the environment. Additionally, or alternatively, the assessment can include determining whether one or more features in the environment have moved, are moving, and/or are expected to move relative to the head-mounted display. The image processing can further include determining one or more virtual image content locations in the environment that correspond to a location where renderable virtual image content appears to a user when the location appears in the display, and comparing the one or more virtual image content locations in the environment with a viewing zone.
Yin (U.S PreGrant Publication No. 2018/0150690 A1) teaches a virtual reality (VR) device that includes a housing that has two openings. Each of the two openings hosts a camera lens and a nose groove. The VR device also includes one or more cameras distributed around each of the camera lenses for capturing one or more eye physiological characteristics of a VR device user.
Facense Ltd. (U.S PreGrant Publication No. 2021/0318558 A1) teaches a novel design for untethered smartglasses with wireless connectivity in which electronic components and electric wiring are mounted in a manner that enables at least a portion of the temples of the smartglasses to be bent around the ear to improve the smartglasses' fit. In one embodiment, the smartglasses include a front element that supports lenses and two temples coupled to the front element through hinges that enable folding and unfolding. At least one of the temples includes: a first portion, coupled to the front element, with first electronic components; a second portion, coupled to the first portion, with electric wires; and a third portion, coupled to the second portion, with second electronic components. The second portion is designed to be bent around a human ear to improve the smartglasses' fit, and the first and third portions are not designed to be bent to improve the smartglasses' fit.
Tzivieli et al. (U.S PreGrant Publication No. 2016/0360970 A1) provide wearable devices for taking symmetric thermal measurements. One device includes first and second thermal cameras physically coupled to a frame worn on a user's head. The first thermal camera takes thermal measurements of a first region of interest (ROI) that covers at least a portion of the right side of the user's forehead. The second thermal camera takes thermal measurements of a second ROI that covers at least a portion of the left side of the user's forehead. The first and second thermal cameras are not in physical contact with their corresponding ROIs and, as a result of being coupled to the frame, remain pointed at their corresponding ROIs when the user's head makes angular movements.
Bradski et al. (U.S PreGrant Publication No. 2019/0094981 A1) disclose configurations for presenting virtual reality and augmented reality experiences to users. The system may comprise an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the set of images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform a normalization on the set of map points.
Nakaigawa et al. (U.S Patent No. 7068820 B2) teaches an iris image pickup apparatus that includes a recess sinking in the direction of a sight line of a person to be authenticated, the recess provided on an outer panel covering the image pickup unit for picking up the iris image of a person; indicators arranged at the bottom of the recess; external indicators arranged on the periphery of the recess; and a controller for performing lighting control of the indicators as well as keeping the external indicators off during a period overlapping with the image pickup timing of the image pickup unit. Since the external indicators are kept off while the image is being picked up, the sight line of the person to be authenticated does not move to the external indicators, thus picking up an iris image without blur.
Publicover (U.S Patent No. 10156900 B2) provides apparatus, systems, and methods for substantially continuous biometric identification (CBID) of an individual using eye signals in real time. The apparatus is included within a wearable computing device with identification of the device wearer based on iris recognition within one or more cameras directed at one or both eyes, and/or other physiological, anatomical and/or behavioral measures. Verification of device user identity can be used to enable or disable the display of secure information. Identity verification can also be included within information that is transmitted from the device in order to determine appropriate security measures by remote processing units. The apparatus may be incorporated within wearable computing that performs other functions including vision correction, head-mounted display, viewing the surrounding environment using scene camera(s), recording audio data via a microphone, and/or other sensing equipment.
Derakhshani et al. (U.S PreGrant Publication No. 2014/0044321 A1, U.S PreGrant Publication No. 2016/0132735 A1 & U.S PreGrant Publication No. 2014/0044318 A1) describe technologies relating to biometric authentication based on images of the eye. In general, one aspect of the subject matter described in this specification can be embodied in methods that include obtaining images of a subject including a view of an eye. The methods may further include determining a behavioral metric based on detected movement of the eye as the eye appears in a plurality of the images, determining a spatial metric based on a distance from a sensor to a landmark that appears in a plurality of the images each having a different respective focus distance, and determining a reflectance metric based on detected changes in surface glare or specular reflection patterns on a surface of the eye. The methods may further include determining a score based on the behavioral, spatial, and reflectance metrics and rejecting or accepting the one or more images based on the score.
Wisniewski (U.S PreGrant Publication No. 2004/0151347 A1) teaches a method and system for automated face recognition that overcomes operational difficulties such as face-finding problems, inability to recognize some ethnic groups, inability to enroll users, pose, and being too slow for high-volume usage. The automated face recognition system uses a new face-finding and automated eye-finding approach combined with an intelligent metric. A small template size of the face biometric (biomatrix), of under 90 bytes, is provided, which allows templates to be placed on any size chip or into two-dimensional (2-D) barcodes for self-authenticating documents, as well as for quick, easy transmission over the internet, wireless devices, or Ethernet (i.e., LAN, WAN, etc.). The small template also provides quick identification and authentication speed, as well as minimal storage requirements, small processing requirements, and increased processing speed. The system can be included in dolls, games, auto theft deterrent systems, and drowsiness detection systems.
Schultz et al. (U.S PreGrant Publication No. 2013/0227651 A1) describe an approach for enabling multi-factor biometric authentication of a user of a mobile device. A biometric authenticator captures, via a mobile device, first and second biometric data for a user. The biometric authenticator further associates the first biometric data and the second biometric data. The biometric authenticator then initiates a multi-factor authentication procedure that utilizes the first biometric data and the second biometric data to authenticate the user based on the association.
Prokoski (U.S PreGrant Publication No. 2002/0136435 A1) teaches a biometric identification system directed toward use of dual-band visual-infrared imaging with appropriate techniques for integrating the analysis of both images to distinguish less reliable from more reliable image components, so as to generate a composite image comprised of layers. Correlation and analysis of the composite layers enables improved reliability in identification. The method and apparatus of the invention provide for efficient and optimized use of dual-band imaging for biometric identification of faces, fingerprints, palm and hand prints, sweat pore patterns, wrist veins, and other anatomical features of humans and animals.
Dulle et al. (U.S PreGrant Publication No. 2020/0127267 A1) is related to a battery module for use in a vehicle. The battery module may include a housing, a plurality of battery cells disposed within the housing, and solid-state pre-charge control circuitry that pre-charges a direct current (DC) bus that may be coupled between the battery module and an electronic component of the vehicle. Furthermore, the solid-state pre-charge control circuitry may include solid-state electronic components as well as passive electronic components.
Ogawa (U.S PreGrant Publication No. 2008/0037841 A1) teaches an image-capturing apparatus for capturing an image by using a solid-state image-capturing device may include a face detector configured to detect a face of a human being on the basis of an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is close to a specific expression in relation to expressions other than the specific expression; and a notification section configured to notify notification information corresponding to the computed expression evaluation value to an image-captured person.
Magic Leap (KR 102483345 B1) describes systems and methods for eye image set selection, eye image collection, and eye image combination. Embodiments of the systems and methods for eye image collection can include displaying a graphic along a path connecting a plurality of eye pose regions. Eye images at a plurality of locations along the path can be obtained, and an iris code can be generated based at least partly on at least some of the obtained eye images.
EyeFluence (KR 20200127267 A) provides apparatus, systems, and methods for secure mobile communications (SMC) by an individual using biometric signals and identification in real time. The apparatus includes a wearable computing device where identification of the user is based on iris recognition, and/or other physiological and anatomical measures. Biometric identity measures can be combined with other security-based information such as passwords, date/time stamps, and device identification. Identity verification can be embedded within information that is transmitted from the device and/or to determine appropriate security measures. SMC addresses security issues associated with the transmission of eye-signal control and biometric identification data using secure interfaces with network devices within a system of systems (SoS) software architecture.
Sengelaub et al. (U.S PreGrant Publication No. 2018/0120932 A1) is related to an eye tracking device comprising a processing unit and an optical system, which comprises a capturing unit. The optical system provides a first optical path with a first imaging property and a second optical path with a second imaging property. The capturing unit captures a first image by capturing light that has passed along the first optical path and a second image by capturing light that has passed along the second optical path, so that, due to the difference between the first and second imaging properties, the first and second images comprise a difference related to a characteristic of at least part of the first and second images. The eye tracking device is configured to determine a property of the eye on the basis of at least one of the first image and the second image.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 - 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by He et al. (U.S PreGrant Publication No. 2020/0218887 A1, hereinafter ‘He’).
With respect to claim 1, He teaches a system (i.e., a system, ¶0006), comprising: at least one camera (e.g., a camera, ¶0053) configured to capture one or more images of an eye region of a user (e.g., configured to capture one or more images of an eye region of a user, ¶0053, Fig. 4, Figs. 7A/7B); a controller (e.g., a device control module, ¶0006) comprising one or more processors (e.g., at least a processor, ¶0045) configured to: process the one or more images of the eye region captured by the camera to extract two or more biometric aspects of the eye region (e.g., based on the captured images, extract features of said eye region, ¶0006, ¶0045, Figs. 7A/7B); select, from among the two or more biometric aspects extracted from the one or more captured images, one or more of the extracted biometric aspects for biometric authentication of the user (e.g., based on the extracted features, selecting the two or more different optical illumination wavelengths of a probe light so that optical reflections from the pupils of the person's eyes are measurably different in signal strength, allowing the measured difference in the detected optical reflections from the pupils of the person's eyes to be used as an indicator of whether the detected face is from a live person. In other implementations, the above method may include selecting the two or more different optical illumination wavelengths of the probe light to cause different levels of optical absorption by the facial skin of a live person so that beam spot sizes of the probe light that penetrates into the facial skin and is scattered by the facial skin at the two or more different optical illumination wavelengths are measurably different; operating the probe detection module to capture images of the beam spots at the optical sensor array; processing the captured images of the beam spots at the two or more different optical illumination wavelengths to measure a difference in the beam spots; and using the measured difference to determine whether the detected face is from a live person as an additional part of facial recognition, ¶0008, ¶0050); and perform biometric authentication for the user based at least in part on the selected one or more extracted biometric aspects of the eye region (e.g., after selecting, perform authentication for the user, ¶0003, ¶0044 - ¶0047, ¶0050, ¶0063 - ¶0065).
With respect to claim 2, He teaches the system as recited in claim 1, wherein the eye region includes one or more of an iris, an eye, a periorbital region, and a portion of the user's face (e.g., Fig. 7).
With respect to claim 3, He teaches the system as recited in claim 1, wherein, to select one or more of the biometric aspects for biometric authentication of the user, the controller is configured to apply objective criteria to the extracted biometric aspects to determine whether the biometric aspects meet thresholds of quality for the biometric authentication (e.g., determining whether a face of the user is live and whether the user is facing a corresponding direction, ¶0006, ¶0011 with ¶0059 and ¶0064).
With respect to claim 4, He teaches the system as recited in claim 3, wherein the objective criteria include one or more of exposure, contrast, shadows, edges, undesirable streaks, occluding objects, sharpness, uniformity of illumination, and absence of undesired reflections (e.g., one of the situations is the usage of illumination or reflections, ¶0006 - ¶0011, ¶0046).
With respect to claim 5, He teaches the system as recited in claim 1, wherein the biometric aspects include one or more of an eye surface, eye veins, eyelids, eyebrows, skin features, nose features, and iris features (e.g., pupils, facial skins, etc., ¶0008).
With respect to claim 6, He teaches the system as recited in claim 5, wherein the iris features include one or more of colors, patterns, and musculature (e.g., the features include the measured difference from the pupils of the person's eyes as an indicator of whether the detected face is from a live person, ¶0006 - ¶0008; it is well known in the art to detect whether the user is live, and the size/shape/diameter of the pupil varies, the pupil being the central region of the iris).
With respect to claim 7, He teaches the system as recited in claim 5, wherein the biometric aspects further include one or more of feature sizes and geometric relationships of two or more features (e.g., Fig. 7 – both eyes should be positionally linear symmetric, ¶0046, ¶0064 - ¶0065, Fig. 7A).
With respect to claim 8, He teaches the system as recited in claim 1, wherein the controller is further configured to perform anti-spoofing based at least in part on the selected one or more biometric aspects (e.g., anti-spoofing is performed to determine liveness and authenticate the user, abstract, ¶0005, ¶0016 with ¶0044).
With respect to claim 9, He teaches the system as recited in claim 1, further comprising an illumination source comprising a plurality of light-emitting elements configured to emit light towards the eye region to be imaged by the camera (e.g., light source comprising at least two light emitters (37a & 39a) configured to emit light toward the eyes to be imaged by the camera, ¶0018 - ¶0039, Fig. 3).
With respect to claim 10, He teaches the system as recited in claim 9, wherein the light-emitting elements are light-emitting diodes (LEDs) (e.g., said light emitters are arranged with infrared (IR) sensing diodes, ¶0051).
With respect to claim 11, He teaches the system as recited in claim 9, wherein the light-emitting elements are infrared (IR) light sources, and wherein the camera is an infrared camera (e.g., an infrared (IR) camera, ¶0051).
With respect to claim 12, He teaches the system as recited in claim 1, wherein the system is a component of a head-mounted device (HMD), a handheld device, or a wall-mounted device, and wherein the controller is further configured to process one or more other images captured by the at least one camera for gaze tracking (e.g., at least the electronic device is a hand-held device, ¶0044, emphasizing claim 15; the electronic device has the camera configured to capture image(s) of the eyes, Fig. 7).
With respect to claim 13, this is a method claim corresponding to system claim 1. Therefore, it is rejected for the same reasons as system claim 1.
With respect to claims 14 - 20, these are method claims corresponding to system claims 2 – 8, respectively. Therefore, they are rejected for the same reasons as system claims 2 – 8, respectively.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Yiqun et al. (U.S Patent No. 9,767,358 B2) teaches a device that captures an image of the iris, extracts unique features, and encodes the features as a biometric identifier indicative of the user's biometric features and/or liveness, via an anti-spoofing function, using a mobile device.
Zhou et al. (U.S Patent No. 10,354,158 B2) teaches a mobile terminal including an IR camera configured to capture image(s) of irises, extract features from the captured image(s), and determine whether a target object is a living body using multiple iris image(s).
Lee et al. (U.S PG Publication No. 2018/0032815 A1) teaches an electronic device (e.g., a head-mounted device (HMD), etc.) capable of photographing image(s) with an infrared camera, extracting features (e.g., the iris) from the photographed image(s), selecting a frame to compare with iris images, and executing anti-spoofing (e.g., determining whether a biometric eye image is included in an eye area).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUAN M GUILLERMETY whose telephone number is (571)270-3481. The examiner can normally be reached 9:00AM - 5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Q TIEU can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JUAN M GUILLERMETY/Primary Examiner, Art Unit 2682