DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is a non-final office action in response to applicant’s preliminary amendment filed on 10/17/2024.
Claims 3-5, 7-11, 13-15 are amended. Claims 16-20 are new. Claims 1-20 are pending and have been considered.
Priority
The instant application is a national stage entry under 35 U.S.C. § 371 of PCT/US2022/072896, filed on 6/13/2022.
Claim Objections
Claims 3, 5-8, 12, 15 are objected to because of the following informalities:
Claim 3 line 2, “… are on a back of a computing device …” may read “… are on a back of the computing device …”.
Claim 5 line 1, and claim 6 line 1, “The method as claimed in …” may read “The method as described in …” for consistency.
Claim 7 lines 3-4, “The two or more image capture perspectives” may read “The two or more image-capture perspectives”.
A similar correction applies to claim 8.
Claim 12 line 2, “responsive to determining that …” may read “responsive to the determining that …”.
Claim 15 line 3, “further configure …” may read “further configured …”.
Appropriate correction is suggested.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 15 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. The claim is not statutory because it is drawn, as a whole, to a signal per se. The claim does not fall within at least one of the four categories of patent-eligible subject matter because it is directed to a "computer-readable medium". Absent an explicit definition excluding carrier waves, signals, or the like, the claim is broadly interpreted to encompass a signal per se. In an effort to assist the patent community in overcoming a rejection under 35 U.S.C. § 101, the USPTO suggests the following approach. A claim drawn to such a computer-readable medium (or the like) that covers both transitory and non-transitory embodiments may be amended to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. § 101, by adding the limitation "non-transitory" to the claim. Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning of the term, which includes signals per se.
Examiner Notes
Examiner cites particular paragraphs, columns, and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 8, 11-15, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Muramatsu et al (US2020076089A1-IDS, hereinafter, “Muramatsu”), in view of Mequanint et al (US20200082062A1, hereinafter, “Mequanint”).
Regarding claim 1, Muramatsu teaches:
A method (Muramatsu, discloses apparatus and method for fingerprint authentication using combined visible/infrared light source, see [Abstract]) comprising:
capturing one or more images, the one or more images representing topographical features of an object from two or more perspectives, the one or more images captured by one or more image sensors, the one or more image sensors having two or more image-capture perspectives ([Abstract] A fingerprint authentication apparatus has a combined visible/infrared light source, which illuminates a finger placed on an optical image sensor with both infrared light and visible light. The optical image sensor … generates a fingerprint image from light scattered by the finger (i.e., topographical features of an object). The infrared sensitivity of the infrared-sensitive block of the optical image sensor is such that a clear image is obtained from a living organism, and an unclear image is obtained from a replica. If the finger is an actual living finger, the fingerprint images from both blocks are clear. Figs. 3-4 show examples of capturing one or more images. And [0093] this embodiment uses a combined visible/infrared light source 32, in which visible light is mixed with infrared light (i.e., two or more image-capture perspectives));
determining, based on the one or more images, the topographical features of the object (Refer to e.g., Fig. 8, at S2, And [0074] The image processing section 14 detects minutiae from the thus obtained image (step S2 in FIG. 8));
comparing the topographical features of the object to previously captured topographical features of a previously imaged object to provide a comparison result (e.g., Fig. 8 at S4, And [0077] In performing this fingerprint comparison, a comparison is made between image data of the input fingerprint and image data of a fingerprint priorly stored in the database 16 and a similarity therebetween is calculated from the minutiae, this similarity being expressed as a value known as the score);
determining, based on the comparison result, that the object and the previously imaged object are a same object (e.g., Fig. 8 at S5, and [0078] A judgment is made as to whether or not the score is equal to or greater than a threshold value (step S5 in FIG. 8). Examiner notes that similarity in the score suggests the same object);
and responsive to the determining that the object and the previously imaged object are the same object ([0079] If the result of this judgment is that the score is equal to or greater than the threshold value, a judgment is made that the fingerprint is that of an authorized person), [altering a permission to a resource associated with a computing device] (See Mequanint below for teaching of limitation in bracket).
While Muramatsu teaches authenticating a person using a fingerprint, it does not specifically teach altering a permission to a resource associated with a computing device. In the same field of endeavor, Mequanint teaches:
[responsive to the determining that the object and the previously imaged object are the same object] (see the teachings of Muramatsu shown above. Also see Mequanint, Fig. 1 at 106-110), altering a permission to a resource associated with a computing device (Mequanint, discloses apparatus and method for authenticating a user of a device, see [Abstract]. Refer to Fig. 1 at 110-112, and [0042] For example, at block 110, the similarity score 107 can be compared to a threshold. If the similarity score 107 is greater than the threshold, the device is unlocked at block 112 (i.e., altering a permission)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Mequanint in the fingerprint authentication for a person of Muramatsu by unlocking the device upon user authentication. This would have been obvious because the person having ordinary skill in the art would have been motivated to authenticate the user with biometric authentication to allow the user to access the electronic device (Mequanint, [Abstract], [0002]).
Regarding claim 14, claim 14 is a computing device claim that encompasses limitations similar to those of method claim 1. Therefore, claim 14 is rejected with the same rationale and motivation as applied against claim 1. In addition, Muramatsu-Mequanint teaches a computing device comprising: one or more image sensors (see e.g., Muramatsu, Fig. 1); one or more processors; and memory storing instructions (Mequanint, [0008] one or more processor and non-transitory computer-readable medium).
Regarding claim 15, claim 15 is a computer-readable medium claim that encompasses limitations similar to those of method claim 1. Therefore, claim 15 is rejected with the same rationale and motivation as applied against claim 1. In addition, Mequanint teaches a computer-readable medium (Mequanint, [0008] one or more processor and non-transitory computer-readable medium).
Regarding claim 2, Muramatsu-Mequanint combination teaches the method as described in claim 1,
Muramatsu further teaches: wherein: the object having topographical features is a fingertip of a user; and the previously imaged object is the fingertip of the user (Muramatsu teaches authenticating a person using finger with fingerprint image, see [Abstract]).
Regarding claim 8, Muramatsu-Mequanint combination teaches the method as described in claim 1,
Muramatsu further teaches: wherein the one or more images are received from an image capture device having at least two image sensors configured to provide the two or more image capture perspectives (Muramatsu, [0032] a fingerprint authentication apparatus having an imaging section which forms an image of a fingerprint to be authenticated by an optical image sensor formed by a first optical image sensor having sensitivity in the infrared region and a second optical image sensor having sensitivity in the visible light region, first and second optical image sensors being mutually neighboring).
Regarding claim 11, Muramatsu-Mequanint combination teaches the method as described in claim 1,
Muramatsu further teaches: further comprising, prior to capturing the one or more images, determining that the object having the topographical features is occluding the one or more image sensors (e.g., [0074] The image processing section 14 detects minutiae from the thus obtained image (step S2 in FIG. 8), and performs a judgment as to whether or not the number thereof is equal to or greater than a prescribed number (step S3 in FIG. 8). Note: in Muramatsu, detection of minutiae from the thus-obtained image suggests that the object having the topographical features is occluding the one or more image sensors in the setup shown in Fig. 3).
Regarding claim 12, Muramatsu-Mequanint combination teaches the method as described in claim 11,
Muramatsu further teaches: further comprising, responsive to determining that the object having topographical features is occluding the one or more image sensors and prior to or incident with capturing the one or more images, illuminating the object (e.g., Fig. 3, light source 32 illuminates the finger 30 to generate an image at optical image sensor 33).
Regarding claim 13, Muramatsu-Mequanint combination teaches the method as described in claim 12,
Muramatsu further teaches: wherein illuminating the object is performed at a direction sufficient to alter features from one of the two or more perspectives to a greater amount than another of the two or more perspectives (e.g., [0074] The image processing section 14 detects minutiae from the thus obtained image (step S2 in FIG. 8), and performs a judgment as to whether or not the number thereof is equal to or greater than a prescribed number (FIG. S3 in FIG. 8). And [0076] If, however, the minutiae count is equal to or greater than the prescribed number, the comparison section 15 performs a fingerprint comparison (step S4 in FIG. 8)).
Regarding claim 19, Muramatsu-Mequanint combination teaches the method as described in claim 8,
Muramatsu further teaches: wherein the at least two image sensors are physically separate one from another (e.g., [0032] A second aspect of the present invention is a fingerprint authentication apparatus having an imaging section which forms an image of a fingerprint to be authenticated by an optical image sensor formed by a first optical image sensor having sensitivity in the infrared region and a second optical image sensor having sensitivity in the visible light region, first and second optical image sensors being mutually neighboring (i.e., physically separate one from another)).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Muramatsu-Mequanint as applied above to claim 1, further in view of Devine et al (US20190370448A1, hereinafter, “Devine”).
Regarding claim 3, Muramatsu-Mequanint combination teaches the method as described in claim 1,
The combination of Muramatsu-Mequanint does not specifically teach the following; in the same field of endeavor, Devine teaches:
wherein the one or more image sensors are on a back of a computing device and the one or more image sensors are one or more cameras of the computing device (Devine, discloses techniques for implementation of biometric authentication, see [Abstract]. And [0087] Device 100 optionally also includes one or more optical sensors 164... In conjunction with imaging module 143 (also called a camera module), optical sensor 164 optionally captures still images or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch screen display 112 on the front of the device).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Devine in the fingerprint authentication for a person of Muramatsu-Mequanint by locating the optical sensor on the back of the device. This would have been obvious because the person having ordinary skill in the art would have been motivated to enable the touch screen display to be used as a viewfinder for still and/or video image acquisition (Devine, [Abstract], [0087]).
Claims 4, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Muramatsu-Mequanint as applied above to claim 1, further in view of Mostafa et al (US20190042835A1, hereinafter, “Mostafa”).
Regarding claim 4, Muramatsu-Mequanint combination teaches the method as described in claim 1,
The combination of Muramatsu-Mequanint does not specifically teach the following; in the same field of endeavor, Mostafa teaches:
wherein the method is performed responsive to the computing device being in an unlocked state but the resource associated with the computing device being in a locked state (Mostafa, discloses facial recognition authentication on a device having a camera operating with multiple enrollment profiles, see [Abstract]. And [0079] if matching score 260 is below unlock threshold 264 (e.g., not equal to or above the unlock threshold), then device 100 is not unlocked in 268 (e.g., the device remains locked). It should be noted that device 100 may be either locked or unlocked if matching score 260 is equal to unlock threshold 264 depending on a desired setting for the unlock threshold (e.g., tighter or looser restrictions)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Mostafa in the fingerprint authentication for a person of Muramatsu-Mequanint by applying facial recognition authentication on a device with multiple enrollment profiles. This would have been obvious because the person having ordinary skill in the art would have been motivated to apply tighter or looser restrictions on access to the device for an authorized user using thresholds based on multiple enrollment profiles (Mostafa, [Abstract], [0079]).
Regarding claim 18, Muramatsu-Mequanint-Mostafa combination teaches the method as described in claim 4,
Mequanint further teaches: wherein altering the permission to the resource unlocks the resource associated with the computing device (Refer to Fig. 1 at 110-112, and [0042] For example, at block 110, the similarity score 107 can be compared to a threshold. If the similarity score 107 is greater than the threshold, the device is unlocked at block 112 (i.e., altering the permission to the resource)). Same motivation as presented in claim 1 would apply.
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Muramatsu-Mequanint-Mostafa as applied above to claim 4, further in view of Salama et al (US20160132670A1, hereinafter, “Salama”).
Regarding claim 5, Muramatsu-Mequanint-Mostafa combination teaches the method as described in claim 4,
The combination of Muramatsu-Mequanint-Mostafa does not specifically teach the following; in the same field of endeavor, Salama teaches:
wherein the resource associated with the computing device is at least one of a financial account or other high-rights resource requiring two-factor authentication and the method further comprises: providing, through the determining that the object and the previously imaged object are the same object, one of two factors of the two-factor authentication (Salama, discloses systems and methods that facilitate two-factor authentication of a user based on a user-defined image and information identifying portions of the image sequentially selected by the user, [Abstract]. And [0045] In some aspects, client device 104 may be configured to store the authentication information in one or more data records that link user 110 with the presented digital image (e.g., as a first biometric factor in a two-factor authentication process) and the captured authentication sequence (e.g., as a second, user-defined factor in the two-factor authentication process). And [0075] In further aspects, client device may perform operations (e.g., in step 416) in response to a request received programmatically from a system associated with an e-commerce retailer, financial institution, governmental entity, or other business entity through a corresponding API. For example, a digital portal associated with an e-commerce retailer (e.g., Amazon.com™) may request, through a corresponding API, that client device 104 execute instructions that perform a two-factor authentication of user 110 prior to completion of a purchase transaction).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Salama in the fingerprint authentication for a person of Muramatsu-Mequanint-Mostafa by using a two-factor authentication process based on user-defined image data. This would have been obvious because the person having ordinary skill in the art would have been motivated to allow a user to securely perform operations consistent with the user's biometric profile data (Salama, [Abstract], [0005]).
Regarding claim 6, Muramatsu-Mequanint-Mostafa-Salama combination teaches the method as claimed in claim 5,
Salama further teaches: wherein another of the two factors of the two-factor authentication is a contemporaneous finger-print authentication performed on a different side of the computing device as a side on which the capturing one or more images is performed ([0039] In FIG. 3, client device 104 may execute software instruction that present a digital image 301 to user 110 on a corresponding touchscreen display. Furthermore, client device 104 may present a dialog box, pop-up window, or other interface element prompting user 110 to sequentially select facial and/or physical features of user 110 within the presented image. In some aspects, using a finger 302, user 110 may select a sequence 303 of facial features within the presented image, which may establish the authentication sequence corresponding to the presented image). Same motivation as presented in claim 5 would apply.
Claims 7, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Muramatsu-Mequanint as applied above to claim 1, further in view of Pan et al (US20180324359A1, hereinafter, “Pan”).
Regarding claim 7, Muramatsu-Mequanint combination teaches the method as described in claim 1,
The combination of Muramatsu-Mequanint does not specifically teach the following; in a similar field of endeavor, Pan teaches:
wherein the one or more images are received from an image capture device having one dual-pixel image sensor configured to provide the two or more image capture perspectives (Pan, discloses system and method of compensating image data for phase fluctuations using plurality of pixel positions of image sensor, see [Abstract]. And [0017] the method comprising: capturing, by a sensor of an imaging system, first image data and second image data (i.e., two or more image capture perspectives) for each of a plurality of pixel positions of the sensor, the sensor capturing an object through a wave deforming medium causing a defocus disparity between the first image data and second image data. And [0018] the first image data and the second image data is captured using a dual-pixel autofocus sensor. [0019] the defocus disparity between the first image data and the second image data relates to displacement between left pixel data and right pixel data of the dual-pixel autofocus sensor).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Pan in the fingerprint authentication for a person of Muramatsu-Mequanint by using a dual-pixel autofocus sensor to capture two sets of image data. This would have been obvious because the person having ordinary skill in the art would have been motivated to compensate image data for phase fluctuations caused by a wave deforming medium using the determined defocus disparity (Pan, [Abstract]).
Regarding claim 20, Muramatsu-Mequanint combination teaches the method as described in claim 13,
The combination of Muramatsu-Mequanint does not specifically teach the following; in a similar field of endeavor, Pan teaches:
wherein the alteration of the features enables greater resolution of the topographical features of the object (Pan, discloses system and method of compensating image data for phase fluctuations using plurality of pixel positions of image sensor, see [Abstract]. And [0148] The phrase disparity can also be referred to as a ‘warp map’. A warp map is a list of left and right shift amounts in units of pixels, including a sign indicating a direction of shift, associated with each pixel in the standard sensor image, whether using a DAF sensor or a stereo camera to capture images. The warp map can be the same resolution as the standard sensor image. Alternatively, the warp map may be created at a lower resolution than the standard sensor image to improve overall speed of processing or reduce noise, in which case each pixel in the lower resolution warp map is associated with multiple pixels in the standard sensor image and the shift amounts in pixels need to be scaled appropriately for the change in resolution).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Pan in the fingerprint authentication for a person of Muramatsu-Mequanint by using a dual-pixel autofocus sensor to capture two sets of image data. This would have been obvious because the person having ordinary skill in the art would have been motivated to compensate image data for phase fluctuations caused by a wave deforming medium using the determined defocus disparity (Pan, [Abstract]).
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Muramatsu-Mequanint as applied above to claim 1, further in view of Streit (US20220147602A1, hereinafter, “Streit”).
Regarding claim 9, Muramatsu-Mequanint combination teaches the method as described in claim 1,
The combination of Muramatsu-Mequanint does not specifically teach the following; in the same field of endeavor, Streit teaches:
wherein determining the topographical features of the object further comprises: generating an embedding, the embedding being a numerical representation of the topographical features of the object (Streit, discloses system and method for implementing private identity based on biometric and/or behavior information, see [Abstract]/[Title]. And [0179] In further embodiments, helper networks can be implemented in an identification and/or authentication systems and operate as a gateway for embedding neural networks (e.g., networks that create encrypted feature vectors) that extract encrypted features from authentication information and/or as a gateway for prediction models that predict matches between input and enrolled authentication information… where physical biometric input (e.g., face, iris, etc.) can be processed by another first embedding network trained on the different authentication modality. In some embodiments, first is used to delineate network function—create encrypted feature vectors or embeddings, first network, and classify encrypted feature vectors, second network).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Streit in the fingerprint authentication for a person of Muramatsu-Mequanint by embedding the physical biometric input using embedding neural networks. This would have been obvious because the person having ordinary skill in the art would have been motivated to use a fully encrypted private identity based on biometric information to securely and efficiently identify any user (Streit, [Abstract]).
Regarding claim 10, Muramatsu-Mequanint-Streit combination teaches the method as described in claim 9,
Streit further teaches: wherein comparing the topographical features of the object to previously captured topographical features of the previously imaged object to provide a comparison result further comprises: comparing the embedding to a previously generated embedding, the previously generated embedding being a numerical representation of the previously captured topographical features of the previously imaged object (Streit, e.g., [0099] Shown in FIG. 4B, is the example process 450. Process 450 begins at 452 with an attempt to match generated embeddings to an embedding stored by the system. At 454 a local geometric evaluation is executed to determine if a currently created embedding matches any enrolled embedding). Same motivation as presented in claim 9 would apply.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Muramatsu-Mequanint-Devine as applied above to claim 3, further in view of Pan et al (US20180324359A1, hereinafter, “Pan”).
Regarding claim 16, Muramatsu-Mequanint-Devine combination teaches the method as described in claim 3,
The combination of Muramatsu-Mequanint-Devine does not specifically teach the following; in a similar field of endeavor, Pan teaches:
wherein the one or more cameras of the computing device are repurposed to image the object out of focus (Pan, discloses system and method of compensating image data for phase fluctuations using plurality of pixel positions of image sensor, see [Abstract] At least one embodiment of the method comprises capturing, by a sensor of an imaging system, first image data and second image data for each of a plurality of pixel positions of the sensor, the sensor capturing an object through a wave deforming medium causing a defocus disparity between the first image data and second image data; and determining the defocus disparity between the first image data and the second image data, the defocus disparity corresponding to a defocus wavefront deviation of the wave deforming medium).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Pan in the fingerprint authentication for a person of Muramatsu-Mequanint-Devine by using a dual-pixel autofocus sensor to capture two sets of image data. This would have been obvious because the person having ordinary skill in the art would have been motivated to compensate image data for phase fluctuations caused by a wave deforming medium using the determined defocus disparity (Pan, [Abstract]).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Muramatsu-Mequanint-Devine as applied above to claim 3, further in view of Figueredo de Santana et al (US20200272717A1, hereinafter, “Figueredo de Santana”).
Regarding claim 17, Muramatsu-Mequanint-Devine combination teaches the method as described in claim 3,
further comprising: prior to capturing the one or more images, displaying a prompt to place a finger on a camera lens cover, the camera lens cover covering a front of the one or more image sensors (Figueredo de Santana, discloses access control using multi authentication factors based on heartbeat measured from finger, see [Abstract]. And [0027] the access control program may compute a first heartbeat signal from one or more facial data received by the front camera component as part of performing the facial recognition of the user. In at least one embodiment, the access control program may prompt the user to position a finger over a flash element of the rear camera component. The access control program may compute a second heartbeat signal from one or more finger data received by the rear camera component).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Figueredo de Santana in the fingerprint authentication for a person of Muramatsu-Mequanint-Devine by having the access control program instruct the user to position a finger over the rear camera component. This would have been obvious because the person having ordinary skill in the art would have been motivated to measure a heartbeat signal from finger data received by the rear camera component for access control (Figueredo de Santana, [Abstract], [0027]).
Citation of References
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are cited but not relied upon in this Office action:
Hoyos (US20200293641A1) discloses method for authenticating hand biometrics that begins with a biometric security system receiving a palm-up digital image of a user's hand. The palm section and a fingertip aggregate section can be identified.
Haller et al (US20230379564A1) discloses methods and apparatus for biometric authentication in which two or more biometric features or aspects are captured and analyzed individually or in combination to identify and authenticate a person.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL M LEE whose telephone number is (571)272-1975. The examiner can normally be reached on M-F: 8:30AM - 5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shewaye Gelagay can be reached on (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL M LEE/Primary Examiner, Art Unit 2436