DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/17/2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to the claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Aziz et al., “Face anti-spoofing countermeasure: Efficient 2D materials classification using polarization imaging” (Aziz), has been newly cited to teach the newly added claim amendments.
Claims 1-30 are pending; claims 1, 7, and 26 have been amended.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-3, 5-10, 12-15, 18, 19, 21, 23, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., US 2019/0102608 A1 (Wang), in view of Green et al., US 2017/0337440 A1 (Green), and further in view of Aziz et al., “Face anti-spoofing countermeasure: Efficient 2D materials classification using polarization imaging” (Aziz).
Regarding claim 1, Wang teaches a device (computing device) (Abstract and [0036]) for authenticating a user of the device (the computing device is for physical-entity recognition and authentication) ([0036]), wherein the device includes at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) and at least one neural network processor (wherein the system/device can include a deep-learning neural network; which processes the images) ([0060], [0067], [0077], and [0088]), the device comprising:
- at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) configured to provide one or more pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) and to manipulate the one or more pattern light image(s) (cropping the image so that it contains only or mostly the to-be-recognized entity) ([0060]);
- at least one neural network processor (SVM or a deep-learning neural network) ([0088]) configured to authenticate the user based at least on extracted material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) and an authentication process executed at least in part by the at least one neural network processor (after confirming that the face in the captured images is a real human face, using the deep-learning neural network, the system can authenticate the user using known face-recognition techniques) ([0088-0089]);
wherein the extraction of material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) is performed by the at least one image processor or the at least one neural network processor (which is performed by a SVM and/or deep-learning neural network) ([0060] and [0088]);
and wherein authenticating a user of the device (user authentication of the computing device) ([0036], [0039], and [0042]) comprises biometric authentication sufficient to reduce risk of spoofing (wherein the system reduces the risk of spoofing by determining if the face includes human skin or a computer screen based on different reflection properties) ([0042-0043], [0050], and [0088]), wherein the biometric authentication comprises distinguishing between human skin and a non-skin material (wherein the authentication system distinguishes between human skin and a computer screen) ([0050] and [0088]).
Although Wang does not explicitly state that the deep-learning neural network is used for authentication, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that, since the deep-learning neural network of Wang is used to determine whether there is a real human face or a potential screen (Wang; [0088]) and the authentication then takes place using known face-recognition techniques (Wang; [0089]), the deep-learning neural network is in turn part of the authentication of the user.
Wang teaches that the illumination conditions include wavelength, intensity, and direction of the light ([0050]). While capturing images of the face (which can be the face of a real human or a face displayed on a computer screen), the system can adjust the illumination conditions by shining light of a particular wavelength and intensity on the face ([0050]). Because human skin and a computer screen can have different reflection properties, their images under the different illumination conditions can appear different ([0050]). Moreover, different portions of a real human's face reflect light differently (e.g., some facial features may reflect light better than others), whereas a computer screen reflects light more uniformly ([0050]). By comparing image features of different portions of the face, the system can also determine whether the face is a real human face or a face displayed on a computer screen ([0050]). However, Wang does not explicitly state “based on an intensity distribution of the manipulated pattern light image(s)”.
Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract); and wherein the system can incorporate additional anti-spoofing features in combination with spectral methods, such as detecting a spoof iris image based on an intensity distribution of the manipulated pattern light image(s) (wherein a real iris image has a saturated specularity that displays as a flat distribution of intensity across the area; and wherein a specular reflection that shows a peaked distribution of pixel intensities rather than a clipped, flat distribution is likely to result from a facsimile photograph of a specularity rather than a real specularity) ([0147-0148]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s light-pattern detection to determine whether a face is a real human face by using an intensity distribution, since doing so can effectively reduce the possibility of mistaking a spoof image for a real image (Green; [0149]).
Wang teaches that, by comparing image features of different portions of the face, the system can also determine whether the face is a real human face or a face displayed on a computer screen ([0050]); and Wang further teaches a reflected light pattern signal, specific to the material (the reflection property of a physical entity can often be different from that of a non-physical entity under certain illumination conditions) ([0064]), relative to a background signal (the system can extract, from each captured image, a smaller area containing the to-be-recognized entity so that background entities do not interfere with the entity-recognition process) ([0060]), within the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]). Green teaches wherein a real iris image has a saturated specularity that displays as a flat distribution of intensity across the area; and wherein a specular reflection that shows a peaked distribution of pixel intensities rather than a clipped, flat distribution is likely to result from a facsimile photograph of a specularity rather than a real specularity ([0147-0148]).
However, neither explicitly teaches “extracting material information from the manipulated pattern light image(s) by classifying a material depicted in the manipulated pattern light image(s) based on an intensity distribution of a reflected light pattern signal, specific to the material, relative to a background signal within the manipulated pattern light image(s)”.
Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract); wherein extracting material information (extracting physical properties of the materials) (p. 1; Abstract) from the manipulated pattern light image(s) (from distinctive reflection values from polarized light included in the SDOLP-generated images) (p. 1; Abstract and p. 5; Section C) by classifying a material depicted in the manipulated pattern light image(s) (classifying the material as either a printed paper photo or a genuine face) (p. 1; Abstract and p. 5; Section C) based on an intensity distribution of a reflected light pattern signal (based on the pixel intensity data distribution of the SDOLP images) (p. 5; Section C and Fig. 9), specific to the material (specific to the material, i.e., a genuine face/skin or paper) (p. 5; Section C and Fig. 9), relative to a background signal within the manipulated pattern light image(s) (classifying the materials within the images, without the background reflectance) (p. 4; last paragraph and p. 5; Section C).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang and Green to include classifying the material based on an intensity distribution, since doing so provides promising materials-classification results (Aziz; p. 6, Section VI, 1st paragraph).
Regarding claim 2, Wang teaches wherein the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) includes one or more central processing unit(s) CPU (wherein a processor 1102 exists within the computer system 1100) (Fig. 11; [0090]), wherein the central processing unit CPU is configured to (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done by a processor) (Fig. 11; [0090-0091]) manipulate the pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) and to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) based on the processing of at least one data-driven model trained (a previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]).
Regarding claim 3, Wang teaches wherein the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) includes one or more central processing unit(s) CPU (wherein a processor 1102 exists within the computer system 1100) (Fig. 11; [0090]), wherein the central processing unit CPU is configured to (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done by a processor) (Fig. 11; [0090-0091]) manipulate the pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]), wherein at least one neural network processor (a deep-learning neural network) ([0067] and [0088]) is configured to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) based on the processing of at least one data-driven model trained (a previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]).
Regarding claim 5, Wang teaches wherein the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) includes one or more central processing unit(s) CPU (wherein a processor 1102 exists within the computer system 1100) (Fig. 11; [0090]), wherein the one or more central processing unit(s) CPU are configured to (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done by a processor) (Fig. 11; [0090-0091]) manipulate the pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]), wherein manipulating is executed on one or more central processing unit(s) CPU (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done by one or more processor(s)) (Fig. 11; [0090-0091]), wherein the pattern light image(s) are manipulated by performing an image augmentation technique on the pattern light image (wherein the pattern light images are manipulated by cropping out a reduced image that contains only or mostly the to-be-recognized entity) ([0029] and [0060]).
Regarding claim 6, Wang teaches wherein manipulating includes manipulating the one or more pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) to suppress background texture information from a region of interest (wherein the background can be suppressed by reducing the captured image to a smaller image area that contains only or mostly the to-be-recognized entity) ([0060]), wherein manipulating the one or more pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) to suppress background texture information from a region of interest (wherein the background can be suppressed by reducing the captured image to a smaller image area that contains only or mostly the to-be-recognized entity) ([0060]) includes performing an image augmentation technique on the pattern light image (wherein the image is augmented by cropping out the reduced image area) ([0060]).
Regarding claim 7, Wang teaches a method for authenticating a user (the computing device is for physical-entity recognition and authentication) ([0036]) of a device (computing device) (Abstract and [0036]), wherein the device includes at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) and at least one neural network processor (wherein the system/device can include a deep-learning neural network; which processes the images) ([0060], [0067], [0077], and [0088]), the method comprising:
- providing one or more pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) to the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) and manipulating the one or more pattern light image(s) (cropping the image so that it contains only or mostly the to-be-recognized entity) ([0060]), wherein the manipulation (cropping the image so that it contains only or mostly the to-be-recognized entity) ([0060]) is executed at least in part by the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]);
- extracting material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]);
- authenticating the user based at least on the extracted material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) and an authentication process executed at least in part by the at least one neural network processor (after confirming that the face in the captured images is a real human face, using the deep-learning neural network, the system can authenticate the user using known face-recognition techniques) ([0088-0089]);
and wherein authenticating a user of the device (user authentication of the computing device) ([0036], [0039], and [0042]) comprises biometric authentication sufficient to reduce risk of spoofing (wherein the system reduces the risk of spoofing by determining if the face includes human skin or a computer screen based on different reflection properties) ([0042-0043], [0050], and [0088]), wherein the biometric authentication comprises distinguishing between human skin and a non-skin material (wherein the authentication system distinguishes between human skin and a computer screen) ([0050] and [0088]).
Although Wang does not explicitly state that the deep-learning neural network is used for authentication, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that, since the deep-learning neural network of Wang is used to determine whether there is a real human face or a potential screen (Wang; [0088]) and the authentication then takes place using known face-recognition techniques (Wang; [0089]), the deep-learning neural network is in turn part of the authentication of the user.
Wang teaches that the illumination conditions include wavelength, intensity, and direction of the light ([0050]). While capturing images of the face (which can be the face of a real human or a face displayed on a computer screen), the system can adjust the illumination conditions by shining light of a particular wavelength and intensity on the face ([0050]). Because human skin and a computer screen can have different reflection properties, their images under the different illumination conditions can appear different ([0050]). Moreover, different portions of a real human's face reflect light differently (e.g., some facial features may reflect light better than others), whereas a computer screen reflects light more uniformly ([0050]). By comparing image features of different portions of the face, the system can also determine whether the face is a real human face or a face displayed on a computer screen ([0050]). However, Wang does not explicitly state “based on an intensity distribution of the manipulated pattern light image(s)”.
Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract); and wherein the system can incorporate additional anti-spoofing features in combination with spectral methods, such as detecting a spoof iris image based on an intensity distribution of the manipulated pattern light image(s) (wherein a real iris image has a saturated specularity that displays as a flat distribution of intensity across the area; and wherein a specular reflection that shows a peaked distribution of pixel intensities rather than a clipped, flat distribution is likely to result from a facsimile photograph of a specularity rather than a real specularity) ([0147-0148]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s light-pattern detection to determine whether a face is a real human face by using an intensity distribution, since doing so can effectively reduce the possibility of mistaking a spoof image for a real image (Green; [0149]).
Wang teaches that, by comparing image features of different portions of the face, the system can also determine whether the face is a real human face or a face displayed on a computer screen ([0050]); and Wang further teaches a reflected light pattern signal, specific to the material (the reflection property of a physical entity can often be different from that of a non-physical entity under certain illumination conditions) ([0064]), relative to a background signal (the system can extract, from each captured image, a smaller area containing the to-be-recognized entity so that background entities do not interfere with the entity-recognition process) ([0060]), within the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]). Green teaches wherein a real iris image has a saturated specularity that displays as a flat distribution of intensity across the area; and wherein a specular reflection that shows a peaked distribution of pixel intensities rather than a clipped, flat distribution is likely to result from a facsimile photograph of a specularity rather than a real specularity ([0147-0148]).
However, neither explicitly teaches “extracting material information from the manipulated pattern light image(s) by classifying a material depicted in the manipulated pattern light image(s) based on an intensity distribution of a reflected light pattern signal, specific to the material, relative to a background signal within the manipulated pattern light image(s)”.
Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract); wherein extracting material information (extracting physical properties of the materials) (p. 1; Abstract) from the manipulated pattern light image(s) (from distinctive reflection values from polarized light included in the SDOLP-generated images) (p. 1; Abstract and p. 5; Section C) by classifying a material depicted in the manipulated pattern light image(s) (classifying the material as either a printed paper photo or a genuine face) (p. 1; Abstract and p. 5; Section C) based on an intensity distribution of a reflected light pattern signal (based on the pixel intensity data distribution of the SDOLP images) (p. 5; Section C and Fig. 9), specific to the material (specific to the material, i.e., a genuine face/skin or paper) (p. 5; Section C and Fig. 9), relative to a background signal within the manipulated pattern light image(s) (classifying the materials within the images, without the background reflectance) (p. 4; last paragraph and p. 5; Section C).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang and Green to include classifying the material based on an intensity distribution, since doing so provides promising materials-classification results (Aziz; p. 6, Section VI, 1st paragraph).
Regarding claim 8, Wang teaches wherein the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) is configured to manipulate the pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) and to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) based on the processing of at least one data-driven model trained (a previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]).
Regarding claim 9, Wang teaches wherein the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) includes one or more central processing unit(s) CPU (wherein a processor 1102 exists within the computer system 1100) (Fig. 11; [0090]), wherein pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) are provided to the central processing unit CPU configured to (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done by a processor) (Fig. 11; [0090-0091]) manipulate the pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) and configured to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) based on the processing of at least one data-driven model trained (a previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]).
Regarding claim 10, Wang teaches wherein the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) includes one or more central processing unit(s) CPU (wherein a processor 1102 exists within the computer system 1100) (Fig. 11; [0090]), wherein pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) are provided to the central processing unit CPU configured to (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done by a processor) (Fig. 11; [0090-0091]) manipulate the pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]), wherein at least one neural network processor (a deep-learning neural network) ([0067] and [0088]) is configured to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) based on the processing of at least one data-driven model trained (a previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]).
Regarding claim 12, Wang teaches wherein manipulating includes manipulating the one or more pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) to suppress background texture information from a region of interest (wherein the background can be suppressed by reducing the captured image to a smaller image area that contains only or mostly the to-be-recognized entity) ([0060]), wherein manipulating the one or more pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) to suppress background texture information from a region of interest (wherein the background can be suppressed by reducing the captured image to a smaller image area that contains only or mostly the to-be-recognized entity) ([0060]) includes performing an image augmentation technique on the pattern light image (wherein the image is augmented by cropping out the reduced image area) ([0060]).
Regarding claim 13, Wang teaches wherein the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) includes one or more central processing unit(s) CPU (wherein a processor 1102 exists within the computer system 1100) (Fig. 11; [0090]), wherein pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) are provided to the one or more central processing unit(s) CPU configured to (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done based on a processor) (Fig. 11; [0090-0091]) manipulate the pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]), wherein manipulating is executed on one or more central processing unit(s) CPU (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done based on one or more processor(s)) (Fig. 11; [0090-0091]), wherein the pattern light image(s) are manipulated by performing an image augmentation technique on the pattern light image (wherein the pattern light images are manipulated by cropping out a reduced image that contains only or mostly the to-be-recognized entity) ([0029] and [0060]).
Regarding claim 14, Wang teaches wherein the at least one image processor includes one or more central processing unit(s) CPU (wherein a processor 1102 exists within the computer system 1100) (Fig. 11; [0090]), wherein pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) are provided to the one or more central processing unit(s) CPU configured to (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done based on a processor) (Fig. 11; [0090-0091]) manipulate the pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]), wherein manipulating is executed on one or more central processing unit(s) CPU (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done based on one or more processor(s)) (Fig. 11; [0090-0091]), wherein the pattern light image(s) are manipulated by randomizing the image information of or included in the region of interest (wherein the pattern light images are manipulated by cropping out a reduced image that contains only or mostly the to-be-recognized entity) ([0029] and [0060]). The Examiner points out that “randomizing” as stated in Applicant’s Specification can be generating one or more “cut-out(s)”, which is the same as generating one or more “cropped” image(s).
Regarding claim 15, Wang teaches wherein manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) are provided to at least one neural network processor (a deep-learning neural network) ([0067] and [0088]), wherein at least one data-driven model trained (previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) is executed by at least one neural network processor (a deep-learning neural network) ([0067] and [0088]), wherein the at least one neural network processor (a deep-learning neural network) ([0067] and [0088]) is configured to provide material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) based on the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) provided to the at least one neural network processor (a deep-learning neural network) ([0067] and [0088]).
Regarding claim 18, Wang teaches wherein manipulating pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) includes generating partial image(s) (generating an image that is a smaller image area) ([0060]) with at least part of one or more pattern feature(s) (generating an image that is a smaller image area including only or mostly the to-be-recognized entity) ([0060]).
Regarding claim 19, Wang teaches wherein manipulating pattern light image(s) (using the cropped detected human face from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) includes generating a segmented image (generating a segmented/cropped image) ([0060]) including the region of interest per pattern light image (using the cropped detected human face from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) and generating per segmented image partial image(s) with at least part of one or more pattern feature(s) (the partial image being the segmented image that is a smaller area including only or mostly the to-be-recognized entity; the image being a pattern light image) ([0029], [0060], and [0088]).
Regarding claim 21, Wang teaches wherein the data-driven model trained (previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information from manipulated pattern light image(s) (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) generates a material classifier (machine-learning classifier) ([0088]) discriminating between one or more material types (discriminating between a face of a real human or one that is a face displayed on a computer screen) ([0088]).
Regarding claim 23, Wang teaches wherein a pixel location information for the region of interest, a binary skin classifier generated by the data-driven model trained to extract material information from manipulated pattern light image(s) and/or a material classifier (machine-learning classifier) ([0088]) generated by the data-driven model trained (previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information from manipulated pattern light image(s) (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) is used for authenticating the user (after confirming that the face in the captured images is a real human face, using the deep-learning neural network, the system can authenticate the user using known face-recognition techniques) ([0088-0089]).
Although Wang does not explicitly state that the deep-learning neural network is used for authentication, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that, since the deep-learning neural network of Wang is used to determine whether there is a real human face or a potential screen (Wang; [0088]) and the authentication then takes place using known face-recognition techniques (Wang; [0089]), the deep-learning neural network is in turn part of the authentication of the user.
Regarding claim 24, Wang teaches wherein authenticating the user includes validation of the one or more image(s) captured for authentication based on the extracted material information (wherein based on the extracted features the system determines first, as a validation, that the entity within the image is a real human face) ([0088]), wherein the authentication process is triggered upon successful validation (after confirming that the face in the captured images is a real human face, the system can authenticate the user) ([0089]).
Claim(s) 4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., US 2019/0102608 A1 (Wang), Green et al., US 2017/0337440 A1 (Green), Aziz et al., “Face anti-spoofing countermeasure: Efficient 2D materials classification using polarization imaging” (Aziz), and further in view of Fishel et al., US 2019/0340014 A1 (Fishel).
Regarding claim 4, Wang teaches wherein the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) includes one or more central processing unit(s) CPU (wherein a processor 1102 exists within the computer system 1100) (Fig. 11; [0090]), wherein the central processing unit CPU is configured to (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done based on a processor) (Fig. 11; [0090-0091]) execute at least one data-driven model trained (previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) by at least one neural network processor (a deep-learning neural network) ([0067] and [0088]) and to provide material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) based on the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) provided to the at least one neural network processor (a deep-learning neural network) ([0067] and [0088]).
Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); and wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract). Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in achieving the target of detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract).
However, none of them explicitly teaches to generate and/or provide a task list.
Fishel teaches managing tasks that, when executed by a neural processor circuit, instantiate a neural network (Abstract); and wherein the CPU generates a task list or task descriptors of tasks that, when executed by the neural processor circuit, instantiate a neural network ([0104] and [0116]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior art references to include a task list since it allows the system to put tasks with high priority first (Fishel; [0116]), which also allows the system to perform the operations in a fast and power-efficient manner while relieving the CPU of resource-intensive operations associated with neural network operations (Fishel; [0043]).
Regarding claim 11, Wang teaches wherein the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) includes one or more central processing unit(s) CPU (wherein a processor 1102 exists within the computer system 1100) (Fig. 11; [0090]), wherein the central processing unit CPU is configured to (it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that all the processing is done based on a processor) (Fig. 11; [0090-0091]) execute at least one data-driven model trained (previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) by at least one neural network processor (a deep-learning neural network) ([0067] and [0088]) and to provide material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) based on the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]) provided to the at least one neural network processor (a deep-learning neural network) ([0067] and [0088]).
Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); and wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract). Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in achieving the target of detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract).
However, none of them explicitly teaches to generate and/or provide a task list.
Fishel teaches managing tasks that, when executed by a neural processor circuit, instantiate a neural network (Abstract); and wherein the CPU generates a task list or task descriptors of tasks that, when executed by the neural processor circuit, instantiate a neural network ([0104] and [0116]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior art references to include a task list since it allows the system to put tasks with high priority first (Fishel; [0116]), which also allows the system to perform the operations in a fast and power-efficient manner while relieving the CPU of resource-intensive operations associated with neural network operations (Fishel; [0043]).
Claim(s) 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., US 2019/0102608 A1 (Wang), Green et al., US 2017/0337440 A1 (Green), Aziz et al., “Face anti-spoofing countermeasure: Efficient 2D materials classification using polarization imaging” (Aziz), and further in view of Jakubiak et al., US 2020/0218915 A1 (Jakubiak).
Regarding claim 16, Wang teaches at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) and that authentication takes place using known face-recognition techniques (Wang; [0089]). Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); and wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract). Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in achieving the target of detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract).
However, none of them explicitly teaches “one or more flood light images”, “one or more image signal processor(s) ISP”, for preparing a biometric authentication.
Jakubiak teaches devices and methods for user authentication on an electronic device (Abstract); wherein one or more flood light image(s) (obtain a facial image through a camera at the time of illumination of the flood lighting device) ([0056]) are provided to the at least one image processor (wherein the processor obtains a facial image through a camera at the time of illumination of the flood lighting device) ([0056]), wherein the at least one image processor (the processor) ([0056]) includes one or more image signal processor(s) ISP (wherein the processor may include one or more image signal processor (ISP)) ([0054]), wherein the one or more flood light image(s) are provided to the one or more image signal processor(s) ISP for preparing a biometric authentication (wherein the flood light image is provided to the processor, which includes an ISP, for preparing for biometric scanning) (Abstract and [0056]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include using flood lighting for preparing for biometric authentication since it can increase the accuracy of biometrics on an electronic device (Jakubiak; [0011]).
Regarding claim 17, Wang teaches wherein the image(s) are provided to the at least one neural network processor (a deep-learning neural network) ([0067] and [0088]) configured to execute at least one data-driven model trained (previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to generate biometric authentication information (after confirming that the face in the captured images is a real human face, using the deep-learning neural network, the system can authenticate the user using known face-recognition techniques) ([0088-0089]). Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); and wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract). Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in achieving the target of detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract).
However, none of them explicitly teaches “prepared flood light image(s)”.
Jakubiak teaches devices and methods for user authentication on an electronic device (Abstract); wherein one or more flood light image(s) (obtain a facial image through a camera at the time of illumination of the flood lighting device) ([0056]) are provided to the at least one image processor (wherein the processor obtains a facial image through a camera at the time of illumination of the flood lighting device) ([0056]), wherein the one or more flood light image(s) are provided for preparing a biometric authentication (wherein the flood light image is provided to the processor, which includes an ISP, for preparing for biometric scanning; and performing authentication on the user based on the biometric information) (Abstract and [0056]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include using flood lighting for preparing for biometric authentication since it can increase the accuracy of biometrics on an electronic device (Jakubiak; [0011]).
Claim(s) 20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., US 2019/0102608 A1 (Wang), Green et al., US 2017/0337440 A1 (Green), Aziz et al., “Face anti-spoofing countermeasure: Efficient 2D materials classification using polarization imaging” (Aziz), and further in view of Mannerheim et al., US 2007/0230743 A1 (Mannerheim).
Regarding claim 20, Wang teaches the manipulated pattern light image(s) (using the cropped detected human face from the captured images; the images being pattern light images) ([0029], [0060], and [0088]). Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); and wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract). Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in achieving the target of detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract).
However, none of them explicitly teaches wherein the image(s) “are associated with a pixel location information for the region of interest”.
Mannerheim teaches an apparatus for tracking a listener’s head position (Abstract); and wherein the image(s) are associated with a pixel location information for the region of interest (a skin region detection unit detecting a skin region) ([0013] and [0039]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include detecting a pixel location information for a region of interest, such as a skin region, since it reduces the amount of computation by efficiently using computing resources (Mannerheim; [0010] and [0039]).
Regarding claim 22, Wang teaches wherein the data-driven model trained (previously trained machine-learning model, e.g., a support vector machine or a deep-learning neural network) ([0067] and [0088]) to extract material information from manipulated pattern light image(s) (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) generates a classifier (machine-learning classifier) ([0088]) discriminating between skin and no-skin (discriminating between a face of a real human or one that is a face displayed on a computer screen) ([0088]). Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); and wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract). Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in achieving the target of detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract).
However, none of them explicitly teaches “a binary skin classifier” discriminating between skin and no-skin.
Mannerheim teaches an apparatus for tracking a listener’s head position (Abstract); and a binary skin classifier (Gaussian skin classifier) ([0009] and [0013]) (including a binary image generation unit generating a binary image of the skin region) ([0009] and [0013]) discriminating between skin and no-skin (discriminating between a skin region and non-skin) (Fig. 8; [0009], [0013], [0040], and [0055]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include detecting skin for a region of interest since it reduces the amount of computation by efficiently using computing resources (Mannerheim; [0010] and [0039]).
Claim(s) 25-27, 29, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., US 2019/0102608 A1 (Wang), Green et al., US 2017/0337440 A1 (Green), Aziz et al., “Face anti-spoofing countermeasure: Efficient 2D materials classification using polarization imaging” (Aziz), and further in view of Bennett et al., US 2015/0339471 A1 (Bennett).
Regarding claim 25, Wang teaches at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) and that authentication takes place using known face-recognition techniques (Wang; [0089]). Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); and wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract). Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in achieving the target of detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract).
However, none of them explicitly teaches “wherein the operation requiring authentication includes unlocking the device and/or one or more components of the device and/or one or more functionalities or operations triggered or executed by the device”.
Bennett teaches a method for unlocking a device, comprising projecting, via a light signal projection unit, a plurality of light signals sequentially on a three-dimensional target object, capturing, via an image capture unit (Abstract); and wherein the operation requiring authentication includes unlocking the device and/or one or more components of the device and/or one or more functionalities or operations triggered or executed by the device (wherein the device can employ biometric measures for user authentication and/or authorization, such as facial images and/or other biometric scans, for unlocking the device) (Fig. 1; [0014], [0016-0017], and [0030]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include unlocking a device based on the authentication for improved security (Bennett; [0015]).
Regarding claim 26, Wang teaches an apparatus (computing device) (Abstract and [0036]) for authenticating a user of a device (the computing device is for physical-entity recognition and authentication) ([0036]), wherein the device includes at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) and at least one neural network processor (wherein the system/device can include a deep-learning neural network; which processes the images) ([0060], [0067], [0077], and [0088]), the apparatus comprising:
- at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) configured to provide one or more pattern light image(s) (the computing device can also configure a light projector to project light of a predetermined pattern) ([0029]) and manipulate the one or more pattern light image(s) (cropping the image so that it contains only or mostly the to-be-recognized entity) ([0060]);
- material extractor configured to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]);
- at least one neural network processor (wherein the system/device can include a deep-learning neural network; which processes the images) ([0060], [0067], [0077], and [0088]) configured to authenticate the user based at least on the extracted material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) and at least in part execution of an authentication process (after confirming that the face in the captured images is a real human face, using the deep-learning neural network, the system can authenticate the user using known face-recognition techniques) ([0088-0089]);
and wherein authenticating a user of the device (user authentication of the computing device) ([0036], [0039], and [0042]) comprises biometric authentication sufficient to reduce risk of spoofing (wherein the system reduces the risk of spoofing by determining if the face includes human skin or a computer screen based on different reflection properties) ([0042-0043], [0050], and [0088]), wherein the biometric authentication comprises distinguishing between human skin and a non-skin material (wherein the authentication system distinguishes between human skin and a computer screen) ([0050] and [0088]).
Although Wang does not explicitly state that the deep-learning neural network is used for authentication, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that, since the deep-learning neural network of Wang is used to determine whether there is a real human face or a potential screen (Wang; [0088]) and the authentication then takes place using known face-recognition techniques (Wang; [0089]), the deep-learning neural network is in turn part of the authentication of the user.
Wang teaches that the illumination conditions include wavelength, intensity, and direction of the light ([0050]); wherein while capturing images of the face (which can be the face of real human or a face displayed on a computer screen), the system can adjust the illumination conditions by shining light of a particular wavelength and intensity on the face ([0050]); wherein because the human skin and a computer screen can have different reflection properties, their images under the different illumination conditions can appear different ([0050]); wherein moreover, different portions of a real human's face reflect light differently (e.g., some facial features may reflect light better than others), whereas a computer screen reflects light more uniformly ([0050]); and wherein by comparing image features of different portions of the face, the system can also determine whether the face is a real human face or a face displayed on a computer screen ([0050]). However, Wang does not explicitly state “based on an intensity distribution of the manipulated pattern light image(s)”.
Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract); and wherein the system can incorporate additional features of anti-spoofing in combination with spectral methods, such as detecting a spoof iris image with the system based on an intensity distribution of the manipulated pattern light image(s) (wherein a real iris image has a saturated specularity that displays as a flat distribution of intensity across the area; and wherein a specular reflection that shows a peaked distribution of pixel intensities rather than a clipped, flat distribution is likely to result from a facsimile photograph of a specularity rather than a real specularity) ([0147-0148]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s light pattern detection to determine if a face is a real human face or not by using an intensity distribution since it can effectively reduce the possibility of mistaking a spoof image for a real image (Green; [0149]).
Wang teaches that, by comparing image features of different portions of the face, the system can also determine whether the face is a real human face or a face displayed on a computer screen ([0050]); and wherein a reflected light pattern signal, specific to the material (the reflection property of a physical entity can often be different from that of a non-physical entity under certain illumination conditions) ([0064]), relative to a background signal (the system can extract, from each captured image, a smaller area containing the to-be-recognized entity so that background entities do not interfere with the entity-recognition process) ([0060]) within the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]). Green teaches wherein a real iris image has a saturated specularity that displays as a flat distribution of intensity across the area; and wherein a specular reflection that shows a peaked distribution of pixel intensities rather than a clipped, flat distribution is likely to result from a facsimile photograph of a specularity rather than a real specularity ([0147-0148]).
However, neither explicitly teaches “extracting material information from the manipulated pattern light image(s) by classifying a material depicted in the manipulated pattern light image(s) based on an intensity distribution of a reflected light pattern signal, specific to the material, relative to a background signal within the manipulated pattern light image(s)”.
Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in achieving the target of detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract); wherein extracting material information (extracting physical properties of the materials) (p. 1; Abstract) from the manipulated pattern light image(s) (from distinctive reflection values from polarized light included in the SDOLP generated images) (p. 1; Abstract and p. 5; Section C.) by classifying a material depicted in the manipulated pattern light image(s) (classifying the material as either a printed paper photo or a genuine face) (p. 1; Abstract and p. 5; Section C.) based on an intensity distribution of a reflected light pattern signal (based on the pixel intensity data distribution of the SDOLP images) (p. 5; Section C. and Fig. 9), specific to the material (specific to the material; i.e., a genuine face/skin or paper) (p. 5; Section C. and Fig. 9), relative to a background signal within the manipulated pattern light image(s) (classifying the materials within the images; without the background reflectance) (p. 4; last paragraph and p. 5; Section C.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include classifying the material based on an intensity distribution since it provides promising materials classification results (Aziz; p. 6, Section VI., 1st paragraph).
However, none of them explicitly teaches “wherein the authentication process is optionally for authorizing the user to perform at least one operation on, in relation to and/or triggered by the device that requires authentication”.
Bennett teaches a method for unlocking a device, comprising projecting, via a light signal projection unit, a plurality of light signals sequentially on a three-dimensional target object, and capturing, via an image capture unit (Abstract); and wherein the authentication process is optionally for authorizing the user to perform at least one operation triggered or executed by the device that requires authentication (the operation requiring authentication being unlocking the device; wherein the device can employ biometric measures for user authentications and/or authorization, such as facial images, and/or other biometric scans, for unlocking the device) (Fig. 1; [0014], [0016-0017], and [0030]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include unlocking a device based on the authentication for improved security (Bennett; [0015]).
Regarding claim 27, Bennett teaches comprising a trigger interface configured, in response to receiving an unlock request (response to receiving an unlock request) ([0005] and [0016]), to trigger capture of one or more pattern light image(s) of the user (employ a camera to capture a sequence of images in synchronization with the sequence of projected light signals) ([0016]) using a camera located on the device (image capture unit 120 in electronic device 150) (Fig. 1; [0017]), wherein the one or more pattern light image(s) comprise an image of the user under illumination (employ a camera to capture a sequence of images in synchronization with the sequence of projected light signals) ([0016]) with at least one infrared pattern illuminator located on or of the device (wherein light signal projection unit 110 is in the electronic device 150 and can include an infrared (IR) emitter) (Fig. 1; [0017-0018]).
Regarding claim 29, Bennett teaches wherein the user is authorized to perform at least one operation on, in relation to and/or triggered by the device if authenticated (if the user is authorized the device is unlocked so it can be used) ([0030]).
Regarding claim 30, Wang teaches wherein the at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) or the at least one neural network processor (wherein the system/device can include a deep-learning neural network; which processes the images) ([0060], [0067], [0077], and [0088]) is configured to extract material information (based at least on the extracted features; such as to determine if it’s a real human face or a face displayed on a computer screen) ([0088]) from the manipulated pattern light image(s) (using the cropped detected human faces from the captured images; the images being pattern light images) ([0029], [0060], and [0088]).
Claim(s) 28 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., US 2019/0102608 A1 (Wang), Green et al., US 2017/0337440 A1 (Green), Aziz et al., “Face anti-spoofing countermeasure: Efficient 2D materials classification using polarization imaging” (Aziz), Bennett et al., US 2015/0339471 A1 (Bennett), and further in view of Jakubiak et al., US 2020/0218915 A1 (Jakubiak).
Regarding claim 28, Wang teaches at least one image processor (wherein the computing device/system includes a processor 1102) (Fig. 11; [0090]) (as well as an image-processing module 1128) (Fig. 11; [0091]) and that authentication takes place using known face-recognition techniques (Wang; [0089]). Green teaches embodiments that are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module (Abstract); and wherein the analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing (Abstract). Aziz teaches that, for the purpose of face anti-spoofing analysis, skin structure is a key factor in achieving the target of detecting spoofing or a valid user of any biometric system in order to gain access (p. 1; Abstract). Bennett teaches comprising a trigger interface configured, in response to receiving an unlock request (response to receiving an unlock request) ([0005] and [0016]), to trigger capture of one or more light image(s) of the user (employ a camera to capture a sequence of images in synchronization with the sequence of projected light signals) ([0016]) using a camera located on the device (image capture unit 120 in electronic device 150) (Fig. 1; [0017]), wherein the one or more light image(s) comprise an image of the user under illumination (employ a camera to capture a sequence of images in synchronization with the sequence of projected light signals) ([0016]) with at least one infrared pattern illuminator located on or of the device (wherein light signal projection unit 110 is in the electronic device 150 and can include an infrared (IR) emitter) (Fig. 1; [0017-0018]).
However, none of them explicitly teaches using “flood light image(s)”.
Jakubiak teaches devices and methods for user authentication on an electronic device (Abstract); wherein one or more flood light image(s) (obtain a facial image through a camera at the time of illumination of the flood lighting device) ([0056]) are provided to the at least one image processor (wherein the processor obtains a facial image through a camera at the time of illumination of the flood lighting device) ([0056]), wherein the one or more flood light image(s) are provided to the one or more image signal processor(s) (ISP) for preparing a biometric authentication (wherein the flood light image is provided to the processor, which includes an ISP, for preparing for biometric scanning; and perform authentication on the user based on the biometric information) (Abstract and [0056]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include using flood lighting for preparing for biometric authentication since it can increase the accuracy of biometrics on an electronic device (Jakubiak; [0011]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ando et al., US 2021/0049252 A1 teaches: photography of the user by using the surface reflection components in the present embodiment. In the identifying device in the present embodiment, the surface reflection components can be detected by the time-resolved imaging or the space-resolved imaging, as described above ([0111]). This makes it possible to more clearly detect texture of the skin surface of the user ([0111]). Examples of the texture include wrinkles or minute pits and bumps ([0111]). Use of a result obtained by verifying the information obtained from the surface reflection components against the information included in the biometric data in the memory and indicating the texture of the skin surface of the user makes it possible to enhance the authentication accuracy ([0111]). Li et al., US 2022/0004731 A1 teaches: determining a light intensity distribution of a reflected light reflected by the surface of the biometric sensor in touch with the skin of the user based on signals from the plurality of photosensors; and determining the biometric information based on the light intensity distribution (Abstract).
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J VANCHY JR whose telephone number is (571)270-1193. The examiner can normally be reached Monday - Friday 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL J VANCHY JR/Primary Examiner, Art Unit 2666 Michael.Vanchy@uspto.gov