DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the communication filed on 2/17/2026. Claims 10-15 have been added. Claims 1-15 are pending in this application.
Response to Arguments
Applicant’s arguments with respect to claims 1 and 9 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 7-9 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al (US20210232808) in view of Appia (US9769461).
Regarding claim 1, Nguyen teaches a face authentication apparatus (fig. 1), comprising:
at least one processor (para. [0005]); and
at least one memory including at least one program that, when executed by the at least one processor, causes the at least one processor to (para. [0005]):
receive image data (106 in fig.1) including a captured image of a subject from a camera (104 in fig. 1; para. [0039], The digital camera 104 may be utilized to capture an image (e.g., tester image 110) of the tester 112);
generate, based on the image data, projection image data of a projection image to be projected onto the subject (118 in fig. 1; para. [0040], In some embodiments, a failure detection engine 116 (e.g., a component of the failure detection device 106) may utilize the tester image 112 and the target image 114 to generate data representing a light pattern 118);
transmit the projection image data to a projector (108 in fig. 1) configured to project the projection image onto the subject (para. [0040], In some embodiments, the failure detection device 106 may provide the data representing the light pattern 118 to the projection device 108 in order to project light pattern toward the tester 112 (e.g., onto the face of tester 112). The projection device may be any suitable device configured to project one or more rays of light according to input provided by at least the failure detection device 106);
receive projection subject image data from the camera (106 in fig. 1 would require a receiver to receive input image 120 from camera 104; para. [0041]), the projection subject image data being of the subject onto which the projection image is projected (para. [0041], Input image 120 may be an image depicting the tester 112 with light corresponding to the light pattern 118 as projected by the projection device 108); and
perform face authentication processing on the subject based on the projection subject image data (102 in fig. 1; para. [0045], Data representing the light pattern 118 may be utilized to test whether the facial recognition system 102 misclassifies the input image 120 (e.g., an image of the light pattern 118 overlaid over the face of the tester 112) as being an image of a person different from the tester 112).
Nguyen fails to teach wherein the at least one processor detects, from the image data, a factor that reduces an accuracy of face authentication, and generates the projection image data in response to the factor such that the factor is reduced in the projection subject image data used for the face authentication processing.
However, Appia teaches detecting a factor that reduces an accuracy of an image (col. 3 lines 4-21), and generating projection image data in response to the factor such that the factor is reduced in projection subject image data (col. 2 lines 17-21 and col. 3 lines 4-17). It would have been obvious to use these steps in the facial authentication process of Nguyen.
Therefore, taking the combined teachings of Nguyen and Appia as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Appia into the apparatus of Nguyen. The motivation to combine Appia and Nguyen would be to provide depth map optimization using structured light (col. 1 lines 50-52 of Appia).
Regarding claim 7, the modified apparatus of Nguyen teaches a face authentication apparatus wherein:
the at least one processor generates, based on the image data and in response to the factor (col. 3 lines 18-23 of Appia), N different pieces of the projection image data, with N being an integer of at least one (118 in fig. 1 of Nguyen; para. [0040] of Nguyen); and
after N pieces of the projection subject image data respectively obtained by projecting images of the N different pieces of the projection image data onto the subject are received from the camera (120 in fig. 1 of Nguyen; para. [0041] of Nguyen), the at least one processor performs the face authentication processing on the subject based on the N pieces of the projection subject image data (para. [0042] of Nguyen, The failure detection engine 116 may provide the input image 120 to the facial recognition system 102).
Regarding claim 8, the modified apparatus of Nguyen teaches a face authentication apparatus wherein:
the at least one processor generates, based on the image data and in response to the factor (col. 3 lines 18-23 of Appia), N different pieces of the projection image data, with N being an integer of at least one (118 in fig. 1 of Nguyen; para. [0040] of Nguyen); and
every time an image of one piece of the projection image data is projected onto the subject, the projection subject image data of the subject is received from the camera (120 in fig. 1 of Nguyen; para. [0041] of Nguyen), and the at least one processor performs the face authentication processing on the subject (para. [0042] of Nguyen, The failure detection engine 116 may provide the input image 120 to the facial recognition system 102).
Regarding claim 9, the claim recites similar subject matter as claim 1 and is rejected for the same reasons as stated above.
Regarding claim 15, the modified apparatus of Nguyen teaches a face authentication apparatus, wherein the at least one processor detects, from the image data, a plurality of factors that each reduce an accuracy of face authentication (col. 3 lines 4-21 of Appia), and generates the projection image data in response to the plurality of factors such that the plurality of factors is each reduced in the projection subject image data (col. 2 lines 17-21 and col. 3 lines 4-17 of Appia) for the face authentication processing (para. [0045] of Nguyen).
Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al (US20210232808) and Appia (US9769461) in view of Watanabe (US20130222642).
Regarding claim 2, the modified apparatus of Nguyen fails to teach a face authentication apparatus wherein the at least one processor generates, based on the image data and in response to the factor, the projection image data for correcting brightness or a color of an eye area of the subject in response to the subject wearing glasses.
However, Watanabe teaches generating projection image data for correcting brightness or a color of an eye area of a subject in response to the subject wearing glasses (fig. 7; para. [0054]). It would have been obvious to generate the projection image data in response to the scene characteristics as taught by Appia (col. 3 lines 18-23 of Appia).
Therefore, taking the combined teachings of Nguyen and Appia with Watanabe as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Watanabe into the apparatus of Nguyen and Appia. The motivation to combine Watanabe, Appia and Nguyen would be to highly accurately obtain eye-related information even if a user is wearing eyeglasses (para. [0007] of Watanabe).
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al (US20210232808) and Appia (US9769461) in view of Choukroun et al (US20180005448).
Regarding claim 3, the modified apparatus of Nguyen fails to teach a face authentication apparatus wherein the at least one processor generates, based on the image data and in response to the factor, the projection image data for correcting brightness or a color of a mouth area of the subject in response to the subject wearing a mask.
However, Choukroun teaches generating projection image data for correcting brightness or a color of a mouth area of a subject wearing a mask (para. [0028], [0408]). It would have been obvious to generate the projection image data in response to the scene characteristics as taught by Appia (col. 3 lines 18-23 of Appia).
Therefore, taking the combined teachings of Nguyen and Appia with Choukroun as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Choukroun into the apparatus of Nguyen and Appia. The motivation to combine Choukroun, Appia and Nguyen would be to allow for shadows cast by the object to be rendered invisible (para. [0035] of Choukroun).
Claim(s) 4 and 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al (US20210232808) and Appia (US9769461) in view of Bohl et al (US20180025244).
Regarding claim 4, the modified apparatus of Nguyen fails to teach a face authentication apparatus wherein the at least one processor generates, based on the image data and in response to the factor, the projection image data so that brightness of an entire face of the subject becomes uniform.
However, Bohl teaches generating projection image data so that brightness of an entire face of a subject becomes uniform (para. [0142]). It would have been obvious to generate the projection image data in response to the scene characteristics as taught by Appia (col. 3 lines 18-23 of Appia).
Therefore, taking the combined teachings of Nguyen and Appia with Bohl as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Bohl into the apparatus of Nguyen and Appia. The motivation to combine Bohl, Appia and Nguyen would be to ensure a secure transaction (para. [0002] of Bohl).
Regarding claim 5, the modified apparatus of Nguyen fails to teach a face authentication apparatus wherein the at least one processor generates, based on the image data and in response to the factor, the projection image data so that a face color or a hair color of the subject becomes a predetermined color.
However, Bohl teaches generating projection image data so that a face color or a hair color of a subject becomes a predetermined color (para. [0142]). It would have been obvious to generate the projection image data in response to the scene characteristics as taught by Appia (col. 3 lines 18-23 of Appia).
Therefore, taking the combined teachings of Nguyen and Appia with Bohl as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Bohl into the apparatus of Nguyen and Appia. The motivation to combine Bohl, Appia and Nguyen would be to ensure a secure transaction (para. [0002] of Bohl).
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al (US20210232808) and Appia (US9769461) in view of Benini et al (US20180181794).
Regarding claim 6, the modified apparatus of Nguyen fails to teach a face authentication apparatus wherein the at least one processor generates the projection image data based on background image data of a background behind the subject, the background image data being included in the image data.
However, Benini teaches generating projection image data (para. [0029]) based on background image data of a background behind the subject, the background image data being included in the image data (para. [0078], [0129]). It would have been obvious to generate the projection image data in response to the scene characteristics as taught by Appia (col. 3 lines 18-23 of Appia).
Therefore, taking the combined teachings of Nguyen and Appia with Benini as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Benini into the apparatus of Nguyen and Appia. The motivation to combine Benini, Appia and Nguyen would be to analyze an input image and obtain useful spoofing knowledge to differentiate live faces and spoof faces in earlier stages of the enrollment or face recognition process (para. [0028] of Benini).
Claim(s) 10-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al (US20210232808) and Appia (US9769461) in view of Kurtz et al (US8290208).
Regarding claim 10, the modified apparatus of Nguyen fails to teach a face authentication apparatus wherein the at least one processor generates, based on the image data and in response to the factor, the projection image data to not be projected onto a predetermined area of the subject.
However, Kurtz teaches generating projection image data to not be projected onto a predetermined area of the subject (col. 5 lines 1-16). It would have been obvious to generate the projection image data in response to the scene characteristics as taught by Appia (col. 3 lines 18-23 of Appia).
Therefore, taking the combined teachings of Nguyen and Appia with Kurtz as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Kurtz into the apparatus of Nguyen and Appia. The motivation to combine Kurtz, Appia and Nguyen would be to improve eye safety (col. 5 lines 17-20 of Kurtz).
Regarding claim 11, the modified apparatus of Nguyen teaches a face authentication apparatus wherein the predetermined area includes a portion of a face of the subject (245 in fig. 5A of Kurtz).
Regarding claim 12, the modified apparatus of Nguyen teaches a face authentication apparatus wherein the portion of the face of the subject includes an eye-area (245 in fig. 5A of Kurtz).
Claim(s) 13-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al (US20210232808) and Appia (US9769461) in view of Cooper et al (US10277842).
Regarding claim 13, the modified apparatus of Nguyen fails to teach a face authentication apparatus wherein the projection image data defines a two-dimensional distribution of at least one of luminance or color of the projection image.
However, Cooper teaches wherein projection image data defines a two-dimensional distribution of at least one of luminance or color of the projection image (col. 5 lines 25-54 and col. 6 lines 14-23).
Therefore, taking the combined teachings of Nguyen and Appia with Cooper as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of Cooper into the apparatus of Nguyen and Appia. The motivation to combine Cooper, Appia and Nguyen would be to provide a robust and accurate imaging system (col. 1 lines 13-18 of Cooper).
Regarding claim 14, the modified apparatus of Nguyen teaches a face authentication apparatus, wherein the at least one processor generates the projection image data by modifying the two-dimensional distribution in response to the factor (col. 6 lines 24-40 of Cooper).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEON VIET Q NGUYEN whose telephone number is (571)270-1185. The examiner can normally be reached Mon-Fri 11AM-7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LEON VIET Q NGUYEN/Primary Examiner, Art Unit 2663