Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This action is responsive to the original application filed on 16 October 2024.
2. Claims 1-19 are currently pending; claims 1, 10, and 19 are independent claims.
Information Disclosure Statement
3. The information disclosure statement (IDS) submitted is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Priority
4. The claimed priority date has been noted.
Drawings
5. The drawings filed on 16 October 2024 are accepted by the examiner.
Claim Rejections - 35 USC § 112
6. The following is a quotation of 35 U.S.C. 112(f): “An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.”
Claims 1, 10-11, 14 and 19 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.
Claim 1: The words "unit that" are preceded by the words "when acquiring," in an attempt to use a "unit that" clause to recite a claim element as a means for performing a specified function. However, since no function is specified by the words preceding "unit that," it is impossible to determine the equivalents of the element, as required by 35 U.S.C. 112(f). See Ex parte Klumb, 159 USPQ 694 (Bd. App. 1967). The examiner could not determine the corresponding structure and algorithm for each "unit that" limitation in the claims, as is required when applicant invokes 112(f).
Claims 10-11, 14 and 19: The words "step of" are preceded by the words "acquiring" and "outputting," in an attempt to use a "step of" clause to recite a claim element as a step for performing a specified function. However, since no function is specified by the words preceding "step of," it is impossible to determine the equivalents of the element, as required by 35 U.S.C. 112(f). See Ex parte Klumb, 159 USPQ 694 (Bd. App. 1967). The examiner could not determine the corresponding structure and algorithm for each such limitation in the claims, as is required when applicant invokes 112(f).
The examiner notes that applicant can overcome this rejection by one of the following options: 1) pointing out where in applicant's specification the corresponding structure and algorithm for the means-plus-function limitations are disclosed; 2) striking the means-plus-function language from the claims; or 3) canceling the claims.
Claim Rejections - 35 USC § 102
7. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-19 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Samadani et al. (US Publication No. 2018/0307815), hereinafter Samadani.
Regarding claim 1:
Samadani discloses a user biometric authentication system comprising: a user information storage unit that acquires and stores user information and generates facial expression data information (Samadani, Abstract, ¶40-43).
and an authentication unit that, when acquiring authentication request information, acquires authentication facial expression information which is the facial expression data information corresponding to the authentication request information, outputs a preset number of arbitrary face imaging information (Samadani, ¶32, ¶71) including the authentication facial expression information as authentication candidate information, and performs user authentication using the authentication candidate information (Samadani, ¶4, Fig.7, ¶46, ¶119), wherein facial profile may include and/or indicate one or more characteristics of a person's face. For example, the facial profile may indicate one or more features (e.g., feature points, keypoints, corners, feature vectors, facial structure, distances between feature points, facial shape, eye color, skin color, feature size, etc.) that characterize a person's (e.g., subject's, user's, authorized user's, etc.) face. The face recognizer 752 may extract one or more features of a face in the set of images and compare the extracted features to the features of the facial profile.
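For context, the feature-comparison mechanism the cited passage describes (extracting feature vectors from a set of images and comparing them against a stored facial profile) can be sketched as follows. The vector values, distance metric, and threshold are illustrative assumptions, not taken from the reference.

```python
import math

def euclidean_distance(a, b):
    """Distance between two feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches_profile(extracted, profile, threshold=0.5):
    """Compare features extracted from a captured image against the
    features of the enrolled facial profile (threshold is assumed)."""
    return euclidean_distance(extracted, profile) <= threshold

# Hypothetical enrolled feature vector and features from a new capture.
stored_profile = [0.12, 0.87, 0.44, 0.31]
captured = [0.13, 0.85, 0.46, 0.30]
print(matches_profile(captured, stored_profile))
```

A real recognizer would derive these vectors from facial keypoints, distances between feature points, and similar characteristics; the comparison step itself reduces to a distance test of this kind.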
Regarding claim 2:
wherein the user information storage unit comprises: a face imaging information acquisition module that acquires face imaging information, which is image information of a user's face included in the user information; and a facial expression data information processing module that analyzes the face imaging information using a preset algorithm and generates and stores the facial expression data information, which is an analysis result (Samadani, ¶126).
Regarding claim 3:
wherein the face imaging information comprises three-dimensional information including depth information and is acquired with at least two different facial expressions (Samadani, ¶54).
Regarding claim 4:
wherein the facial expression data information is generated by digitizing different values for each different facial expression using depth information of the face imaging information (Samadani, ¶10).
Regarding claim 5:
wherein the authentication unit comprises: an authentication data processing module that acquires the authentication request information and acquires the authentication facial expression information (Samadani, ¶43).
an authentication candidate information output module that outputs the authentication candidate information including the authentication facial expression information (Samadani, ¶71).
and an authentication approval module that acquires authentication attempt information, which is selection information for the authentication candidate information, and when the authentication attempt information is analyzed and the user's authentication face imaging information including a facial expression equal to the authentication facial expression information is selected, determines that an access is normal to perform the user authentication (Samadani, ¶80-81).
Regarding claim 6:
wherein the authentication facial expression information is included in the authentication request information and is information previously selected by the user (Samadani, ¶68-69).
Regarding claim 7:
wherein the authentication candidate information comprises the authentication facial expression information and five pieces of face imaging information other than the authentication facial expression information, and is selected through a preset algorithm using the facial expression data information corresponding to the face imaging information (Samadani, ¶69).
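The candidate-selection scheme recited here (the authentication facial expression image plus five other face images, chosen by a preset algorithm) can be sketched as follows. Simple random sampling stands in for the unspecified preset algorithm, and all names are hypothetical.

```python
import random

def build_candidates(correct_image, decoy_pool, n_decoys=5, seed=None):
    """Assemble the authentication candidate information: the image matching
    the authentication facial expression information plus a preset number of
    other face images, in shuffled order."""
    rng = random.Random(seed)
    decoys = rng.sample(decoy_pool, n_decoys)
    candidates = decoys + [correct_image]
    rng.shuffle(candidates)
    return candidates

def authenticate(selection, correct_image):
    """Access is normal only if the user selects the image whose expression
    equals the authentication facial expression information."""
    return selection == correct_image
```

Under this reading, the security of the scheme rests on only the enrolled user knowing which of the six displayed expressions was pre-selected.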
Regarding claim 8:
wherein if the authentication attempt information is analyzed and the authentication facial expression information is not equal to the authentication face imaging information, it is determined that the access is abnormal, and the authentication approval module stops the user authentication and outputs authentication error information (Samadani, ¶74).
Regarding claim 9:
wherein the user information storage unit additionally generates a user's fingerprint data information included in the user information, and the authentication unit additionally acquires authentication fingerprint information, which is the fingerprint data information corresponding to the authentication request information, and outputs the authentication fingerprint information together when outputting the preset number of arbitrary face imaging information as authentication candidate information, and performs user authentication using the authentication candidate information including the authentication fingerprint information and the face imaging information (Samadani, ¶51-52).
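The multi-factor logic recited here (face-candidate selection combined with fingerprint data corresponding to the authentication request) reduces to a conjunction of the two checks. The sketch below is an illustrative simplification with hypothetical identifiers; it is not the claimed implementation.

```python
def multi_factor_authenticate(face_selection, correct_face,
                              fingerprint, enrolled_fingerprint):
    """User authentication using candidate information that includes both
    the face imaging information and the fingerprint data information."""
    face_ok = face_selection == correct_face
    fingerprint_ok = fingerprint == enrolled_fingerprint
    return face_ok and fingerprint_ok
```

Both factors must succeed; failure of either one corresponds to the abnormal-access outcome described for the authentication approval module.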
Regarding claim 10:
a user information storage step of acquiring and storing user information using a user information storage unit and generating facial expression data information (Samadani, Abstract, ¶40-43).
and an authentication step of, when acquiring authentication request information, acquiring authentication facial expression information which is the facial expression data information corresponding to the authentication request information using an authentication unit, outputting a preset number of arbitrary face imaging information including the authentication facial expression information as authentication candidate information, and performing user authentication using the authentication candidate information (Samadani, ¶4, Fig.7, ¶46, ¶119), wherein facial profile may include and/or indicate one or more characteristics of a person's face. For example, the facial profile may indicate one or more features (e.g., feature points, keypoints, corners, feature vectors, facial structure, distances between feature points, facial shape, eye color, skin color, feature size, etc.) that characterize a person's (e.g., subject's, user's, authorized user's, etc.) face. The face recognizer 752 may extract one or more features of a face in the set of images and compare the extracted features to the features of the facial profile.
Regarding claim 11:
wherein the user information storage step comprises: a face imaging information acquisition step of acquiring face imaging information, which is image information of a user's face included in the user information; and a facial expression data information processing step of analyzing the face imaging information using a preset algorithm and generating and storing the facial expression data information, which is an analysis result (Samadani, ¶126).
Regarding claim 12:
wherein the face imaging information comprises three-dimensional information including depth information and is acquired with at least two different facial expressions (Samadani, ¶54).
Regarding claim 13:
wherein the facial expression data information is generated by digitizing different values for each different facial expression using depth information of the face imaging information (Samadani, ¶10).
Regarding claim 14:
wherein the authentication step comprises: an authentication data processing step of acquiring the authentication request information and acquiring the authentication facial expression information (Samadani, ¶43).
an authentication candidate information output step of outputting the authentication candidate information including the authentication facial expression information (Samadani, ¶41).
and an authentication approval step of acquiring an authentication attempt information, which is selection information for the authentication candidate information, and if a user's authentication face imaging information including a facial expression equal to the authentication facial expression information is selected by analyzing the authentication attempt information, determining that an access is normal to perform the user authentication (Samadani, ¶80-81).
Regarding claim 15:
wherein the authentication facial expression information is included in the authentication request information and is information previously selected by the user (Samadani, ¶68-69).
Regarding claim 16:
wherein the authentication candidate information comprises the authentication facial expression information and five pieces of face imaging information other than the authentication facial expression information, and is selected through a preset algorithm using the facial expression data information corresponding to the face imaging information (Samadani, ¶69).
Regarding claim 17:
wherein if the authentication attempt information is analyzed and the authentication facial expression information is not equal to the authentication face imaging information, it is determined that the access is abnormal, and the authentication approval step stops the user authentication and outputs authentication error information (Samadani, ¶74).
Regarding claim 18:
wherein the user information storage step additionally generates a user's fingerprint data information included in the user information, and the authentication step additionally acquires authentication fingerprint information, which is the fingerprint data information corresponding to the authentication request information, and outputs the authentication fingerprint information together when outputting the preset number of arbitrary face imaging information as authentication candidate information, and performs user authentication using the authentication candidate information including the authentication fingerprint information and the face imaging information (Samadani, ¶51-52).
Regarding claim 19:
a user information storage step of acquiring and storing user information and generating facial expression data information (Samadani, Abstract, ¶40-43).
and an authentication step of, when acquiring authentication request information, acquiring authentication facial expression information which is the facial expression data information corresponding to the authentication request information, outputting a preset number of arbitrary face imaging information including the authentication facial expression information as authentication candidate information, and performing user authentication using the authentication candidate information (Samadani, ¶4, Fig.7, ¶46, ¶119), wherein facial profile may include and/or indicate one or more characteristics of a person's face. For example, the facial profile may indicate one or more features (e.g., feature points, keypoints, corners, feature vectors, facial structure, distances between feature points, facial shape, eye color, skin color, feature size, etc.) that characterize a person's (e.g., subject's, user's, authorized user's, etc.) face. The face recognizer 752 may extract one or more features of a face in the set of images and compare the extracted features to the features of the facial profile.
Conclusion
8. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Monjur Rahim whose telephone number is (571) 270-3890.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shewaye Gelagay can be reached on 571-272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or CANADA) or 571-272-1000.
/Monjur Rahim/
Patent Examiner
United States Patent and Trademark Office
Art Unit: 2436; Phone: 571.270.3890
E-mail: monjur.rahim@uspto.gov
Fax: 571.270.4890