DETAILED ACTION
Notice of Pre-AIA or AIA Status.
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. Claims 1-19, filed with a preliminary amendment on 09/19/2023, are pending and under examination. Claims 1, 9, and 10 are in independent form.
Priority
3. This application is a CIP of PCT/CN2022/080800, filed on 03/15/2022, in which the benefit of foreign priority was also claimed.
Claim Rejections - 35 USC § 101
4. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
5. Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed inventions are directed to non-statutory subject matter (an abstract idea without significantly more).
5-1. Regarding independent claim 1, the claim recites a method for face detection implemented by a terminal device, comprising:
[1] obtaining, by the terminal device, an image to be detected from a camera device, wherein the image to be detected contains a first facial image;
[2] performing, by the terminal device, an initial detection on the image to be detected to obtain an initial detection result;
[3] comparing, by the terminal device, if the initial detection result indicates that the initial detection is passed, the first facial image in the image to be detected with a target facial image to obtain a comparison result; and
[4] determining, by the terminal device, a final face detection result of the image to be detected according to the comparison result.
Step 1:
With regard to Step 1, claim 1 is directed to a method for face detection implemented by a terminal device. Claim 1 therefore falls within one of the statutory categories of invention, i.e., a process.
Step 2A-1:
With regard to Step 2A-1, the elements recited in claim 1, as drafted and under their broadest reasonable interpretation, encompass a process directed to organizing human activity, practically performable in the human mind, or falling within mathematical concepts. For example, “performing an initial detection on the image to be detected to obtain an initial detection result” in step [2], “comparing, if the initial detection result indicates that the initial detection is passed, the first facial image in the image to be detected with a target facial image to obtain a comparison result” in step [3], and “determining a final face detection result of the image to be detected according to the comparison result” in step [4], in the context of this claim, each encompass mental observations, evaluations, judgments, and/or opinions that can be performed in the human mind, or by a human using pen and paper. The limitations therefore fall within the “mental processes” grouping of abstract ideas, and claim 1 recites an abstract idea. If a claim limitation is directed to organizing human activity, can be practically performed in the human mind, or falls within mathematical concepts, the claim recites an abstract idea. See MPEP 2106.04(a)(2).
Step 2A-2:
The 2019 PEG defines the phrase "integration into a practical application" to require an additional element, or a combination of additional elements, in the claim to apply, rely on, or use the judicial exception. In the instant case, the additional element of “obtaining” in step [1], under its broadest reasonable interpretation, is mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity. Similarly, “from a camera device” and “by the terminal device” are recited in step [1]. The “camera device” and the “terminal device” are recited at a high level of generality and amount to no more than mere instructions to apply the exception using a generic camera and a generic computing device. Therefore, the claim as a whole does not integrate the judicial exception into a practical application.
Step 2B:
As explained above, the method's reliance on a camera device and a terminal device is, at best, the equivalent of merely adding the words “apply it” to the judicial exception, and the “obtaining” in step [1] was considered insignificant extra-solution activity. These conclusions are reevaluated in Step 2B. The limitations are mere data gathering and/or output recited at a high level of generality and amount to receiving (i.e., acquiring), accessing, or transmitting data over a network, which is well-understood, routine, and conventional activity. See MPEP 2106.05(d), subsection II. The limitations remain insignificant extra-solution activity even upon reconsideration. Even when considered in combination, the additional elements present mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. The claim therefore is ineligible.
5-2. Regarding dependent claims 2-8 and 18-19, each depends from claim 1. Viewed individually, their additional elements, under the broadest reasonable interpretation, either cover performance of the limitations in the mind, recite a mathematical algorithm, or amount to extra-solution data-gathering activity, and they do not provide meaningful limitations that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself. When the claims are viewed as a whole, they do not improve a technology by allowing it to perform a function it previously could not perform, and they do not provide any limitations beyond generally linking the use of the abstract idea to a broad technological environment (i.e., computer-based analysis of generic data). Hence, the claimed invention does not constitute significantly more than the abstract idea, and the claims are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
5-3. Regarding independent claims 9 and 10, claim 9 recites a terminal device and claim 10 recites a non-transitory storage medium, each analogous to method claim 1; grounds of rejection analogous to those applied to claim 1 are therefore applicable to claims 9 and 10. Furthermore, the recited terminal device and storage medium are generic computer components that, under Step 2A-2, do not integrate the abstract idea into a practical application because they impose no meaningful limits on practicing the abstract idea. Each claim recites an abstract idea.
Because the claims fail under Step 2A, they are further evaluated under Step 2B. The claims do not include any additional elements sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.
5-4. Regarding dependent claims 11-17, each depends from claim 10. Viewed individually, their additional elements, under the broadest reasonable interpretation, either cover performance of the limitations in the mind, recite a mathematical algorithm, or amount to extra-solution data-gathering activity, and they do not provide meaningful limitations that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself. When the claims are viewed as a whole, they do not improve a technology by allowing it to perform a function it previously could not perform, and they do not provide any limitations beyond generally linking the use of the abstract idea to a broad technological environment (i.e., computer-based analysis of generic data). Hence, the claimed invention does not constitute significantly more than the abstract idea, and the claims are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 112
6. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
7. Claims 2-3, 11-12, and 18-19 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
7-1. Regarding claim 2, the claim recites: “wherein said obtaining, by the terminal device, the image to be detected from the camera device comprises: obtaining a RGB image and an infrared image from the camera device” in lines 1-4. However, it cannot be understood how a single image (“the image”) could comprise “a RGB image and an infrared image”; it is thus unclear what is meant by “an image”. The claim does not define the metes and bounds of the claimed invention with a reasonable degree of precision and particularity, and is therefore rejected under 35 U.S.C. 112(b).
7-2. Regarding claim 11, the claim presents the same issue set forth in the rejection of claim 2, and is thus rejected as being indefinite under 35 U.S.C. 112(b).
7-3. Claims 3, 18, and 19 depend from claim 2, and claim 12 depends from claim 11; they are therefore rejected as being indefinite under 35 U.S.C. 112(b).
Claim Rejections - 35 USC § 102
8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
9. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
9-1. Claims 1-4, and 9-13 are rejected under 35 U.S.C. 102(a)(1)/102(a)(2) as being anticipated by Alameh et al (US 2020/0026831, hereinafter “Alameh”).
Regarding claim 1, Alameh discloses a method for face detection implemented by a terminal device (the electronic device for face authentication; see fig.1), comprising: obtaining, by the terminal device, an image to be detected from a camera device, wherein the image to be detected contains a first facial image (see para.43: “an imager 102 captures at least one image 103 of an object situated within a predefined radius 104 of the electronic device 100”);
performing, by the terminal device, an initial detection on the image to be detected to obtain an initial detection result; comparing, by the terminal device, if the initial detection result indicates that the initial detection is passed, the first facial image in the image to be detected with a target facial image to obtain a comparison result (see para.53: “authentication 116 occurs where each of the following is true: the at least one image 103 sufficiently corresponds to at least one of the one or more predefined reference images 108 and the at least one depth scan 106 sufficiently corresponds to at least one of the one or more predefined facial maps 109.”); and
determining, by the terminal device, a final face detection result of the image to be detected according to the comparison result (ibid: “Where both are true, in one or more embodiments, the object is authenticated 117 as the user 101 authorized to use the electronic device 100.”).
Regarding claims 2 and 11, Alameh discloses wherein said obtaining, by the terminal device, the image to be detected from the camera device comprises: obtaining a RGB image (see para.43: “in one embodiment the image 103 is a two-dimensional RGB image.”) and an infrared image from the camera device (see para.45: “in one or more embodiments a depth imager 105 captures at least one depth scan 106 of the object when situated within the predefined radius 104 of the electronic device 100.”), wherein both the RGB image and the infrared image contain the first facial image; performing, by the terminal device, a face liveness detection on the first facial image contained in the infrared image to obtain a face liveness detection result; and determining the RGB image as the image to be detected, if the face liveness detection result indicates that the first facial image contained in the infrared image is a real face (see para.53: “authentication 116 occurs where each of the following is true: the at least one image 103 sufficiently corresponds to at least one of the one or more predefined reference images 108 and the at least one depth scan 106 sufficiently corresponds to at least one of the one or more predefined facial maps 109. Where both are true, in one or more embodiments, the object is authenticated 117 as the user 101 authorized to use the electronic device 100.”).
Regarding claims 3 and 12, Alameh discloses wherein said performing, by the terminal device, the liveness detection on the first facial image contained in the infrared image to obtain the liveness detection result comprises: detecting, by the terminal device, a plurality of facial contour key points in the infrared image; cropping, by the terminal device, the first facial image contained in the infrared image according to the plurality of facial contour key points (see para.46: “the depth scan 106 creates a depth map of a three-dimensional object, such as the user's face 107. This depth map can then be compared to one or more predefined facial maps 109 to confirm whether the contours, nooks, crannies, curvatures, and features of the user's face 107 are that of the authorized user identified by the one or more predefined facial maps 109.”); and inputting, by the terminal device, the first facial image contained in the infrared image into a trained liveness detection architecture, and outputting the liveness detection result through the trained liveness detection architecture (see para.111: “in one embodiment when the authentication system 111 detects a person, one or both of the imager 102 and/or the depth imager 105 can capture a photograph and/or depth scan of that person. The authentication system 111 can then compare the image and/or depth scan to one or more reference files stored in the memory 205. This comparison, in one or more embodiments, is used to confirm beyond a threshold authenticity probability that the person's face—both in the image and the depth scan—sufficiently matches one or more of the reference files.”).
Regarding claims 4 and 13, Alameh discloses wherein the initial detection comprises at least one of detection items consisting of a face pose detection, a face occlusion detection (the image quality detection, i.e., the degree of blur in the images; see 303-305 of fig.3 and para.124: “if the imager (102) and/or depth imager (105) attempt to capture images 303,304,306 when the electronic device 100 is moving, their quality may not be sufficient for authentication to occur. For example, as shown in FIG. 3, each of images 303,304,306 have some degree of blur, which is a distortion component that can cause errors in the authentication process. Similar error can occur in depth scans.”), a face brightness detection and a face ambiguity detection; said performing, by the terminal device, the initial detection on the image to be detected to obtain the initial detection result comprises: performing, by the terminal device, the detection items in the initial detection on the image to be detected to obtain detection results of the detection items; and indicating that a face detection is passed by the initial detection result, if the detection results of the detection items in the initial detection indicate that all detections of the detection items are passed (see 311 of fig.3 and para.128: “authentication will occur at step 311 where each of the following is true: the image 305 sufficiently corresponds to at least one of the one or more predefined reference images (108) and the depth scan 313 sufficiently corresponds to at least one of the one or more predefined facial maps (109). Where both are true, in one or more embodiments, the user 101 is authenticated an authorized user permitted to use the electronic device 100.”).
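For illustration only, the "all detection items must pass" aggregation recited in claims 4 and 13 can be sketched as follows. This is a hypothetical Python sketch: the function name, the item names, and the stub checks are the examiner's assumptions, as neither the claims nor Alameh specifies any particular implementation.

```python
# Hypothetical sketch: run each enabled detection item (pose, occlusion,
# brightness, ambiguity) on the image; the initial detection passes only
# if every item individually passes.

def run_initial_detection(image, detection_items):
    """detection_items maps an item name to a callable that returns True
    (item passed) or False (item failed) for the given image. Returns
    (overall_pass, per_item_results)."""
    results = {name: check(image) for name, check in detection_items.items()}
    return all(results.values()), results

# Stub checks standing in for the four recited detection items.
items = {
    "face_pose": lambda img: True,
    "face_occlusion": lambda img: True,
    "face_brightness": lambda img: True,
    "face_ambiguity": lambda img: False,  # one failing item fails the whole detection
}
passed, results = run_initial_detection(None, items)
# passed is False here, because the ambiguity item failed
```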
Regarding claims 9 and 10, each is an inherent variation of claim 1 and is thus interpreted and rejected for the reasons set forth in the rejection of claim 1.
Claim Rejections - 35 USC § 103
10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
12. Claims 5-8 and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Alameh in view of Wang et al (CN112069887, hereinafter “Wang”). A machine-translated English version (hereinafter “Wang-Eng”) of document CN112069887 is provided by the examiner with this office action.
Regarding claims 5 and 14, Alameh discloses the claimed invention except for the face pose detection in the image quality evaluation. However, this technique is well known and widely used in the field of face recognition. As evidence, Wang teaches a face image quality evaluation method including evaluating the face horizontal rotation angle by a pretrained neural network. See Wang-Eng, page 6, lines 16-29: “In one embodiment, the face image quality evaluation from N dimensions, can be evaluated from the following five dimensions to the image quality: the horizontal rotating angle of the human face image is lower than the preset rotating angle (such as the preset rotating angle is 45 degrees or other angle), whether the human face image light is greater than the first preset light intensity value, whether the human face image light is less than the second preset light intensity value, whether the shielding degree in the human face image is in the preset shielding range; the definition of the human face image satisfies the preset definition requirement. if the first neural network model is to evaluate the 5 dimensions of the human face image, inputting the face image to the first neural network model, outputting a 5 * 1 vector to represent the evaluation result, if the value of a certain dimension in the output vector is 1, representing that the image quality corresponding to the dimension does not meet the requirement; if the value of a certain dimension in the output vector is 0, representing that the image quality corresponding to the dimension satisfies the requirement.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Wang into the teachings of Alameh and further evaluate the face pose as taught by Wang. Suggestion or motivation for doing so would have been to speed up face identification processing, as taught by Wang; see Wang-Eng, page 1 to page 2, line 5.
Therefore, the claims are unpatentable over Alameh in view of Wang.
Regarding claims 6 and 15, the combination of Alameh and Wang discloses further comprising: performing, by the terminal device, the face occlusion detection on the image to be detected to obtain a detection result of the face occlusion detection when the initial detection is the face occlusion detection; said performing, by the terminal device, the face occlusion detection on the image to be detected to obtain the detection result of the face occlusion detection comprises: dividing the first facial image contained in the image to be detected into N facial regions, wherein N is a positive integer (Wang-Eng: to evaluate the image quality, the system may detect the key features including the eyes, nose, and mouth from the face image; see S601-S602 of fig.6 and page 12, lines 12-35); inputting the N facial regions into occlusion detection architectures respectively corresponding to the N facial regions, and outputting face occlusion detection results respectively corresponding to the N facial regions; and determining the detection result of the face occlusion detection according to the face occlusion detection results respectively corresponding to the N facial regions (Wang-Eng: see S603-S604 of fig.6 and page 12, line 27 to page 13, line 9: comparing the key features extracted from the target image with the preset standard features to determine whether they are matched, namely, whether the quality check is passed.).
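For illustration only, the per-region occlusion check recited in claims 6 and 15 can be sketched as follows. This is a hypothetical Python sketch: the horizontal-band split of the face into N regions and the stub per-region detectors are the examiner's assumptions, since the claims leave both the region layout and the detection architectures unspecified.

```python
# Hypothetical sketch: divide the facial image into N regions, feed each
# region to its own occlusion detector, and aggregate the per-region
# results; the overall check passes only when no region is occluded.

def detect_face_occlusion(face_rows, region_detectors):
    """face_rows: list of pixel rows covering the facial image.
    region_detectors: N callables, one per region, each returning True if
    its region is occluded. Returns (passed, per_region_results)."""
    n = len(region_detectors)
    band = max(1, len(face_rows) // n)          # split into N horizontal bands
    regions = [face_rows[i * band:(i + 1) * band] for i in range(n)]
    per_region = [det(reg) for det, reg in zip(region_detectors, regions)]
    return not any(per_region), per_region
```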
Regarding claims 7 and 16, the combination of Alameh and Wang discloses further comprising: performing, by the terminal device, the face brightness detection on the image to be detected to obtain a detection result of the face brightness detection when the initial detection is the face brightness detection; said performing, by the terminal device, the face brightness detection on the image to be detected to obtain the detection result of the face brightness detection comprises: calculating a ratio of a number of target pixel points in the image to be detected to a number of all pixel points in the image to be detected, wherein pixel values of the target pixel points are within a preset gray value range; and determining the detection result of the face brightness detection according to the ratio and a preset threshold value (e.g., Wang-Eng, see page 8, lines 26-35: “when image is not in the preset light intensity range, processing the human face image by the high dynamic range imaging algorithm to obtain the human face image of high dynamic range imaging. The high dynamic range imaging algorithm can be used for the over-dark or over-bright image, the brightness interval can be stretched, recovering to the normal brightness level, so as to obtain the human face image of high dynamic range imaging.”).
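For illustration only, the brightness check recited in claims 7 and 16 can be sketched as follows. This is a hypothetical Python sketch: the gray value range and threshold values are illustrative assumptions, as the claims recite only "a preset gray value range" and "a preset threshold value" without specific numbers.

```python
# Hypothetical sketch: count the target pixel points whose gray value falls
# within a preset range, compute their ratio to the total pixel count, and
# pass the brightness check when the ratio meets a preset threshold.

def face_brightness_passes(gray_pixels, lo=40, hi=220, min_ratio=0.8):
    """gray_pixels: flat sequence of 8-bit gray values (0-255).
    lo/hi: illustrative preset gray value range; min_ratio: illustrative
    preset threshold on the in-range pixel ratio."""
    total = len(gray_pixels)
    if total == 0:
        return False
    in_range = sum(1 for p in gray_pixels if lo <= p <= hi)
    return (in_range / total) >= min_ratio

# A mostly mid-tone image passes; a mostly dark image fails.
mostly_mid = [128] * 90 + [10] * 10    # 90% of pixels in range -> passes
mostly_dark = [5] * 95 + [128] * 5     # 5% of pixels in range -> fails
```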
Regarding claims 8 and 17, the combination of Alameh and Wang discloses further comprising: performing, by the terminal device, the face ambiguity detection on the image to be detected to obtain a detection result of the face ambiguity detection when the initial detection is the face ambiguity detection; said performing, by the terminal device, the face ambiguity detection on the image to be detected to obtain the detection result of the face ambiguity detection comprises: calculating an ambiguity of the image to be detected; and determining the detection result of the face ambiguity detection according to the ambiguity and a preset numerical range (Wang-Eng: “the definition of the human face image [i.e., the face ambiguity] satisfies the preset definition requirement”. See page 6, lines 16-29).
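For illustration only, the ambiguity (blur) check recited in claims 8 and 17 can be sketched as follows. This is a hypothetical Python sketch: the claims do not specify how the "ambiguity" is calculated, so a simple mean-squared-gradient sharpness proxy and an illustrative cutoff are assumed here (gradient-based and Laplacian-based scores are common blur measures in practice).

```python
# Hypothetical sketch: score image sharpness (inverse of ambiguity) and
# compare the score against a preset numerical range.

def image_ambiguity(gray_rows):
    """Mean squared horizontal gradient over all rows, used as a crude
    sharpness score: higher = sharper, lower = blurrier (more ambiguous)."""
    diffs = [
        (row[i + 1] - row[i]) ** 2
        for row in gray_rows
        for i in range(len(row) - 1)
    ]
    return sum(diffs) / len(diffs) if diffs else 0.0

def face_ambiguity_passes(gray_rows, min_score=25.0):
    """Pass when the sharpness score falls within the preset numerical
    range (here: at or above an illustrative minimum)."""
    return image_ambiguity(gray_rows) >= min_score

sharp = [[0, 255, 0, 255]]        # strong edges -> high score -> passes
blurry = [[100, 102, 101, 103]]   # near-uniform -> low score -> fails
```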
Regarding claims 18 and 19, the combination of Alameh and Wang discloses wherein the initial detection comprises at least one of detection items consisting of a face pose detection, a face occlusion detection, a face brightness detection and a face ambiguity detection; said performing, by the terminal device, the initial detection on the image to be detected to obtain the initial detection result comprises: performing, by the terminal device, the detection items in the initial detection on the image to be detected to obtain detection results of the detection items; and indicating that a face detection is passed by the initial detection result, if the detection results of the detection items in the initial detection indicate that all detections of the detection items are passed (Wang-Eng, e.g., see page 6, lines 16-29: “In one embodiment, the face image quality evaluation from N dimensions, can be evaluated from the following five dimensions to the image quality: the horizontal rotating angle of the human face image is lower than the preset rotating angle (such as the preset rotating angle is 45 degrees or other angle), whether the human face image light is greater than the first preset light intensity value, whether the human face image light is less than the second preset light intensity value, whether the shielding degree in the human face image is in the preset shielding range; the definition of the human face image satisfies the preset definition requirement.”).
Conclusion
13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUIPING LI whose telephone number is (571)270-3376. The examiner can normally be reached 8:30am--5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HENOK SHIFERAW can be reached on (571)272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov; https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center, and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RUIPING LI/Primary Examiner, Ph.D., Art Unit 2676