DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al (CN 112183504 A) in view of Lu et al (CN 111178256 A).
Referring to claims 1 and 9:
Zhao et al disclose a device, and method for using same, based on non-contact palm vein imaging (par. 6-49) comprising:
an image shooting apparatus configured to shoot palm images (abstract: contactless apparatus [camera] configured to collect a palm vein video having a duration of t seconds);
a picking apparatus, configured to pick out a picked palm image satisfying a preset condition from the shot palm images (abstract: [registration device] configured to locate the positions of a palm and a palm ROI, crop the corresponding images, and label the cropped images as a palm region A and a palm ROI region B, respectively);
a feature extracting apparatus, configured to extract palm vein feature data from the picked palm images (abstract: [registration device] configured to perform palm anomaly detection on the palm region A and imaging quality determination on the palm ROI region B, outputting the determination result to proceed or return to palm image positioning and cropping);
a first feature template generating apparatus, configured to perform feature fusion on the palm vein feature data extracted from the picked palm images to form one first feature template; and a user feature template generating apparatus, configured to form a user feature template based on the one first feature template (abstract: [registration device] configured to preprocess the palm ROI, extract a 512-dimensional feature vector of the palm ROI region B, and store the 512-dimensional feature vector as one first-level template; locate the positions of the palm and the palm ROI once every two frames so as to extract several first-level templates; set a cosine similarity threshold T3 between the templates and calculate the cosine similarity between two adjacent first-level templates, and if the calculated cosine similarity is greater than or equal to the threshold T3, proceed to the next step of registration, otherwise registration fails and the present instance of registration is ended; set a first-level template quantity threshold M and, after the end of the t-second video, compare the number of collected first-level templates to the threshold M, and if the number of collected first-level templates is greater than or equal to M, proceed to the next step, otherwise registration fails and the present instance of registration is ended; and, using an improved K-means clustering algorithm, filter all of the first-level templates according to maximization of a template intra-cluster difference to obtain k second-level templates, fuse the k second-level templates into a third-level registration template, and register the third-level registration template to a template database, thereby successfully completing the registration).
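For clarity of the mapping above, the multi-level template pipeline quoted from Zhao's abstract can be illustrated in the following minimal Python sketch. This is an illustration, not Zhao's implementation: the threshold values, the use of standard K-means with nearest-to-center cluster representatives (Zhao describes an improved K-means filtering according to maximization of a template intra-cluster difference), and mean-vector fusion are all assumptions made for illustration.

# Illustrative sketch of the multi-level template pipeline: adjacent-template
# cosine-similarity gating, a minimum-count check, K-means filtering to k
# second-level templates, and fusion into one registration template.
import numpy as np
from sklearn.cluster import KMeans

T3 = 0.90   # cosine similarity threshold between adjacent first-level templates (assumed value)
M = 8       # minimum number of first-level templates required (assumed value)
K = 3       # number of second-level templates retained after clustering (assumed value)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def register(first_level: list) -> np.ndarray:
    """first_level: list of 512-dimensional feature vectors, one per sampled frame."""
    # Gate on the similarity of each adjacent pair of first-level templates.
    for a, b in zip(first_level, first_level[1:]):
        if cosine(a, b) < T3:
            return None  # registration fails
    # Require at least M first-level templates from the t-second video.
    if len(first_level) < M:
        return None      # registration fails
    # Filter to K second-level templates via K-means: keep, per cluster,
    # the vector nearest its cluster center (a simplification of Zhao's
    # improved intra-cluster-difference criterion).
    X = np.stack(first_level)
    km = KMeans(n_clusters=K, n_init=10).fit(X)
    second_level = [
        X[km.labels_ == c][np.argmin(
            np.linalg.norm(X[km.labels_ == c] - km.cluster_centers_[c], axis=1))]
        for c in range(K)
    ]
    # Fuse the K second-level templates into one third-level registration
    # template (mean fusion assumed; the quoted abstract does not specify
    # the fusion rule).
    fused = np.mean(np.stack(second_level), axis=0)
    return fused / np.linalg.norm(fused)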
Zhao et al do not disclose a non-contact three-dimensional palm vein modeling apparatus, and method for using same, wherein the image shooting apparatus is configured for shooting palm images of M different positions, wherein a quantity of shot palm images of each different position is one or more, the different positions are different positions of a palm relative to the image shooting apparatus, and M is greater than 1, and wherein the picking apparatus is configured such that a quantity of picked palm images of each position is one or more, and the quantity of picked palm images of each position is less than or equal to a quantity of shot palm images of the corresponding position.
However, Lu et al disclose (par. 45-94) fitting a three-dimensional palm vein image according to a two-dimensional palm vein image captured by each camera, and identifying and authenticating the three-dimensional palm vein image. Lu et al thus teach how to establish a three-dimensional palm vein model from multiple captured palm images.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhao et al in view of Lu et al to provide a non-contact three-dimensional palm vein modeling apparatus, and method for using same, wherein the image shooting apparatus is configured for shooting palm images of M different positions, wherein a quantity of shot palm images of each different position is one or more, the different positions are different positions of a palm relative to the image shooting apparatus, and M is greater than 1, and wherein the picking apparatus is configured such that a quantity of picked palm images of each position is one or more, and the quantity of picked palm images of each position is less than or equal to a quantity of shot palm images of the corresponding position, in order to provide a much greater amount of palm vein information, reduce or eliminate palm placement requirements when shooting, and improve the user experience.
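Lu's general principle of fitting a three-dimensional palm vein structure from the two-dimensional images captured by multiple cameras can be illustrated with standard two-view triangulation. The sketch below illustrates that principle under assumed inputs (calibrated camera projection matrices and matched vein points); it is not Lu's actual algorithm.

# Minimal sketch of multi-view 3D reconstruction: recover 3D palm vein
# points from matched 2D points in two calibrated views via direct linear
# transform (DLT) triangulation. P1, P2 and the correspondences are
# assumed to be given.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """P1, P2: 3x4 camera projection matrices.
    x1, x2: (N, 2) arrays of matched vein points in each image.
    Returns an (N, 3) array of estimated 3D points."""
    pts = []
    for (u1, v1), (u2, v2) in zip(x1, x2):
        # Each view contributes two linear constraints A @ X = 0 on the
        # homogeneous 3D point X.
        A = np.stack([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        # Least-squares solution: the right singular vector of A with the
        # smallest singular value.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        pts.append(X[:3] / X[3])  # dehomogenize
    return np.asarray(pts)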
Referring to claim 3:
The combination of Zhao et al and Lu et al discloses that picking out the picked palm image satisfying the preset condition comprises extracting regions of interest of the shot palm images; obtaining image vector data of the regions of interest; and comparing the image vector data of the palm images, to pick out the picked palm image satisfying the preset condition (see the description in the disclosure of the video registration method and video registration device: preprocessing a palm ROI, extracting a 512-dimensional feature vector of the palm ROI region B and storing it as one first-level template; repeating palm image positioning, cropping, and abnormality/quality detection every two frames to extract a plurality of first-level templates; setting a cosine similarity threshold T3 between the templates; calculating the cosine similarity between two adjacent first-level templates; and entering the next step to continue registration if the calculated cosine similarity is greater than or equal to the threshold T3).
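The claim 3 mapping can likewise be illustrated in sketch form: extract an ROI from each shot palm image, obtain image vector data, and compare the vectors to pick images satisfying the preset condition. The ROI locator and the flattened-pixel vector below are stand-ins (assumptions); only the compare-and-pick logic mirrors the cosine-similarity gating quoted from Zhao, where a real system would use a learned 512-dimensional embedding.

# Illustrative sketch of ROI extraction, vectorization, and comparison.
import numpy as np

def roi_vector(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the palm ROI given as (x, y, w, h) and flatten to a unit vector."""
    x, y, w, h = box
    roi = image[y:y + h, x:x + w].astype(np.float32).ravel()
    return roi / (np.linalg.norm(roi) + 1e-12)

def pick_images(vectors: list, threshold: float = 0.9) -> list:
    """Return indices of images whose vector is consistent with the most
    recently picked image (the 'preset condition' modeled here as a
    cosine-similarity threshold, an assumption for illustration)."""
    picked = [0]  # seed with the first frame
    for i in range(1, len(vectors)):
        if float(vectors[picked[-1]] @ vectors[i]) >= threshold:
            picked.append(i)
    return picked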
Claims 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al and Lu et al, further in view of well-known prior art (MPEP 2144.03).
Referring to claim 10:
While the combination of Zhao et al and Lu et al does not disclose an electronic device comprising a memory, wherein the memory stores execution instructions, and a processor, wherein the processor executes the execution instructions stored by the memory, to enable the processor to perform the method according to claim 1, such electronic devices (e.g., computers) are notoriously old and well known in the prior art. It would have been obvious for one of ordinary skill in the art to implement the palm image processing technique taught by the combination of Zhao et al and Lu et al as execution instructions stored in a memory and executed by a processor in order to allow a sequence of instructions to be performed by an electronic device such as a computer, thereby offering a wider, more practical, and more flexible application (implementation) of the technique and providing an alternative to the dedicated hardware configuration that would otherwise be required.
Referring to claim 18:
This claim is rejected for the same reasons as set forth above with respect to claim 3.
Allowable Subject Matter
Claims 2, 4-8, 11-17, and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Referring to these claims, the prior art searched and of record neither anticipates nor suggests all the limitations added in the claimed combinations.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11 March 2024 was filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the IDS has been considered by the examiner.
The relevance of the cited document(s), in addition to any applied above, can be found in the International Search Report and/or Written Opinion from the ISA dated 29 March 2022 for PCT/CN2021/116230 (of record).
Cited Art
The prior art and other references made of record and not relied upon are considered pertinent to applicant's disclosure.
Tu (US 5548667 A) discloses an image processing system comprising a unit for photographing an object in two dimensions, a feature extraction unit for extracting features from the two-dimensional image data from the photographing means, and a three-dimensional shape reproduction unit. The feature extraction unit refers to feature points given to the object to extract the features. The three-dimensional shape reproduction unit expresses the object by a dynamic equation and applies force from the feature extraction coordinates to the dynamic model to cause the dynamic model to change shape and supplement depth data, thereby reproducing the three-dimensional shape of the object. To increase the speed of the processing, it is desirable to divide the image data of the object into portions with little change in shape and perform the processing for reproducing the three-dimensional shape for each mode.
Zhang et al (US 8265347 B2) disclose a biometric identification system (30) for identifying a person, the system (30) comprising: an image acquisition module (31) to capture a three-dimensional (3D) image of a palm of the person; a region of interest (ROI) extraction module (34) to extract a 3D subimage from the captured image; and a 3D features extraction module (36) to extract 3D palmprint features from the 3D subimage; wherein the extracted 3D palmprint features are compared to reference 3D palmprint features to verify the identity of the person.
Yalla et al (US 9141844 B2) disclose a 3D feature detection module and a 3D recognition module 202. The 3D feature detection module processes a 3D surface map of a biometric object, wherein the 3D surface map includes a plurality of 3D coordinates. The 3D feature detection module determines whether one or more types of 3D features are present in the 3D surface map and generates 3D feature data including 3D coordinates and a feature type for the detected features. The 3D recognition module compares the 3D feature data with biometric data sets for identified persons and determines a match between the 3D feature data and one of the biometric data sets when a confidence value exceeds a threshold.
Endoh et al (US 9418275 B2) disclose biometric information processing with a model that allows more flexible postures, considering, for example, a projective transform. The projective transform is a model in which the shape of a palm is viewed as a plane in three-dimensional space and the optical system of the imaging unit 103 is viewed as a perspective projection, which can be used when the palm is tilted. However, determining the model parameters requires four associated points.
Jo et al (US 9697415 B2) disclose palm vein authentication using a vein sensor 13.
Fukuda (US 9953207 B2) discloses a biometric authentication device performing an authentication based on a similarity between a biometric image that is an object of comparing and an enrolled biometric image, includes: a storage configured to store a plurality of model images generated by changing a bending angle of a joint of a biometric model and correction information of each of the plurality of model images; a biometric sensor configured to capture a biometric image that is an object of comparing; and a processor configured to execute a process, the process including: determining similarities between the biometric image captured by the biometric sensor and the plurality of model images; selecting a model image based on the similarities; reading correction information corresponding to the model image that is selected, from the storage; and correcting one of the biometric image captured by the biometric sensor or the enrolled biometric image based on the correction information. See Figs. 8A-8E and 12A-12F.
LeCun et al (US 10135815 B2) disclose biometric technology for authentication and identification, and more particularly non-contact solutions for authenticating and identifying users via computers, such as mobile devices, to selectively permit or deny access to various resources. Authentication and/or identification is performed using an image or a set of images of an individual's palm through a process involving the following key steps: (1) detecting the palm area using local classifiers; (2) extracting features from the region(s) of interest; and (3) computing a matching score against user models stored in a database, which can be augmented dynamically through a learning process.
Che et al (US 10748017 B2) disclose a device and method for palm vein identification. The method may comprise: acquiring a target palm vein image of a user; extracting a region of interest (ROI) from the target palm vein image of the user; acquiring feature data corresponding to the ROI, wherein the feature data are obtained by binarization processing; and comparing the feature data corresponding to the target palm vein image against feature data corresponding to a registered original palm vein image to perform identification on the target palm vein image of the user, wherein the feature data corresponding to the registered original palm vein image are obtained by calculation in advance.
Fourre et al (US 11928883 B2) disclose a device for capturing a biometric print of a user, the device having a user compartment including a frontal opening designed for the passage of part of a hand into said user compartment and a lateral opening allowing the passage of the thumb of the user's hand, the lateral opening extending as far as the frontal opening to form a continuous open space, the lateral opening being delimited in a direction of insertion of the user's hand into the user compartment by a user hand positioning stop. The depth of the user compartment 10, between the frontal opening 11 and the through-orifice 141, in the direction of insertion Din, may allow the index, middle, ring, and little fingers of a hand to pass through it, and the device is therefore able to acquire an image of the palm of a hand, a plurality of acquisitions in different positions in the direction Din then being necessary in order to reconstruct an image of the entire palmprint of the hand or of the entire hand including the fingers.
Park (US 20210012513 A1) discloses creating 3D models of biological entities from different types of sensor data. For instance, these methods can track an underlying network of nodes corresponding to blood vessel networks in 3 dimensions. Such methods adapt models to compensate for changes on the surface and in the structure that continuously occur in living entities, such as when blood flows, hands stretch, heads turn, and the like. These 3D models can then be used to perform functions such as motion tracking, biometric authentication, and visualizations in air (such as with Augmented and Virtual Reality) using 3D models as positional references.
Bergqvist et al (US 20230298026 A1) disclose a method and a system comprising: at a hand imaging device (1, 1′, . . . ), capturing (S4) image data of a hand (91) of a current person (9); at a federation server (11), comparing (S5) a current feature vector determined from the captured image data with pre-stored feature vectors of enrolled persons; and, at a user device (92) of the current person (9), processing (S8) second factor information for enabling execution of an electronic payment. In some embodiments, the electronic payment is initiated by a payment service server (21) and forwarded to a bank (5, 5′, . . . ) via a banking gateway (4) for adhering to payment protocols of the bank (5, 5′, . . . ).
Wang et al (US 20260010602 A1) disclose biometric recognition technology for performing identity authentication and identification by analyzing and recognizing a biometric feature such as a palm print and a palm vein of a palm. An image capturing device captures a three-dimensional image and is typically used in biometric recognition applications such as face recognition and palm scanning recognition. In one embodiment of a palm scanning recognition operation, feature fusion may be performed on an extracted palm print feature and palm vein feature of the target object to obtain the biometric feature.
Feng et al (CN 105760841 A) disclose a method and system for identity recognition. First, a high-resolution palm vein image is acquired through an image acquisition mode combining a CCD and an FPGA. The original palm vein image is then preprocessed, and local invariant features of the image are extracted from training data (acquired during registration) and test data (verified online). The similarity between a test data feature point vector and a training data feature point vector is measured using Euclidean distance for feature matching, and a decision is made according to the feature matching rate: for a palm vein image with a high feature matching rate, a decision result is output directly; for a palm vein image with a low feature matching rate, image 3D deflection angle estimation and 3D rotation are carried out, feature selection and matching are performed again on the rotated image, and a decision result is then output.
Yao et al (CN 112801031 A) disclose a vein image recognition method, apparatus, electronic device, and readable storage medium, relating to the technical field of image processing. The method extracts a region of interest (ROI) from the original vein image to obtain an ROI image containing the vein area, removing useless areas so that subsequent processing avoids extracting useless features and can effectively extract vein features. Edge detection and phase-consistency feature extraction are then performed on the ROI image to obtain a target image containing phase-consistency features in different directions; because the extracted phase-consistency features are less affected by image brightness and contrast, vein features can be better extracted. The target image is then matched against a preset template image, enabling more accurate identification of the original vein image.
Wang et al (“A Palm Vein Recognition Method Based on LSTM-CNN”) propose a palm vein recognition method based on Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN). Initially, palm vein image data and temperature information are collected over a period. A Fully Convolutional Network (FCN) model is trained using manually annotated palm vein images. Feature vectors of the annotated palm vein images are then computed through a CNN, and an LSTM-CNN model for palm vein recognition is trained using LSTM. Finally, the to-be-recognized palm vein images and temperature information are obtained through sensors and matched against the corresponding temperature-based LSTM-CNN model and feature vector templates. If the matching score exceeds a certain threshold, the recognition is successful; otherwise, it fails. Experimental results show that, under a temperature range of -20 to 40 degrees Celsius and with approximately 12,000 palm vein images and temperature readings collected from 10 individuals for training, complete and accurate recognition of the palm vein information of the 10 individuals can be achieved. This method overcomes the impact of temperature variations on the accuracy of palm vein recognition, meeting the application needs of real-world scenarios.
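The LSTM-CNN idea summarized above can be sketched as follows: a CNN embeds each palm vein frame, an LSTM consumes the sequence of frame embeddings together with the temperature reading, and matching is performed by similarity against a stored template. The architecture sizes and the way temperature is injected below are assumptions for illustration; Wang's exact model is not reproduced here.

# Illustrative PyTorch sketch of an LSTM-CNN palm vein recognizer.
import torch
import torch.nn as nn

class LstmCnnPalmVein(nn.Module):
    def __init__(self, embed_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # +1 input feature per frame for the temperature reading (assumed design)
        self.lstm = nn.LSTM(embed_dim + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, embed_dim)        # final template vector

    def forward(self, frames: torch.Tensor, temps: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 1, H, W) grayscale frames; temps: (B, T) temperatures
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        feats = torch.cat([feats, temps.unsqueeze(-1)], dim=-1)
        out, _ = self.lstm(feats)
        # L2-normalized embedding; matching compares it against an enrolled
        # template and accepts if the score exceeds a threshold, as in the
        # summary above.
        return nn.functional.normalize(self.head(out[:, -1]), dim=-1)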
Marattukalam et al (“On Palm Vein as a Contactless Identification Technology”) discuss how palm vein biometrics has received much attention as an accurate, robust, and contactless technology, which makes it a promising option for clinical applications. The technology uses the palm vascular patterns of individuals as the identification metric for matching identity. As per the authors' observations, the vein structure beneath the palm surface has a more complicated pattern than the back of the palm, the fingers, or any other easily accessible vein network in the body; thus, the palm vein can provide more features for authentication. The authors highlight the performance evaluation of various approaches adopting this authentication technique, based on standard metrics such as equal error rate and false acceptance rate. They compare different techniques from existing published research, summarize their advantages and disadvantages, and finally suggest the use of deep learning algorithms in the decision-making process, which promises to be most reliable for near-future applications.
Khram et al (“AI Driven Individual Identification Using Palm Veins: A Systematic Review, Challenges, and Future Perspectives”), noting individual identification using palm vein recognition as an emerging biometric technique known for its accuracy and hygiene benefits, highlight how artificial intelligence (AI), including machine learning and deep learning, enhances the feature extraction and matching stages. The review emphasizes AI's growing role in authentication systems, summarizing trends such as the use of infrared imaging, CNN models, and synthetic datasets. Despite these advancements, challenges remain, such as inconsistent image quality, preprocessing methods, feature extraction algorithms, matching techniques, and database diversity. The review analyzes 30 papers and concludes that future research should focus on developing context-specific AI models, reducing computational load, and improving fairness and accuracy.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott Rogers whose telephone number is 571-272-7467. The examiner can normally be reached 8 am to 7 pm flex.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abderrahim Merouan can be reached on 571-270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Scott A Rogers/
Primary Examiner, Art Unit 2683
10 January 2026