DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11 December 2025 has been considered by the examiner.
Status of Claims
Pending: 1-7, 9-17, and 19-20
Rejected under 35 U.S.C. § 103: 1-7, 9-17, and 19-20
Response to Amendment
This Office action is responsive to the amendment filed on 16 December 2025. As directed by the amendment, claims 1, 5-6, 9, 11, 15-16, and 19 have been amended, no claims have been cancelled, and no new claims have been added. Thus, claims 1-7, 9-17, and 19-20 are presently pending in this application.
Response to Arguments
The amendments to the claims changed the claim scope, thereby necessitating further search and consideration. Accordingly, the instant Office action has been made FINAL.
Applicant's arguments filed on 16 December 2025 have been fully considered, but they are not persuasive.
Regarding Applicant's argument that DeLong discloses only touch/finger motions and not image-based biometrics (arguments, page 19), the examiner respectfully disagrees. Applicant's specification discloses fingerprints as a form of biometric data in [0040], [0046], [0052], [0057], [0067], and [0073]. The same methods used to de-identify image-based details can be, and are, also applied to fingerprints.
Regarding Applicant's argument that DeLong and Streit do not disclose de-identification of the full biometric data (arguments, page 19), the examiner respectfully disagrees. The primary rejection addressing the de-identification of biometric data is DeLong in view of Zhu, which teaches the claimed de-identification steps.
The claim recites only that the processor employs a deep learning model that supports a privacy protection technology, and Applicant's specification states that the privacy protection technology includes encryption; however, the claim does not expressly require the privacy protection technology to occur only after a specific step of the de-identification. The claim states that the first operation the processor performs on the second biometric feature is the de-identification processing, which Streit performs.
It is also unclear what is meant by performing de-identification processing on the "complete biometric feature," since that phrase does not appear in the claim and is not well defined in the specification.
The rejection is maintained but has been modified because claims 1 and 11 were amended to incorporate the limitations of previous claims 8 and 18.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9-15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over DeLong (US 2021/0229633) in view of Zhu (US 2023/0222843), and further in view of Streit (US 2019/0280869).
DeLong discloses,
Claims 1 and 11; A vehicle-mounted system, disposed in a vehicle([0010] The present disclosure is directed to systems and methods that determine an identity of a user of a vehicle through a specifically configured physical key), comprising: a data acquisition device(200), acquiring a registered self-key([0037] The card reader 206 can comprise a processor 210 and a memory 212 for storing identity and mode management logic), using the vehicle to obtain a first data([0039] The vehicle controller 216 can then prompt the user to place their smartcard 208 near the card reader 206, for both communication and power, and place their finger on a biometric reader 220 of the smartcard 208 at a variety of positions. The card reader 206 can use these images to train a recognition model), a first biometric feature acquisition device(106), acquiring a second biometric feature of a current user to be recognized([0022]); and a processor(112), coupled to the data acquisition device and the first biometric feature acquisition device(Fig. 1 shows 112 is coupled to 106, and Fig. 2 shows 216, which contains 112, is coupled to 206), which compares the second feature vector with the first feature vector in the self-key([0018] discloses a touch signature that can be encoded as a series of vector points and is used to make the physical key more secure, which makes the touch signature a self-key; [0014] the processor 112 to perform aspects of biometric authentication, as well as user identity and/or mode management as disclosed throughout; and [0048] discloses comparing existing biometric data with received biometric data), and activates a predetermined function of the vehicle according to a comparison result([0014] When referring to operations executed by the vehicle controller 104, it will be understood that this includes the execution of instructions by the processor 112).
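For illustration only (not drawn from DeLong or the claims; all names and the threshold value are hypothetical), the claimed comparison of a stored first feature vector against a newly acquired second feature vector can be sketched as a similarity check against a threshold:

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two feature vectors divided by the
    # product of their magnitudes; 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vectors_match(stored_vector, candidate_vector, threshold=0.95):
    # Activate the predetermined function only when the candidate
    # (second) feature vector is close enough to the enrolled
    # (first) feature vector stored in the self-key.
    return cosine_similarity(stored_vector, candidate_vector) >= threshold

first = [0.12, 0.88, 0.45, 0.31]   # enrolled feature vector (self-key)
second = [0.11, 0.90, 0.44, 0.30]  # newly acquired feature vector
print(vectors_match(first, second))  # near-identical vectors -> True
```

A real system would tune the threshold against false-accept and false-reject rates; the sketch only shows the compare-then-activate structure of the claim.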
Claims 2 and 12; Further comprising: a storage device(114, 122, and 212), storing the self-key([0014] The memory 114 stores instructions that are executed by the processor 112 to perform aspects of biometric authentication), wherein the processor acquires, by using a second biometric feature acquisition device(220), the first biometric feature of the user using the vehicle([0041] The user can also enroll a digital signature 222 used to authenticate the user. The digital signature can be provided by the user into the biometric reader 220 of the smartcard 208), and stores the first feature vector as the self-key in the storage device([0037] a memory 212 for storing identity and mode management logic).
Claims 3 and 13; Further comprising: a communication device(132), acquiring the self-key from a mobile device of the user using the vehicle through wired communication or wireless communication([0020] The mobile device 132 can pair or otherwise communicatively couple with the vehicle controller 104 and/or the physical key 106), wherein the mobile device acquires, by using a second biometric feature acquisition device, the first biometric feature of the user using the vehicle([0020] The user can provide their signature through a mobile device 132 within the vehicle 102).
Claims 4 and 14; Further comprising: a card reader(206), acquiring the self-key from a portable storage device of the user using the vehicle(208), wherein the self-key is generated by a computer device of the user using the vehicle through acquiring, by using a second biometric feature acquisition device, the first biometric feature of the user using the vehicle([0041] The user can also enroll a digital signature 222 used to authenticate the user. The digital signature can be provided by the user into the biometric reader 220 of the smartcard 208), and is written into the portable storage device([0037] a memory 212 for storing identity and mode management logic).
Claims 5 and 15; Wherein the processor comprises recognizing an identity of the current user according to the comparison result([0014] the processor 112 to perform aspects of biometric authentication, as well as user identity and/or mode management as disclosed throughout, and [0048] discloses comparing existing biometric data with received biometric data), and activating the predetermined function of the vehicle allowed to be used in authorization data according to the authorization data set in advance([0014] When referring to operations executed by the vehicle controller 104, it will be understood that this includes the execution of instructions by the processor 112).
However, DeLong fails to disclose:
Claims 1 and 11; Wherein the self-key is generated by performing de-identification processing on a first biometric feature of a user, and transforming the first de-identified data into a first feature vector including a plurality of first de-identified features; and configured to perform the de-identification processing on the second biometric feature to obtain a second de-identified data, transform the second de-identified data into a second feature vector including a plurality of second de-identified features, wherein the processor further employs a deep learning model that supports a privacy protection technology to perform the de-identification processing on the second biometric feature.
Claims 2 and 12; Performs the de-identification processing on the first biometric feature to obtain the first de-identified data, transforms the first de-identified data into the first feature vector including the plurality of first de-identified features.
Claims 3 and 13; Performs the de-identification processing on the first biometric feature to obtain the first de-identified data, and transforms the first de-identified data into the first feature vector including the plurality of first de-identified features to generate the self-key.
Claims 4 and 14; performs the de-identification processing on the first biometric feature to obtain the first de-identified data, and transforms the first de-identified data into the first feature vector including the plurality of first de-identified features.
Claims 10 and 20; Wherein the processor further employs a biometric identification technology to identify a living body in the second biometric feature, and, when identifying that there is the living body in the second biometric feature, performs the de-identification processing on the second biometric feature, wherein the biometric identification technology comprises a challenge-response technology.
Claims 9 and 19; Wherein the deep learning model comprises a plurality of neurons divided into multiple layers, the second biometric feature is transformed into feature values of a plurality of neurons at a layer among the multiple layers, and the transformed feature value of each of the plurality of neurons is added to a noise generated using a privacy parameter and then input into a next layer, after the multiple layers of processing, the second de-identified data is obtained.
Zhu teaches a similar device in the same field of biometric security.
Zhu teaches,
Claim 1 and 11; Wherein the self-key is generated by performing de-identification processing on a first biometric feature of a user([0054] discloses performing a privacy-removing process on a first biometric feature of a biological sample to obtain a second biometric feature; which can be done to the data obtained in DeLong to generate a self-key), and transforming the first de-identified data into a first feature vector including a plurality of first de-identified features([0063] and [0070] discloses creating a feature vector using the facial features and removing the facial features leaving only the vectors to verify); and configured to perform the de-identification processing on the second biometric feature to obtain a second de-identified data([0079] The privacy-removing process performed on the first biometric feature is also performed on the second), transform the second de-identified data into a second feature vector including a plurality of second de-identified features([0079] Discloses the privacy-removing process performed on the first biometric feature is also performed on the second which means the same feature vector is generated and enhances the security).
Claims 2 and 12; Performs the de-identification processing on the first biometric feature to obtain the first de-identified data, transforms the first de-identified data into the first feature vector including the plurality of first de-identified features([0063] and [0070] discloses creating a feature vector using the facial features and removing the facial features leaving only the vectors to verify).
Claims 3 and 13; Performs the de-identification processing on the first biometric feature to obtain the first de-identified data, and transforms the first de-identified data into the first feature vector including the plurality of first de-identified features to generate the self-key([0063] and [0070] discloses creating a feature vector using the facial features and removing the facial features leaving only the vectors to verify).
Claims 4 and 14; performs the de-identification processing on the first biometric feature to obtain the first de-identified data, and transforms the first de-identified data into the first feature vector including the plurality of first de-identified features([0063] and [0070] discloses creating a feature vector using the facial features and removing the facial features leaving only the vectors to verify).
Claims 10 and 20; Wherein the processor further employs a biometric identification technology to identify a living body in the second biometric feature([0039] Discloses The biometric feature identification includes a variety of identification methods such as fingerprint A, face B, iris C, palm print D, vein E, voice print F, gesture G which are features of a living body), and, when identifying that there is the living body in the second biometric feature([0063] and [0070] discloses creating a feature vector using the facial features and removing the facial features leaving only the vectors to verify), performs the de-identification processing on the second biometric feature, wherein the biometric identification technology comprises a challenge-response technology([0052] When the face to be verified needs to be authenticated, the face sample feature is compared with the face feature to be verified which is a form of challenge-response technology).
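For illustration only (hypothetical names, not code from Zhu), the de-identification pattern described above — deriving a feature vector from raw biometric data and retaining only the vector — can be sketched as follows; a hash digest stands in for a trained feature extractor to show the one-way property:

```python
import hashlib

def de_identify(raw_biometric: bytes, vector_length: int = 8):
    # Derive a fixed-length feature vector from the raw biometric
    # sample. A real system would use a trained feature extractor;
    # a hash digest is used here only to illustrate one-wayness.
    digest = hashlib.sha256(raw_biometric).digest()
    # Map the first `vector_length` bytes to floats in [0, 1).
    vector = [b / 256.0 for b in digest[:vector_length]]
    # Only the vector is returned; the raw sample is not retained,
    # so the original biometric cannot be reconstructed from it.
    return vector

template = de_identify(b"enrollment fingerprint sample")
probe = de_identify(b"enrollment fingerprint sample")
print(template == probe)  # identical input -> identical vector: True
```

Note that a hash, unlike a learned feature extractor, gives no tolerance for natural variation between biometric samples; the sketch shows only the discard-the-raw-data pattern.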
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify DeLong so that the self-key is generated by performing de-identification processing on a first biometric feature of a user and transforming the first de-identified data into a first feature vector including a plurality of first de-identified features; to perform the de-identification processing on the second biometric feature to obtain second de-identified data and transform the second de-identified data into a second feature vector including a plurality of second de-identified features; and likewise to perform the de-identification processing on the first biometric feature and transform the resulting first de-identified data into the first feature vector to generate the self-key, as taught by Zhu, for the purpose of protecting the privacy of the user's biometric data by ensuring that the original biometric features cannot be recovered from the stored feature vectors.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the processor further employing a biometric identification technology to identify a living body in the second biometric feature and, when identifying that there is the living body in the second biometric feature, performing the de-identification processing on the second biometric feature, wherein the biometric identification technology comprises a challenge-response technology, as taught by Zhu, for the purpose of preventing someone from spoofing the biometric lock with a mere photograph.
Streit teaches a similar device in the same field of biometric security.
Streit teaches,
Claims 1 and 11; Wherein the processor further employs a deep learning model that supports a privacy protection technology to perform the de-identification processing on the second biometric feature([0053] Discloses that a deep learning convolution network can be used for biometrics processing).
Claims 9 and 19; Wherein the deep learning model comprises a plurality of neurons divided into multiple layers(Fig. 4B shows a plurality of neurons divided into multiple layers), the second biometric feature is transformed into feature values of a plurality of neurons at a layer among the multiple layers, and the transformed feature value of each of the plurality of neurons is added to a noise generated using a privacy parameter and then input into a next layer, after the multiple layers of processing, the second de-identified data is obtained([0061] Discloses a convolutional deep neural network is executed to process the unencrypted biometric information and transform it into feature vector which has a property of being one-way encrypted cipher).
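For illustration only (an assumption for clarity, not code from Streit or the claims; all names and values are hypothetical), the claimed per-layer noise injection resembles a differential-privacy-style mechanism: each layer's neuron outputs are perturbed with noise scaled by a privacy parameter before feeding the next layer:

```python
import random

def noisy_forward(features, weights_per_layer, privacy_param=0.1):
    # Pass the biometric feature values through each layer, adding
    # noise generated from the privacy parameter to every neuron's
    # output before it is input into the next layer.
    values = features
    for weights in weights_per_layer:
        # Simple fully connected layer: each neuron is a weighted sum.
        values = [sum(w * v for w, v in zip(row, values)) for row in weights]
        # Perturb each neuron's value with Gaussian noise whose scale
        # is controlled by the privacy parameter.
        values = [v + random.gauss(0.0, privacy_param) for v in values]
    return values  # the de-identified data after all layers

layers = [[[0.5, -0.2], [0.3, 0.8]],   # layer 1: 2 neurons, 2 inputs each
          [[1.0, 0.4]]]                # layer 2: 1 neuron, 2 inputs
print(noisy_forward([0.6, 0.9], layers))
```

Larger values of the privacy parameter inject more noise, trading recognition accuracy for stronger obfuscation of the underlying biometric.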
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the self-key being generated by performing de-identification processing on a first biometric feature of a user and transforming the first de-identified data into a first feature vector including a plurality of first de-identified features, and the de-identification processing being performed on the second biometric feature to obtain second de-identified data that is transformed into a second feature vector including a plurality of second de-identified features, as taught by Streit, for the purpose of securing biometric data and preventing anyone from regenerating the original biometrics ([0061]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the processor further employing a deep learning model that supports a privacy protection technology to perform the de-identification processing on the second biometric feature, wherein the deep learning model comprises a plurality of neurons divided into multiple layers, the second biometric feature is transformed into feature values of a plurality of neurons at a layer among the multiple layers, and the transformed feature value of each of the plurality of neurons is added to a noise generated using a privacy parameter and then input into a next layer, after which the second de-identified data is obtained, as taught by Streit, for the purpose of producing a one-way encrypted feature vector from which the original biometric information cannot be regenerated ([0061]).
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over DeLong (US 2021/0229633) in view of Zhu (US 2023/0222843), further in view of Streit (US 2019/0280869), and further in view of Bielby (US 2021/0403017).
Regarding claims 6 and 16, DeLong in view of Zhu and Streit discloses the claimed invention substantially as claimed, as set forth above for claims 1 and 11.
However, DeLong in view of Zhu and Streit fails to disclose:
Claims 6 and 16; Wherein the processor further monitors a variation of the second biometric feature to determine a state of the current user, and activates another predetermined function of the vehicle according to a determination result.
Bielby teaches a similar device in the same field of biometric security.
Bielby teaches,
Claims 6 and 16; Wherein the processor further monitors a variation of the second biometric feature to determine a state of the current user([0011] discloses using biometric sensors to monitor the user to determine the status of the user, such as intoxication), and activates another predetermined function of the vehicle according to a determination result([0011] also discloses that aspects of the vehicle can be adjusted to accommodate a driver with a slower reaction time).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the processor further monitoring a variation of the second biometric feature to determine a state of the current user and activating another predetermined function of the vehicle according to a determination result, as taught by Bielby, for the purpose of detecting whether a user is falling asleep, intoxicated, or otherwise incapable of driving.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over DeLong (US 2021/0229633) in view of Zhu (US 2023/0222843), further in view of Streit (US 2019/0280869), and further in view of Trelin (US 2019/0031145).
Regarding claims 7 and 17, DeLong in view of Zhu and Streit discloses the claimed invention substantially as claimed, as set forth above for claims 1 and 11.
DeLong further discloses,
Claims 7 and 17; Wherein the first biometric feature acquisition device is configured to acquire a third biometric feature of an external user([0022] 106 contains biometric information to access the vehicle), compares the third feature vector with the first feature vector in the self-key, and opens a door of the vehicle according to a comparison result([0051] authentication can be based on a comparison of the biometric information and/or a digital signature obtained to the biometric information and/or a digital signature(s) stored on the physical key when the user was enrolled; [0012] Physical key 106 can be used to unlock a door 108 of the vehicle 102).
Zhu further discloses,
Claims 7 and 17; Wherein the processor further performs the de-identification processing on the third biometric feature to obtain a third de-identified data, transforms the third de-identified data into a third feature vector including a plurality of third de-identified features([0063] and [0070] discloses creating a feature vector using the facial features and removing the facial features leaving only the vectors to verify).
However, DeLong in view of Zhu and Streit fails to disclose:
Claims 7 and 17; Wherein the first biometric feature acquisition device is configured outside the vehicle.
Trelin teaches a similar device in the same field of biometric security.
Trelin teaches,
Claims 7 and 17; Wherein the first biometric feature acquisition device is configured outside the vehicle([0088] a biometric reader on a device coupled to the vehicle can be located outside of the vehicle).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the first biometric feature acquisition device being configured outside the vehicle, as taught by Trelin, for the purpose of conveniently registering a user without requiring access to the interior of the vehicle.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to John Merino whose telephone number is (703)756-4721. The examiner can normally be reached Mon - Fri 11am-7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Piateski can be reached at (571) 270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/John C Merino/Patent Examiner, Art Unit 3669
/Erin M Piateski/Supervisory Patent Examiner, Art Unit 3669