Prosecution Insights
Last updated: April 19, 2026
Application No. 18/412,254

SYSTEM AND METHOD FOR SECURING ELECTRONIC IDENTITY DATA USING ELECTRONIC DATA OBFUSCATION AND MASKING

Status: Final Rejection (§103)
Filed: Jan 12, 2024
Examiner: SONG, HEE K
Art Unit: 2497
Tech Center: 2400 — Computer Networks
Assignee: BANK OF AMERICA CORPORATION
OA Round: 2 (Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (653 granted / 770 resolved; +26.8% vs TC avg) — above average
Interview Lift: +19.7% (strong, roughly +20%), based on resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 13 applications currently pending
Career History: 783 total applications across all art units
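
The headline allow rate follows directly from the raw counts above. A minimal sketch of the arithmetic (the display rounding is an assumption):

```python
# Derive the headline figure from the raw counts shown above.
granted, resolved = 653, 770
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 84.8%, displayed as 85%
```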

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 45.9% (+5.9% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)

Based on career data from 770 resolved cases; Tech Center averages are estimates.
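
If the "vs TC avg" deltas are simple differences (an assumption; the tool does not document its methodology), the Tech Center baseline implied by each row can be reconstructed:

```python
# Implied TC average per statute: examiner rate minus the reported delta.
stats = {"§101": (11.7, -28.3), "§103": (45.9, 5.9),
         "§102": (18.3, -21.7), "§112": (11.2, -28.8)}
for statute, (rate, delta) in stats.items():
    print(f"{statute}: examiner {rate:.1f}%, implied TC avg {rate - delta:.1f}%")
# Every row implies roughly 40.0%, consistent with a single TC-wide baseline.
```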

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the response filed on 12/29/2025, claims 1, 8, and 14 were amended; no claims were added; no claims were cancelled. As a result, claims 1-20 are pending, of which claims 1, 8, and 14 are in independent form.

Response to Arguments

On page 8 of the Remarks filed on 12/29/2025, applicant argues that "At no point does Streit teach or reasonably suggest generating a plurality of decoy identifiers, much less generating decoy identifiers based on random modifications to the biometric features (e.g., changing a facial geometry, changing a ridge or valley of a fingerprint scan, etc.) as recited in independent claims 1, 8, and 14." This argument has been carefully and respectfully considered in view of the newly added limitation. However, the argument is moot in view of a new ground of rejection based on Obaidi (US 2022/0164470 A1).

Obaidi in paragraph [0037] discloses gateway application 226 including an obfuscation module that obfuscates biometric data. Obaidi states in paragraph [0036] that "an obfuscation operation may comprise adding/altering pixels within the image data either randomly or pseudo-randomly at an area within the image that corresponds to the data to be altered. In this example, pixels located within some predetermined distance of an image portion determined to be associated with biometric data may be randomly replaced with pixels of other colors. Alternatively, the obfuscation module 310 may introduce a blurring effect at the area within the image that corresponds to the data to be altered (within some predetermined distance surrounding the biometric data). In another example, if the sensor data is audio data that includes spoken speech, the obfuscation operation may identify the spoken speech within the audio data (e.g., using one or more machine learning techniques) and may alter the identified spoken speech within the audio data. For example, the spoken speech may be altered by changing a pitch, frequency, or tone of the speech within the audio data." The examiner considers the altered biometric that results from the obfuscation operation to be equivalent to the decoy identifiers of the instant application. Paragraph [0061] of Obaidi discloses an obfuscation operation similar to that of paragraph [0039]. In paragraph [0055], Obaidi also teaches another obfuscation operation of overlaying the region with an overlay that includes image noise, where the image noise comprises pixels selected randomly, pseudo-randomly, or in a pattern and distributed over the region: "In some embodiments, this involves applying a blurring effect to the region by altering pixels by 'blending' the colors of the pixels with the colors of pixels close by, thereby removing sharp edges within the region. An altered image 512 is then created by applying the created overlay to the unaltered image 508."
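
To make the quoted obfuscation operation concrete, here is a minimal sketch of random pixel replacement near a detected biometric region, loosely following Obaidi paragraphs [0036] and [0039]. The function name, the 50% replacement probability, and the bounding-box interface are illustrative assumptions, not Obaidi's implementation:

```python
import numpy as np

def obfuscate_region(image, box, margin=8, replace_prob=0.5, rng=None):
    """Randomly replace pixels within `margin` px of a biometric region.

    `image` is an HxWxC uint8 array; `box` is (x0, y0, x1, y1). Sketch only;
    Obaidi also describes blurring and audio pitch/frequency/tone shifts.
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    x0, y0, x1, y1 = box
    x0, y0 = max(x0 - margin, 0), max(y0 - margin, 0)
    x1 = min(x1 + margin, image.shape[1])
    y1 = min(y1 + margin, image.shape[0])
    region = out[y0:y1, x0:x1]                      # view into the copy
    mask = rng.random(region.shape[:2]) < replace_prob
    # Replace the selected pixels with pixels of random colors.
    region[mask] = rng.integers(0, 256, (int(mask.sum()), image.shape[2]),
                                dtype=np.uint8)
    return out
```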
As to the third argument, on page 9, applicant argues that "At no point does Streit teach or suggest storing the one or more unique characteristic identifiers with a plurality of decoy identifiers within a unique characteristic identifier database as recited in independent claims 1, 8, and 14." The examiner notes that the context of "storing the one or more unique characteristic identifiers with a plurality of decoy identifiers within a unique characteristic identifier database" has changed with the newly added limitation, since it is now clear that at least one decoy identifier results from the randomized modification of one or more features of the unique characteristic identifiers. As a result of the amendment, the storing now indicates storing a mixture of one or more decoy identifiers and the corresponding unique characteristic identifiers. This argument is moot in view of a new ground of rejection based on Chabanne et al. (WO 2006056683 A1), claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Streit (US 2020/0228336 A1), in view of Obaidi (US 2022/0164470 A1), and further in view of Chabanne et al. (WO 2006056683 A1), hereinafter Chabanne.

As to claim 1, Streit teaches a system for securing electronic identity data using electronic data obfuscation and masking, the system comprising: a processing device; and a non-transitory storage device containing instructions that, when executed by the processing device, cause the processing device to perform the steps of:

receiving one or more unique characteristic identifiers from an endpoint computing device (see Fig. 1, 102; see also para. [0193]-[0194]: "[0193] FIG. 1 is an example process flow 100 for enrolling in a privacy-enabled biometric system (e.g., FIG. 3, 304 described in greater detail below or FIG. 7, 704 above). Process 100 begins with acquisition of unencrypted biometric data at 102. The unencrypted biometric data (e.g., plaintext, reference biometric, etc.) can be directly captured on a user device, received from an acquisition device, or communicated from stored biometric information. In one example, a user takes a photo of themselves on their mobile device for enrollment. Pre-processing steps can be executed on the biometric information at 104. For example, given a photo of a user, pre-processing can include cropping the image to significant portions (e.g., around the face or facial features). Various examples exist of photo processing options that can take a reference image and identify facial areas automatically. [0194] In another example, the end user can be provided a user interface that displays a reference area, and the user is instructed to position their face from an existing image into the designated area. Alternatively, when the user takes a photo, the identified area can direct the user to focus on their face so that it appears within the highlighted area. In other options, the system can analyze other types of images to identify areas of interest (e.g., iris scans, hand images, fingerprint, etc.) and crop images accordingly. In yet other options, samples of voice recordings can be used to select data of the highest quality (e.g., lowest background noise), or can be processed to eliminate interference from the acquired biometric (e.g., filter out background noise).");

identifying, using an identifier scrambling engine, one or more features of the one or more unique characteristic identifiers (see Fig. 1, step 110; see also para. [0199]: "In one embodiment, a convolutional deep neural network is executed to process the unencrypted biometric information and transform it into feature vector(s) which have a property of being one-way encrypted cipher text. The neural network is applied (108) to compute a one-way homomorphic encryption of the biometric—resulting in feature vectors (e.g., at 110). These outputs can be computed from an original biometric using the neural network but the values are one way in that the neural network cannot then be used to regenerate the original biometrics from the outputs."); and

generating, using an identifier scrambling engine, a plurality of decoy identifiers based on the one or more unique characteristic identifiers, wherein each of the plurality of decoy identifiers comprises a randomized modification to the one or more features of the one or more unique characteristic identifiers and is associated with a randomly generated decoy identity (e.g., encrypted features; see para. [0203]-[0204]: "[0203] In some embodiments, an optional step can be executed as part of process 100 (not shown). The optional step can be executed as a branch or fork in process 100 so that authentication of a user can immediately follow enrollment of a new user or authentication information. In one example, a first phase of enrollment can be executed to generate encrypted feature vectors. The system can use the generated encrypted feature vectors directly for subsequent authentication. For example, distance measures can be applied to determine a distance between enrolled encrypted feature vectors and a newly generated encrypted feature vector. Where the distance is within a threshold, the user can be authenticated or an authentication signal returned. In various embodiments, this optional authentication approach can be used while a classification network is being trained on encrypted feature vectors in the following steps. [0204] The resulting feature vectors are bound to a specific user classification at 112. For example, deep learning is executed at 112 on the feature vectors based on a fully connected neural network (e.g., a second neural network, an example classifier network). The execution is run against all the biometric data (i.e., feature vectors from the initial biometric and training biometric data) to create the classification information. According to one example, a fully connected neural network having two hidden layers is employed for classification of the biometric data. In another example, a fully connected network with no hidden layers can be used for the classification. However, the use of the fully connected network with two hidden layers generated better accuracy in classification in some example executions (see e.g., Tables I-VIII described in greater detail below). According to one embodiment, process 100 can be executed to receive an original biometric (e.g., at 102), generate feature vectors (e.g., 110), and apply a FCNN classifier to return a label for identification at 112 (e.g., output # people).").

Streit does not explicitly teach the following limitations: "obfuscating the one or more unique characteristic identifiers by interspersing one or more artifacts into the one or more unique characteristic identifiers; and storing the one or more unique characteristic identifiers and the plurality of decoy identifiers within a unique characteristic identifier database."

Obaidi teaches "obfuscating the one or more unique characteristic identifiers by interspersing one or more artifacts into the one or more unique characteristic identifiers" (see para. [0039], reciting the same obfuscation operation quoted above in the Response to Arguments; the examiner considers the altered biometric that results from the obfuscation operation to be equivalent to the decoy identifiers of the instant application; see also para. [0055]: "In some embodiments, this involves applying a blurring effect to the region by altering pixels by 'blending' the colors of the pixels with the colors of pixels close by, thereby removing sharp edges within the region. An altered image 512 is then created by applying the created overlay to the unaltered image 508.").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Streit and Obaidi before him or her, to modify the scheme of Streit by including Obaidi. The suggestion/motivation for doing so would have been to protect the sensitive, identity-determining biometric data by obfuscating some of the features in the biometric data.
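
The "generating a plurality of decoy identifiers" limitation mapped above can also be illustrated in a few lines. The claims do not specify how the randomization works; Gaussian perturbation of a feature vector and a random hex string for the decoy identity are assumptions used only for illustration:

```python
import numpy as np

def generate_decoys(features, n_decoys=5, scale=0.1, rng=None):
    """Produce decoys as randomized modifications of the real feature
    vector, each paired with a randomly generated decoy identity."""
    rng = rng or np.random.default_rng()
    return [{"identity": rng.bytes(16).hex(),
             "features": features + rng.normal(0.0, scale, features.shape)}
            for _ in range(n_decoys)]

real = np.array([0.12, -0.98, 0.44])  # stand-in for an extracted identifier
decoys = generate_decoys(real)
```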
The combination of Streit and Obaidi does not explicitly teach, but Chabanne teaches, "storing the one or more unique characteristic identifiers and the plurality of decoy identifiers within a unique characteristic identifier database" (see claim 1: "A method of identifying a user, characterized in that the method is implemented by means of a personal user database, containing for each user at least a first unmodified biometric characteristic (E_1,i), at least one second biometric characteristic (E_2,i) modified by means of at least one modification (T) and accessible from the first unmodified biometric characteristic, and at least one identification datum (D) accessible from a code identifying the modification performed on the second biometric characteristic, and in that the method comprises the steps of: reading (1) from the user a first biometric characteristic (e_1,i) and comparing it with the first unmodified biometric characteristics of the database to identify the first unmodified biometric characteristic corresponding to the user; reading (4) from the user a second biometric characteristic (e_2,i) and comparing it to the second modified biometric characteristic corresponding to the first unmodified biometric characteristic of the user to determine (5) the modification made and deduce the code identifying it; and extracting (6) the identification data by means of the code thus deduced.").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Streit, Obaidi, and Chabanne before him or her, to modify the scheme of Streit and Obaidi by including Chabanne. The suggestion/motivation for doing so would have been to deduce the code used to identify the user from the operation involving the modified biometric data and the corresponding unmodified biometric data.

As to claims 8 and 14, these claims include limitations similar to those of claim 1 and are thus rejected under the same rationale as claim 1.

As to claims 2, 9, and 15, in view of claims 1, 8, and 14, respectively, Streit teaches wherein the one or more unique characteristic identifiers comprise a facial image scan (see para. [0194]: "...the user is instructed to position their face from an existing image into the designated area. Alternatively, when the user takes a photo, the identified area can direct the user to focus on their face so that it appears within the highlighted area. In other options, the system can analyze other types of images to identify areas of interest (e.g., iris scans, hand images, fingerprint, etc.) and crop images accordingly."), wherein the randomized modification to the one or more features of the one or more unique characteristic identifiers comprises at least one of a change to facial geometry or a change to a hue of one or more elements of the facial image scan (see para. [0199]: "...a convolutional deep neural network is executed to process the unencrypted biometric information and transform it into feature vector(s) which have a property of being one-way encrypted cipher text. The neural network is applied (108) to compute a one-way homomorphic encryption of the biometric—resulting in feature vectors (e.g., at 110).").
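
For the "storing ... within a unique characteristic identifier database" limitation, a minimal sketch of interleaved storage follows. The SQLite schema and the shuffle step are assumptions about what interspersed storage might look like; they are not Chabanne's method:

```python
import json
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE identifiers (identity TEXT, features TEXT)")

real = {"identity": "user-123", "features": [0.12, -0.98, 0.44]}
decoys = [{"identity": f"decoy-{i}",
           "features": [f + random.gauss(0.0, 0.1) for f in real["features"]]}
          for i in range(5)]

rows = [(r["identity"], json.dumps(r["features"])) for r in [real] + decoys]
random.shuffle(rows)  # intersperse the real record among the decoys
conn.executemany("INSERT INTO identifiers (identity, features) VALUES (?, ?)",
                 rows)
conn.commit()
```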
As to claims 3, 10, and 16, in view of claims 1, 8, and 14, respectively, Streit teaches wherein the one or more unique characteristic identifiers comprise a fingerprint scan, wherein the randomized modification to the one or more features of the one or more unique characteristic identifiers comprises a change to a ridge or valley of the fingerprint scan (see para. [0194] and [0199]).

As to claims 4, 11, and 17, in view of claims 1, 8, and 14, respectively, Streit teaches wherein the one or more unique characteristic identifiers comprise a facial image scan, wherein the randomized modification to the one or more features of the one or more unique characteristic identifiers comprises at least one of a change to facial geometry or a change to a hue of one or more elements of the facial image scan (see para. [0194] and [0199]).

As to claims 5, 12, and 18, in view of claims 1, 8, and 14, respectively, Streit teaches wherein the identifier scrambling engine is an artificial intelligence ("AI") engine, wherein identifying the one or more features of the one or more unique characteristic identifiers comprises performing AI-based detection of the one or more features (see para. [0201]-[0202]: "[0201] ...the new neural network has additional properties. This neural network is specially configured to enable incremental training (e.g., on new users and/or new feature vectors) and configured to distinguish between a known person and an unknown person. In one example, a fully connected neural network with 2 hidden layers and a 'hinge' loss function is used to process input feature vectors and return a known person identifier (e.g., person label or class) or indicate that the processed biometric feature vectors are not mapped to a known person. For example, the hinge loss function outputs one or more negative values if the feature vector is unknown. In other examples, the output of the second neural network is an array of values, wherein the values and their positions in the array determined a match to a person or identification label. [0202] Various embodiments use different machine learning models for capturing feature vectors in the first network. According to various embodiments, the feature vector capture is accomplished via a pre-trained neural network (including, for example, a convolutional neural network) where the output is distance measurable (e.g., Euclidean measurable). In some examples, this can include models having a softmax layer as part of the model, and capture of feature vectors can occur preceding such layers. Feature vectors can be extracted from the pre-trained neural network by capturing results from the layers that are Euclidean measurable. In some examples, the softmax layer or categorical distribution layer is the final layer of the model, and feature vectors can be extracted from the n−1 layer (e.g., the immediately preceding layer). In other examples, the feature vectors can be extracted from the model in layers preceding the last layer. Some implementations may offer the feature vector as the last layer.").

As to claims 6, 13, and 19, in view of claims 1, 8, and 14, respectively, Streit teaches wherein the randomized modification is performed using a generative AI-based process (see para. [0201]-[0202]).

As to claims 7 and 20, in view of claims 1 and 14, respectively, Streit teaches wherein the endpoint device is a mobile device of a user, wherein the one or more unique characteristic identifiers are captured from the user through the mobile device (see para. [0194]).
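
Streit's paragraph [0202], quoted above for claims 5, 12, and 18, describes capturing distance-measurable feature vectors from the layer preceding a model's softmax. A minimal PyTorch sketch of that general pattern follows; the ResNet-18 backbone, input shape, and distance threshold are stand-in assumptions, not Streit's actual network:

```python
import torch
from torchvision import models

# Pre-trained CNN with the final classification layer removed, so the output
# is the penultimate ("n-1") layer's Euclidean-measurable feature vector.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
extractor.eval()

with torch.no_grad():
    face = torch.randn(1, 3, 224, 224)      # stand-in for a cropped face image
    enrolled = extractor(face).flatten(1)   # 512-dim feature vector
    probe = extractor(face + 0.01 * torch.randn_like(face)).flatten(1)
    match = torch.dist(enrolled, probe) < 1.0  # authenticate below a threshold
```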
Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEE K SONG, whose telephone number is (571) 270-3260. The examiner can normally be reached M-F, 9:00 am - 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Eleni Shiferaw, can be reached at (571) 272-3867. The fax number for the organization where this application or proceeding is assigned is 571-273-7291.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HEE K SONG/
Primary Examiner, Art Unit 2497

Prosecution Timeline

Jan 12, 2024
Application Filed
Sep 28, 2025
Non-Final Rejection — §103
Dec 29, 2025
Response Filed
Feb 18, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596823: System and Method for Protecting Information (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598162: SECURE TUNNEL PROXY WITH SOFTWARE-DEFINED PERIMETER FOR NETWORK DATA TRANSFER (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585763: DETECTING AND RESPONDING TO ENVIRONMENTAL CONDITION-INDUCED SECURITY ATTACKS ON SEMICONDUCTOR PACKAGES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579297: INCORPORATING LARGE LANGUAGE MODEL PROMPTS IN GRAPH QUERY LANGUAGE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574739: ID TRANSMITTER FOR AUTHENTICATION, SET FOR ASSEMBLING AN ID TRANSMITTER, AND SYSTEM (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 85%
With Interview: 99% (+19.7%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 770 resolved cases by this examiner; grant probability is derived from the career allow rate.

Free tier: 3 strategy analyses per month