Prosecution Insights
Last updated: April 19, 2026
Application No. 18/753,509

SYSTEMS AND METHODS FOR ENHANCING SECURITY ASSOCIATED WITH NETWORKED DEVICES VIA ARTIFICIAL INTELLIGENCE ENHANCED PROCESSING

Non-Final OA (§101, §103)
Filed: Jun 25, 2024
Examiner: VOGT, JACOB BUI
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: BANK OF AMERICA CORPORATION
OA Round: 1 (Non-Final)
Grant Probability: 57% (Moderate)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 57% (4 granted / 7 resolved; -4.9% vs TC avg)
Interview Lift: +100.0% in resolved cases with interview
Typical Timeline: 2y 10m avg prosecution; 33 currently pending
Career History: 40 total applications across all art units
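The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how they could be recomputed; the formulas are assumptions about the tool's methodology, not documented by it:

```python
# Hypothetical recomputation of the dashboard figures above.
# Function names and the lift formula are illustrative assumptions.

def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a fraction of resolved cases."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative improvement (%) in allowance rate when an interview is held."""
    return (rate_with - rate_without) / rate_without * 100

# 4 granted / 7 resolved, as shown in the Examiner Intelligence panel.
career = allowance_rate(4, 7)
print(f"Career allow rate: {career:.0%}")  # prints "Career allow rate: 57%"

# A +100.0% lift means interviewed cases were allowed at twice the rate of
# non-interviewed ones (0.8 vs 0.4 here is a toy example, not real data).
print(f"Interview lift: {interview_lift(0.8, 0.4):+.1f}%")  # prints "Interview lift: +100.0%"
```
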

Statute-Specific Performance

§101: 35.1% (-4.9% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 7 resolved cases

Office Action

§101 §103
DETAILED ACTION

This communication is in response to the Application filed on 06/25/2024. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The IDS dated 12/09/2024 has been considered and placed in the application file.

Claim Objections

Claim 9 is objected to because of the following informalities: Claim 9, line 1, should be “A computer program product for enhancing security”. Claims 10-14 depend either directly or indirectly from claim 9; therefore, they are also objected to. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 9-11, and 15-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Under Step 1, all of the claims fall within a statutory category as method claims (15-17), machine claims (1-4), or manufacture claims (9-11), but under Step 2A all of these claims recite abstract ideas, specifically mental processes.
These mental processes are more particularly recited in claims 1, 9, and 15 as:

initiating data collection based on a direct user input into a data transmission device…

authenticating a user based on a data transmission device onboard generative AI analysis of the direct user input compared to at least one previous direct user input…

generating, in response to the authentication, a user dataset from the direct user input and from a plurality of indirect user inputs to the data transmission device…

validating the direct user input based on a threat score of the user dataset if the threat score of the user dataset is above a required threat score threshold or invalidate the direct user input if the threat score of the user dataset is below the required threat score threshold…

triggering a response from the data transmission device based on the validation or the invalidation…

Under Step 2A Prong One, claims 1, 9, and 15 are directed to an abstract idea and specifically a mental process. As detailed above, the steps of initiating, authenticating, generating, validating, triggering, etc. may be practically performed in the human mind with the use of a physical aid such as a pen and paper. For example, a human could, upon receiving a request for information retrieval from a second human, ask a plurality of questions collecting data from the second human, analyze the second human’s responses to the questions, identify that the second human is actually the second human based on the analysis, organize a set of user documents from both the collected data and environmental details (e.g. location of second human, context of information retrieval request), estimate a probability of fraud based on both the collected data and environmental details, compare the probability value to a preset threshold, and return user documents to the user based on the comparison.
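As an editorial aside, the validate/invalidate limitation recited above is, mechanically, a single threshold comparison followed by a device response. A minimal sketch of that logic as claimed; all names and the threshold value are hypothetical and are not drawn from the application or the cited references:

```python
# Hypothetical sketch of the validate/invalidate limitation as recited:
# validate the direct user input when the user dataset's threat score is
# above the required threshold, invalidate it when below, and trigger a
# response either way. All identifiers here are illustrative only.

REQUIRED_THREAT_SCORE_THRESHOLD = 0.75  # assumed value for illustration

def validate_direct_input(threat_score: float,
                          threshold: float = REQUIRED_THREAT_SCORE_THRESHOLD) -> bool:
    """True = validated (score above threshold), False = invalidated."""
    return threat_score > threshold

def trigger_response(validated: bool) -> str:
    """Response triggered by the data transmission device after validation."""
    return "proceed" if validated else "reject"

print(trigger_response(validate_direct_input(0.9)))  # prints "proceed"
print(trigger_response(validate_direct_input(0.2)))  # prints "reject"
```

Note the polarity as recited: a threat score above the threshold validates the input, which is the mapping the examiner later draws to Gupta's risk score failing to "satisfy a threshold risk value" to reject an authentication attempt.
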
Under Step 2A Prong Two, this judicial exception is not integrated into a practical application because claims 1-4, 9-11, and 15-17 do not recite additional elements that integrate the exception into a practical application. In particular, claims 1, 9, and 15 recite the additional elements of a memory device (¶ [0041]), a processing device (¶ [0040]), a communication device (¶ [0043]), and onboard generative artificial intelligence (¶ [00101]). These additional elements are recited at a high level of generality and merely equate to “apply it,” or otherwise merely use a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application per MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Under Step 2B, the claims do not recite additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements are generic computer components: a memory device (¶ [0041]); a processing device (¶ [0040]); a communication device (¶ [0043]); and onboard generative artificial intelligence (¶ [00101]). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

With respect to claim 2, the claim relates to encrypting and comparing a voice command to a plurality of stored user voice commands. This relates to a human first taking notes about the prosody of an inbound voice sample of the second human, and then comparing the inbound voice sample with a plurality of voice recordings of the second human. No additional limitations are present.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claims 3, 10, and 16, the claims relate to aggregating user data from a plurality of locations, configuring the user data, prioritizing data elements from both stored user data and inbound user data, transferring the data elements to a threat analytics module, and then repeating the prioritization and transfer steps. This relates to a human collecting information about the second human from multiple filing cabinets, organizing the information, choosing two data elements to compare between the inbound data and stored data, and after comparing the two elements, repeating the process for other data elements. No additional limitations are present. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claims 4, 11, and 17, the claims relate to receiving data elements from an orchestration engine, comparing the data elements to generate a match score, and then transferring the generated match scores to an AI or ML model. This relates to a human computing a similarity between the two elements chosen in the orchestration step and then storing the computed similarity value using pen and paper. The limitation of an “AI or ML model” is recited at a high level of generality (¶ [0095]) and merely equates to “apply it,” or otherwise merely uses a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application per MPEP 2106.05(f). No additional limitations are present. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

For all of the above reasons, taken alone or in combination, claims 1-4, 9-11, and 15-17 recite a non-statutory mental process.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 9, and 15 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 20220392453 A1 (Gupta et al.) in view of US Patent Publication 20220029981 A1 (Mavani).

Claim 1

Regarding claim 1, Gupta et al. disclose a system for enhancing security associated with networked devices via artificial intelligence (AI) enhanced processing, the system comprising: a memory device with computer-readable program code stored thereon (Gupta et al. ¶ [0224], "When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium."); at least one processing device operatively coupled to the at least one memory device and the at least one communication device (Gupta et al. ¶ [0211], "In some embodiments, a system comprises a database and a computer. The database comprises non-transitory machine-readable storage configured to store a plurality of enrollee-records for a plurality of enrollee-users. The computer is in communication with the database and comprises a processor."), wherein executing the computer-readable code is configured to cause the at least one processing device to: initiate data collection based on a direct user input into a data transmission device (Gupta et al.
¶ [0126], "In operation 130, the user interacts with the user device 114, prompting the user device 114 to initiate a new function of the end-user device 114 or access a new computing service 105 hosted by the provider server 106. The user device 114 transmits a transaction request to the provider server 106."); generate, [in response to the authentication,] a user dataset from the direct user input and from a plurality of indirect user inputs to the data transmission device (Gupta et al. ¶ [0127], "In operation 132, the provider server 106 invokes the computing service 105 by sending an authentication request to the computing service 105. The authentication request includes various types of inbound contact data received or otherwise captured from an instruction the user device 114, such as a device identifier, a voice command, and various metadata." Inbound contact data is considered analogous to a user dataset comprising direct and indirect user inputs); validate the direct user input based on a threat score of the user dataset if the threat score of the user dataset is above a required threat score threshold (Gupta et al. ¶ [0132]-[0135], "In operation 141, the identification server 102 computes a risk score by executing a risk engine 122a on the extracted inbound features. The risk engine 122a ingests the extracted inbound features and applies a DNN classifier to predict a risk score, which the risk engine 122a outputs as a classification level of risk or as a risk score representing a likelihood of fraud or other threat." A risk score is considered analogous to a threat score) or invalidate the direct user input if the threat score of the user dataset is below the required threat score threshold (Gupta et al. ¶ [0139], "the computing service 105 determines whether to authenticate the user based upon the risk score generated by the risk engine 122a (in operation 141). ... 
If the computing service 105 determines that the risk score fails to satisfy a threshold risk value, then the computing service 105 rejects the user's authentication attempt for the requested transaction."); and trigger a response from the data transmission device based on the validation or the invalidation of the direct user input (Gupta et al. ¶ [0145], "In operation 152, the provider server 106 generates and transmits the approval notification or the denial notification to the user device 114 in accordance with the authorization result notification generated by the computing service 105.").

Gupta et al. do not explicitly disclose all of authenticating a user using onboard generative AI before creating the user dataset. However, Mavani discloses authenticating a user based on a data transmission device onboard generative artificial intelligence (AI) analysis of a direct user input (Mavani ¶ [0035]-[0038], "User device 130 may be equipped with a virtual assistant ... a virtual assistant may include functionality to facilitate user interaction via user device 130" A virtual assistant that facilitates user interaction is considered analogous to a generative AI that analyzes direct user inputs) compared to at least one previous direct user input (Mavani ¶ [0056]-[0058], "During an enrollment process, an enterprise organization may ask a user to speak a key phrase ... in order to obtain a voice print of the key phrase for the user. The voice print for the user may then be stored in the voice biometric database 112c.... When the user subsequently attempts to access account information, the user may be prompted to speak the key phrase again... the audio data processing engine 112a may receive the spoken key phrase and generate a voice print for the spoken key phrase. The voice print may then be compared to one or more of the voice biometric signatures stored in the voice biometric database 112c.
… If there is a match, then the voice biometric authentication module 112b may conclude that the user that provided the voice command is the same user associated with the voice biometric signature."); and generating, in response to the authentication, a user dataset from the direct user input (Mavani ¶ [0072], "Upon determining that there is a mismatch between the additional passive voice monitoring data and the voice biometric signature, voice biometric training server 120 may update or refine the voice biometric signature at step 206." Updating a voice biometric signature is considered analogous to generating a user dataset) [and from a plurality of indirect user inputs to the data transmission device].

It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Gupta et al.’s authentication system to incorporate Mavani’s initial authentication because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Gupta et al.’s authentication system as modified by Mavani’s initial authentication can yield a predictable result of reducing resource utilization since the system would be able to skip creating a user dataset dependent on whether the initial authentication fails. Thus, a person of ordinary skill would have appreciated including in Gupta et al.’s authentication system the ability to do Mavani’s initial authentication since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 2

Regarding claim 2, the rejection of claim 1 is incorporated. Gupta et al.
in view of Mavani disclose all the elements of the claimed invention as stated above. Gupta et al. further disclose wherein the authentication of the user is based on the direct user input (Gupta et al. ¶ [0024], "Speaker recognition (voice biometrics) utilizes unique characteristics of a person's voice to identify or authenticate the person as a user of a device or service."), and the direct user input comprises a user voice command that is encrypted (Gupta et al. ¶ [0024], "These unique characteristics may be evaluated to generate feature vectors combined from multiple samples of the user, to produce an embedding vector (sometimes called a “voiceprint”)." Generating a voiceprint is considered analogous to encrypting a user voice command) and compared with a plurality of stored user voice commands (Gupta et al. ¶ [0135], "Using the enrolled voiceprints of the potential identities, the voice bio engine 122c computes a similarity score for each potential identity indicating a similarity or distance between the enrolled voiceprint and the inbound voiceprint extracted for the end-user.").

Claim 9

Regarding claim 9, Gupta et al. disclose a computer program for enhancing security associated with networked devices via AI enhanced processing (Gupta et al. ¶ [0224], "When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium."). The remaining limitations of claim 9 are similar in scope to that of claim 1 and therefore are rejected for similar reasons as described above.

Claim 15

Regarding claim 15, the limitations of claim 15 are similar in scope to that of claim 1 and therefore are rejected for similar reasons as described above.

Claims 3, 4, 10, 11, 16, and 17 are rejected under 35 U.S.C. 103 as obvious over Gupta et al. in view of Mavani as applied to claims 1, 9, and 15 above, and further in view of US Patent Publication 20240232308 A1 (Beaver et al.).
Claim 3

Regarding claim 3, the rejection of claim 1 is incorporated. Gupta et al. in view of Mavani disclose all the elements of the claimed invention as stated above. Gupta et al. further disclose wherein the user dataset is transferred to an orchestration engine configured to: receive the user dataset from the data transmission device (Gupta et al. ¶ [0127], "In operation 132, the provider server 106 invokes the computing service 105 by sending an authentication request to the computing service 105. The authentication request includes various types of inbound contact data received or otherwise captured from an instruction the user device 114, such as a device identifier, a voice command, and various metadata."); request and aggregate a plurality of stored user data from a plurality of data storage locations (Gupta et al. ¶ [0110], "the identification server 102 queries all or most of the enrolled embeddings stored in the identity database 104a" ¶ [0135], "the voice bio engine 122c queries a voiceprint database 104b to retrieve the enrolled voiceprints associated with the set of potential identities."), wherein the plurality of stored user data comprises previous direct user inputs (Gupta et al. ¶ [0135], "the voice bio engine 122c queries a voiceprint database 104b to retrieve the enrolled voiceprints associated with the set of potential identities." ¶ [0026], "The speech recognition engine can generate the enrollee voiceprint using the speaker feature vectors or embeddings extracted from the enrollee audio signals containing the utterances of the speaker." An enrolled voiceprint is considered analogous to a direct user input) and indirect user inputs (Gupta et al.
¶ [0107], "the identification server 102 generates an enrolled context-print embedding for an enrollee's contextual “scene” (e.g., public setting, private setting, at home, at work, at school, at expected location, at unexpected location) or transaction context (e.g., transaction or function offered by the service provider system 103 that the end-user intended to access) by applying a trained context engine 128 on transaction context data within instances of the enrollment contact data." A context-print is considered analogous to indirect user inputs); configure the user dataset and the plurality of stored user data for analysis in a threat analytics module (Gupta et al. ¶ [0105]-[0108], "The identification server 102 then algorithmically combines (e.g., averages) each of the enrollment feature vectors (as extracted from the enrollment data) to generate an enrollment embedding of a given type (e.g., voiceprint, context-print, faceprint) using the one or more enrollment feature vectors. ... the identification server 102 algorithmically combines (e.g., averages, concatenates, convolves) one or more inbound feature vectors (as extracted from the inbound contact data) to generate one or more inbound embeddings. The identification server 102 executes programming for determining similarity scores based upon a distance (or other algorithm) between the inbound embeddings and the corresponding enrolled embeddings of one or more enrollees."); transfer the data elements to a threat analytics module (Gupta et al. ¶ [0096], "Various types of functional engines 122 for spoof detection may be trained to detect if a speech utterance is genuine, replayed, distorted, or synthesized, and applied to the (enrollment and inbound) contact data." Functional engines trained to detect speech authenticity are considered analogous to a threat analytics module); and repeat the [prioritization and] transfer for a plurality of subsequent data elements (Gupta et al.
¶ [0056], "The identity app may instruct the user device 114 to transmit data in the background to the identification server 102, databases 104, or provider server 106, continuously as a data stream.... the user device 114 may transmit certain types of data when the user device 114 launches and executes the identity app and then every five minutes transmits the data (or updates to the data); but also the user device 114 may continuously stream other types of data to the server 102, 106." Continuously streaming new data to an identification server for comparison is considered analogous to repeating the above process for a plurality of subsequent data elements).

Gupta et al. in view of Mavani do not explicitly disclose all of prioritizing a data element from a user dataset and a data element from a plurality of stored user data. However, Beaver et al. disclose prioritizing a data element from a user dataset and a data element from a plurality of stored user data to be output (Beaver et al. ¶ [0026]-[0029], "Flow 100 then proceeds to step 106 with determining whether a reference voice print is stored for the user, such as in user account database 218 in FIG. 2 ... if at step 106, there is not a reference voice print stored for the user, or at step 108, the sample voice print does not match the reference voice print, then the user must be manually authenticated at step 110. A user may be manually authenticated, for example, through password-based authentication" Passwords and voice prints are considered analogous to data elements. Thus, preferring to first authenticate a user via voiceprint before resorting to manual authentication via password is considered analogous to prioritizing a data element from both a user dataset and stored user data); transferring the data elements to a threat analytics module (Beaver et al. ¶ [0050], "Identity provider service 220 is configured to authenticate the user.
Identity provider service 220 is further configured to access a reference voice print associated with the user, such as described with respect to step 106 in FIG. 1, which may be stored in user account database 218." Identity provider service 220 is considered analogous to a threat analytics module); and repeating the prioritization and transfer for a plurality of subsequent data elements (Beaver et al. ¶ [0032], "Flow 100 then proceeds to step 118 with training (or retraining/tuning) the user authentication model. For example, where there was no reference voice print stored, such as at step 106, a reference voice print may be generated through an authentication model for the user. The new reference voice print may then be available for subsequent user authentication." Subsequent user authentications are considered analogous to repeating the prioritization and transfer processes for subsequent data elements).

It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Gupta et al. in view of Mavani to incorporate Beaver et al.’s prioritization of data elements because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Gupta et al.’s orchestration engine as modified by Beaver et al.’s prioritization of data elements can yield a predictable result of reducing system overhead since the authentication process would be able to choose whichever data element to compare that would consume the least computational resources.
Thus, a person of ordinary skill would have appreciated including in Gupta et al.’s orchestration engine the ability to do Beaver et al.’s prioritization of data elements since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 4

Regarding claim 4, the rejection of claim 3 is incorporated. Gupta et al. in view of Mavani in view of Beaver et al. disclose all the elements of the claimed invention as stated above. Gupta et al. further disclose wherein the threat analytics module is a cloud-based natural language application programming interface (API) for threat analytics (Gupta et al. ¶ [0055], "the identification server 102 may receive, extract, and store as various types of metadata features in the analytics databases 104. As explained further below, the identification server 102 may apply one or more functional engines 122" Server-side functional engines 122 are considered analogous to a cloud-based NL API for threat analytics) configured to: receive an input of data from the orchestration engine (Gupta et al. ¶ [0162], "In operation 402, the server extracts various types of risk-related features of the contact metadata and/or biometric data."); compare similar data elements from the user dataset and the plurality of stored user data (Gupta et al. ¶ [0133], "The identification engine 122b then compares the inbound context embedding against each of the context embedding for the enrolled users in the identity database 104a to identify the set of identities associated with the context-prints having a nearest similarity score to the inbound context-print."); generate a match score between the compared similar data elements (Gupta et al.
¶ [0135], "Using the enrolled voiceprints of the potential identities, the voice bio engine 122c computes a similarity score for each potential identity indicating a similarity or distance between the enrolled voiceprint and the inbound voiceprint extracted for the end-user."); and transfer the generated match scores to an AI or machine learning (ML) model (Gupta et al. ¶ [0135], "The voice bio engine 122c then feeds the voice similarity scores and the enrolled voiceprints satisfying a matching threshold to downstream operations.").

Claim 10

Regarding claim 10, the rejection of claim 9 is incorporated. The limitations of claim 10 are similar in scope to that of claim 3 and therefore are rejected for similar reasons as described above.

Claim 11

Regarding claim 11, the rejection of claim 10 is incorporated. The limitations of claim 11 are similar in scope to that of claim 4 and therefore are rejected for similar reasons as described above.

Claim 16

Regarding claim 16, the rejection of claim 15 is incorporated. The limitations of claim 16 are similar in scope to that of claim 3 and therefore are rejected for similar reasons as described above.

Claim 17

Regarding claim 17, the rejection of claim 16 is incorporated. The limitations of claim 17 are similar in scope to that of claim 4 and therefore are rejected for similar reasons as described above.

Claims 5, 12, and 18 are rejected under 35 U.S.C. 103 as obvious over Gupta et al. in view of Mavani in view of Beaver et al. as applied to claims 4, 11, and 17 above, and further in view of US Patent Publication 20240428101 A1 (Smith et al.).

Claim 5

Regarding claim 5, the rejection of claim 4 is incorporated. Gupta et al. in view of Mavani in view of Beaver et al. disclose all the elements of the claimed invention as stated above. Gupta et al. further disclose wherein the trained AI or ML model is configured to intake the generated match scores (Gupta et al.
¶ [0135], "The identification server 102 uses the voice similarity scores (from operation 146) and the risk score (from operation 145) to determine and select a most likely voiceprint match to predict the current identity of the user.") and generate the threat score (Gupta et al. ¶ [0119], "the identification server 102 further trains or develops the risk engine 122a by applying the risk engine 122a on various types of fraud-related features, risk-indicator features, and fraudulent feature vectors, where the risk engine 122a is trained to adjust the risk level based upon features or feature vectors extracted from contact data suggesting fraud or elevated risk.") [based on an assessment of the generated match scores].

Gupta et al. in view of Mavani in view of Beaver et al. do not explicitly disclose all of continuously training an AI or ML model using a federated learning strategy. However, Smith et al. disclose initializing a set of parameters for the AI or ML model through a set of initial data (Smith et al. ¶ [0043], "The neural network 122 may be pre-trained with features based on training data 347 comprising human voices spoken by a plurality of historical speakers inside the vehicle."); continuously training the AI or ML model through analyzed data collected from a plurality of users (Smith et al. ¶ [0044], "a newly updated voiceprint by the neural network 122 is fed back to the neural network 122 to train the neural network 122 based on the just processed data based on human voice 110. This process is repeated as new human voice 110 becomes available, which allows the neural network 122 to continuously improve its accuracy"); updating the set of parameters for the ML model based on the analyzed data from a plurality of users (Smith et al.
¶ [0043], "the neural network 122 may include an incremental learning algorithm that dynamically integrates the input features 112 weighted based on the probabilistic notion 150 into the voiceprint 317 of the identified user."); and obtaining a higher threat score precision via the updated set of parameters for the AI or ML model (Smith et al. ¶ [0044], "This process is repeated as new human voice 110 becomes available, which allows the neural network 122 to continuously improve its accuracy"), wherein the trained AI or ML model is configured to intake the generated match scores and generate the [threat] score based on an assessment of the generated match scores (Smith et al. ¶ [0037]-[0038], "The authentication module 342 determines a probabilistic notion 150 based on the similarity 118 calculated by the similarity module 332 between the input vector 113 and the historical vectors 116." A probabilistic notion is considered analogous to a score).

It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Gupta et al. in view of Mavani in view of Beaver et al. to include Smith et al.’s AI or ML model training because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Gupta et al.’s functional engines as modified by Smith et al.’s AI or ML model training can yield a predictable result of improving functional engine accuracy since continuous training based on iterative data would allow Gupta et al.’s functional engines to learn user behaviors over time.
Thus, a person of ordinary skill would have appreciated including in Gupta et al.’s functional engines the ability to perform Smith et al.’s AI or ML model training, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 12
Regarding claim 12, the rejection of claim 11 is incorporated. The limitations of claim 12 are similar in scope to those of claim 5 and are therefore rejected for similar reasons as described above.

Claim 18
Regarding claim 18, the rejection of claim 17 is incorporated. The limitations of claim 18 are similar in scope to those of claim 5 and are therefore rejected for similar reasons as described above.

Claims 6, 8, 13, and 19 are rejected under 35 U.S.C. 103 as obvious over Gupta et al. in view of Mavani as applied to claims 1, 9, and 15 above, and further in view of US Patent Publication 20180349749 A1 (Cardinal et al.).

Claim 6
Regarding claim 6, the rejection of claim 1 is incorporated. Gupta et al. in view of Mavani disclose all the elements of the claimed invention as stated above. Mavani further discloses wherein the data transmission device is a [smart card] device further comprising: the on-board generative AI (Mavani ¶ [0031], “the voice biometric authentication computing platform 110 may be configured to train the virtual assistant using previous voice commands and/or interactive voice response session….
In some instances, the voice biometric authentication computing platform 110 may be configured to dynamically update the virtual assistant as additional data and/or feedback is received.” ¶ [0035], "User device 130 may be equipped with a virtual assistant" A virtual assistant that learns and trains over time based on additional data and/or feedback is considered analogous to artificial intelligence); at least one built-in internet of things (IoT) sensor associated with collecting user data comprising a geocoordinate, an internet protocol (IP) address, a device identifier (ID), or a user voice sample (Mavani ¶ [0034], “user device 130 may be configured to receive information from, send information to, and/or otherwise exchange information with one or more devices described herein.” ¶ [0040], "User device 130 may further include one or more of an audio input (e.g., a microphone), a fingerprint sensor, a camera (e.g., a still camera, a video camera, an infrared/biometric camera, and the like), and/or a location sensor (e.g., a GPS device, a triangulation device such as a telecommunications modem, and the like)."
A location sensor that transmits location data to other networked devices is considered analogous to an IoT sensor); a digital display which is configured to display a set of relevant data based on a user requested task (Mavani ¶ [0039], "The virtual assistant may cause display of a virtual assistant user interface screen on display screen of user device, e.g., in response to one or more user queries using the virtual assistant."); an alert mechanism configured to trigger an audio notification based on an invalidation of the direct user input (Mavani ¶ [0086], "in response to determining a mismatch between the voice command and one of the one or more voice biometric signatures, voice biometric authentication computing platform 110 may generate an error message at step 219, and the error message may comprise at least one of: an audio file, a video file, an image file, or text content. The error message may be transmitted to user device 130 to notify the user at user device 130 that the voice command could not be authenticated. "); and at least one non-transitory memory device (Mavani ¶ [0034], "User device 130 may include one or more computing devices and/or other computer components (e.g., processors, memories, communication interfaces)." ¶ [0096], "In general, the one or more computer-readable media may be and/or include one or more non-transitory computer-readable media.") that stores temporary data (Mavani ¶ [0095], "The computer-executable instructions may be stored as computer-readable instructions on a computer-readable medium such as ... RAM" RAM stores temporary data.). Gupta et al. in view of Mavani do not explicitly disclose all of a smart card device. However, Cardinal et al. disclose wherein a data transmission device is a smart card device (Cardinal et al. ¶ [0044], "In some embodiments, the smart card may include a GPS chip for receiving and/or transmitting GPS signals."). 
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta et al. in view of Mavani to incorporate Cardinal et al.’s smart card device. The suggestion/motivation for doing so would have been that “It would be desirable to provide systems and methods for a purchasing instrument that improves security of sensitive data associated with the instruments, enhances usability of the instrument and maintains a limited form factor of the instrument,” as noted by the Cardinal et al. disclosure in paragraph [0011].

Claim 8
Regarding claim 8, the rejection of claim 6 is incorporated. Gupta et al. in view of Mavani in view of Cardinal et al. disclose all the elements of the claimed invention as stated above. Mavani further discloses wherein operation of the alert mechanism to notify the user of direct user input that has been flagged as a threat comprises: triggering an alert notification [on the smart card] comprising an audio notification (Mavani ¶ [0086], "in response to determining a mismatch between the voice command and one of the one or more voice biometric signatures, voice biometric authentication computing platform 110 may generate an error message at step 219, and the error message may comprise at least one of: an audio file, a video file, an image file, or text content. The error message may be transmitted to user device 130 to notify the user at user device 130 that the voice command could not be authenticated."); and sending an alert notification to a user-chosen secondary user device (Mavani ¶ [0086], "Additionally, or alternatively, the error message may be transmitted to user device 140 (e.g., as verified device associated with the user account)") comprising a text notification (Mavani ¶ [0086], "the error message may comprise ... text content."). Cardinal et al. further disclose triggering an alert notification on the smart card (Cardinal et al. ¶ [0045], "Illustrative alerts may include fraud alerts. For example, an OLED display of a smart card may flash red when a potential security breach is detected.
The security breach may relate to exposure of sensitive data stored on the smart card.") [comprising an audio notification].

Claim 13
Regarding claim 13, the rejection of claim 9 is incorporated. The limitations of claim 13 are similar in scope to those of claim 6 and are therefore rejected for similar reasons as described above.

Claim 19
Regarding claim 19, the rejection of claim 15 is incorporated. The limitations of claim 19 are similar in scope to those of claim 6 and are therefore rejected for similar reasons as described above.

Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as obvious over Gupta et al. in view of Mavani in view of Cardinal et al. as applied to claims 6, 13, and 19 above, and further in view of US Patent 9,600,656 (Wellinghoff).

Claim 7
Regarding claim 7, the rejection of claim 6 is incorporated. Gupta et al. in view of Mavani in view of Cardinal et al. disclose all the elements of the claimed invention as stated above. Gupta et al. in view of Mavani in view of Cardinal et al. do not explicitly disclose all of temporarily storing a plurality of user authentication data. However, Wellinghoff discloses wherein the at least one non-transitory memory device is a cache temporary memory device that is configured to temporarily store a plurality of user authentication data for reuse in an instance of a disruption of a user requested task (Wellinghoff ¶ (15), "domain controller 210... may provide local cache 250 of authentication credentials 252 for the domain 200. These locally stored credentials 252 are sometimes referred to as domain cached credentials or DCC. This local cache 250 may be in the registry maintained by the operating system 260 of the device 220."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta et al. in view of Mavani in view of Cardinal et al.
to include Wellinghoff’s temporary storage of user credentials because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Gupta et al.’s authentication system as modified by Wellinghoff’s temporary storage of user credentials can yield a predictable result of improving user experience, since the system would be able to reuse the temporarily stored user credentials to resume a disrupted authentication process, skipping the need for a user to redo the initial authentication steps. Thus, a person of ordinary skill would have appreciated including in Gupta et al.’s authentication system the ability to perform Wellinghoff’s temporary storage of user credentials, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 14
Regarding claim 14, the rejection of claim 13 is incorporated. The limitations of claim 14 are similar in scope to those of claim 7 and are therefore rejected for similar reasons as described above.

Claim 20
Regarding claim 20, the rejection of claim 19 is incorporated. The limitations of claim 20 are similar in scope to those of claim 7 and are therefore rejected for similar reasons as described above.

References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. US Patent Publication 20210398544 A1 to Kwon discloses an onboard generative AI and prioritization of data elements. US Patent Publication 20200051572 A1 to Sohn discloses an implementation of an authentication system on a smartphone, the smartphone including an onboard generative AI, an IoT sensor, and a display, among other important hardware components.
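As an editorial note, the cached-credential behavior the examiner cites from Wellinghoff against claim 7 (credentials held in a temporary cache so a disrupted task can resume without forcing the user to re-authenticate) can be sketched as below. This is an illustrative sketch under stated assumptions, not Wellinghoff's implementation; the class name and the TTL value are hypothetical:

```python
import time

class CredentialCache:
    """Temporary store of authentication credentials, keyed by user."""

    def __init__(self, ttl_seconds=300):  # 5-minute lifetime (assumed)
        self.ttl = ttl_seconds
        self._store = {}  # user_id -> (credential, expiry time)

    def put(self, user_id, credential):
        self._store[user_id] = (credential, time.monotonic() + self.ttl)

    def get(self, user_id):
        entry = self._store.get(user_id)
        if entry is None:
            return None
        credential, expiry = entry
        if time.monotonic() > expiry:
            del self._store[user_id]  # expired: force re-authentication
            return None
        return credential

# A disrupted task can resume by reusing the cached credential.
cache = CredentialCache()
cache.put("alice", "token-123")
resumed = cache.get("alice")
```

The expiry check is what keeps the cache "temporary": once the TTL lapses, the user must redo the initial authentication steps, which is the trade-off the obviousness rationale relies on.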
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB B VOGT, whose telephone number is (571) 272-7028. The examiner can normally be reached Monday - Friday, 9:30am - 7pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JACOB B VOGT/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

02/10/2026

Prosecution Timeline

Jun 25, 2024
Application Filed
Feb 10, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12505279
METHOD AND SYSTEM FOR DOMAIN ADAPTATION OF SOCIAL MEDIA TEXT USING LEXICAL DATA TRANSFORMATIONS
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on the most recent grant.

Prosecution Projections

1-2
Expected OA Rounds
57%
Grant Probability
99%
With Interview (+100.0%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
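The headline figures are consistent with simple ratios over this examiner's resolved cases. A quick check of the assumed methodology (grant probability taken as the career allow rate, 4 granted of 7 resolved, rounded to a whole percentage):

```python
# Rough arithmetic behind the dashboard figures (assumed methodology).
granted, resolved = 4, 7
allow_rate = granted / resolved      # career allow rate, about 0.571
print(round(allow_rate * 100))       # prints 57, matching the 57% shown
```

The +100% interview lift and the 99% with-interview figure come from the tool's own model of interview outcomes, which the page does not break down, so they are not reproduced here.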
