Prosecution Insights
Last updated: April 19, 2026
Application No. 18/520,116

AUTHENTICATING USER ACTIONS USING ARTIFICIAL INTELLIGENCE

Final Rejection: §103, §112
Filed
Nov 27, 2023
Examiner
DHAKAD, RUPALI
Art Unit
2437
Tech Center
2400 — Computer Networks
Assignee
Capital One Services LLC
OA Round
2 (Final)
Grant Probability: 39% (At Risk)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 39% (13 granted / 33 resolved; -18.6% vs TC avg)
Interview Lift: +31.2% across resolved cases with interview
Avg Prosecution: 3y 6m (typical timeline)
Currently Pending: 40
Total Applications: 73 (career history, across all art units)

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 9.1% (-30.9% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 33 resolved cases.
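The headline figures above are consistent with simple arithmetic on the card values: 13 granted of 33 resolved gives the 39% career allow rate, and adding the +31.2-point interview lift yields roughly the 71% with-interview probability. A minimal sketch of that arithmetic (illustrative only; it assumes the lift is additive in percentage points, which matches the displayed numbers but is not a stated methodology):

```python
# Recompute the dashboard's headline figures from its raw counts.
# Assumption (ours, not the dashboard's): the interview lift is
# additive in percentage points on top of the career allow rate.
granted, resolved = 13, 33
interview_lift_pts = 31.2

allow_rate = granted / resolved                 # career allow rate
with_interview = allow_rate * 100 + interview_lift_pts

print(f"Career allow rate: {allow_rate:.1%}")       # 39.4%
print(f"With interview:    {with_interview:.0f}%")  # 71%
```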

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 5-6 and 16-17 are cancelled. Claims 1-4, 7-15, and 18-24 are pending.

Response to Arguments

Applicant’s arguments, see page 12, filed 11/19/2025, with respect to claims 1-5, 7-16, and 18-20 have been fully considered and are persuasive. The rejection under § 101 of 08/26/2025 has been withdrawn.

Applicant’s arguments, see page 12, filed 11/19/2025, with respect to claims 1-5, 7-16, and 18-20 have been fully considered and are persuasive. The enablement rejection under § 112(a) of 08/26/2025 has been withdrawn. However, the written description rejection under § 112(a) is maintained.

The applicant contends that the specification sufficiently describes the claimed machine learning model by reciting, in claim 2, functional language such as “inputting the natural language response and the plurality of natural language representations into a machine learning model to obtain an indication of whether the natural language response corresponds to one or more natural language representations of the plurality of natural language representations,” and asserts that one of ordinary skill in the art would understand how to implement this feature using “a machine learning model.” The Examiner respectfully disagrees. Written description under 35 U.S.C.
§ 112(a) requires that the specification, not the claims, convey with reasonable clarity to those skilled in the art that the inventor had possession of the claimed invention at the time of filing. Functional claim language describing inputs and outputs does not, by itself, demonstrate possession of the full scope of the claimed subject matter. While the specification references the use of a machine learning model and generically identifies certain model types (e.g., neural networks, factorization machines, Bayesian models), it does not adequately describe how the claimed “plurality of textual representations” or “plurality of natural language representations” are generated from the input parameters. The disclosure treats the large language model (LLM) that produces these representations as a black box, without providing details of the generation process, representative examples of multiple distinct textual representations for the same parameter set, or structural/functional features common to all embodiments encompassed by the claimed “plurality.” Because the specification fails to describe the process or provide representative species showing how multiple distinct representations are created and what distinguishes them, a person of ordinary skill in the art would not be reasonably assured that the inventor was in possession of the full scope of the claimed “plurality” limitations at the time of filing. Accordingly, the rejection under 35 U.S.C. § 112(a) for lack of written description is maintained.

Applicant’s arguments filed 11/19/2025 have been fully considered but they are not persuasive. Because Applicant still has not clarified the term “low-security,” the term “low-security” in claim 5 remains a relative term which renders the claim indefinite.
The term “low-security” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The exact metes and bounds of the term are not known, and it is unclear how a low-security subset of the plurality of parameters is determined. To expedite compact prosecution, the examiner interprets “low-security” as a lower security level. Appropriate correction is required. Dependent claim 6 does not cure the deficiency of its parent claim 5; therefore, claim 6 is also rejected under the same rationale as claim 5. Therefore, the rejection under 35 U.S.C. § 112(b) is maintained.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. While the claim recites “input a plurality of parameters, from an authentication request indicating an action requested using the mobile device, into a large language model to obtain a plurality of textual representations, of the plurality of parameters, that include a natural language description of the plurality of parameters; … transmit, to the mobile device, via a network, a first short message service (SMS) message comprising a prompt to describe the action; … receive, from the mobile device, a second SMS message comprising a response to the prompt; and … generate, after receiving the second SMS message from the mobile device and after inputting the plurality of textual representations from the large language model, an authentication flag, for the action, by inputting the response and the plurality of textual representations into a machine learning model to obtain an indication of whether the response matches one or more textual representations of the plurality of textual representations,” the specification does not adequately disclose how the large language model produces the claimed “plurality of textual representations” for a given parameter set, nor does it describe the machine learning model used to determine correspondence in sufficient detail. 
In particular, the specification treats the generation of multiple distinct textual representations as essentially a black box (i.e., input parameters → LLM → representations) without providing representative examples of multiple outputs, prompt templates or prompt-engineering guidance, output control/selection criteria, or other procedural detail showing how plural and distinct representations are generated and distinguished. In other words, the specification describes the LLM’s production of these outputs only in functional terms, without actually disclosing how they are obtained. Likewise, the specification lacks detail regarding the correspondence-determining machine learning model (for example, its architecture, example training methodology or training data, representative input/output formats, or operational procedure) that would demonstrate possession of the claimed element. While an off-the-shelf machine learning model may be used as a starting point, unlike a generic computer with fixed components that predictably perform known operations, a machine learning model must be specifically configured and trained so that it can perform the precise correspondence-determination function required for the claimed invention to operate. The present specification does not describe such configuration or training in sufficient detail to show that the inventor was in possession of a model capable of performing the claimed function at the time of filing. As established in Ariad Pharms., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1351, 94 USPQ2d 1161, 1172 (Fed. Cir. 2010), the specification must convey with reasonable clarity to those skilled in the art that the inventor had possession of the claimed invention.
Merely stating that a large language model and a machine learning model are used, and reciting functional inputs and outputs in the claims, without providing representative embodiments or descriptive detail of how multiple textual representations are generated and how the correspondence model is configured and trained to operate, does not satisfy this requirement.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5 and 6: The term “low-security” in claim 5 is a relative term which renders the claim indefinite. The term “low-security” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The exact metes and bounds of the term are not known, and it is unclear how a low-security subset of the plurality of parameters is determined. To expedite compact prosecution, the examiner interprets “low-security” as a lower security level. Appropriate correction is required. Dependent claim 6 does not cure the deficiency of its parent claim 5; therefore, claim 6 is also rejected under the same rationale as claim 5.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Cao et al. (U.S. PGPub. No. 2023/0095728 A1) (hereinafter “Cao”) in view of Ramasamy et al. (U.S. PGPub. No. 2020/0267179 A1) (hereinafter “Ramasamy”) and further in view of Dhindsa et al. (U.S. PGPub. No. 2022/0224685 A1) (hereinafter “Dhindsa”).

Regarding Claim 1, Cao teaches: one or more processors (Cao: [0109], In some non-limiting embodiments or aspects, processor 904 may be implemented in hardware, software, or a combination of hardware and software. For example, processor 904 may include a processor (e.g., a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), etc.), a microprocessor, a digital signal processor (DSP), and/or any processing component (e.g., a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.) that can be programmed to perform a function); and a non-transitory, computer-readable storage medium storing instructions that when executed by the one or more processors cause the one or more processors to (Cao: [0111] Device 900 may perform one or more processes described herein. Device 900 may perform these processes based on processor 904 executing software instructions stored by a computer-readable medium, such as memory 906 and/or storage component 908.
A computer-readable medium may include any non-transitory memory device. A memory device includes memory space located inside of a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into memory 906 and/or storage component 908 from another computer-readable medium or from another device via communication interface 914. When executed, software instructions stored in memory 906 and/or storage component 908 may cause processor 904 to perform one or more processes described herein): input a plurality of parameters, from an authentication request indicating an action requested using the mobile device, into a large language model to obtain a plurality of textual representations, of the plurality of parameters (Cao: [0081] As shown in FIG. 3, at step 310, process 300 may include inputting each of the plurality of attributes into an NLP model…[0008], the first machine learning model comprises a NLP model and wherein, when training the first machine learning model to perform the first task, the at least one processor is further programmed or configured to: input each of the plurality of attributes (=plurality of parameters) of each RTP transaction into the NLP model as a word of a sentence. [0104] In some non-limiting embodiments or aspects, processing a transaction may include generating and/or communicating at least one transaction message (e.g., authorization request, authorization response, any combination thereof, and/or the like). For example, a client device (=mobile device) (e.g., customer device 806, a POS device of merchant system 808, and/or the like) may initiate the transaction, e.g., by generating an authorization request (=authorization request). 
Additionally or alternatively, the client device (e.g., customer device 806, at least one device of merchant system 808, and/or the like) may communicate the authorization request).

Cao does not explicitly disclose: transmit, to the mobile device, via a network, and after inputting the plurality of parameters into the large language model, a first short message service (SMS) message comprising a prompt to describe the action as part of an authentication process; receive, from the mobile device and via the network, a second SMS message comprising a response to the prompt;

However, in an analogous art, Dhindsa discloses: transmit, to the mobile device, via a network, and after inputting the plurality of parameters into the large language model, a first short message service (SMS) message comprising a prompt to describe the action as part of an authentication process (Dhindsa: [0034] As further shown in FIG. 1B, and by reference number 130, the authentication system sends an authentication request. The authentication request may prompt the user device (=mobile device) to provide a contextual description of the operation. For example, via User Device 2 (=mobile device), the authentication request may prompt (=first SMS message prompt) the application to present a message (=SMS) (e.g., via a display) to request User A to record a contextual description of the requested operation (e.g., a contextual description that describes one or more parameters of the operation)); receive, from the mobile device and via the network, a second SMS message comprising a response to the prompt (Dhindsa: [0036], As shown in FIG. 1C, User A may provide a text-based contextual description of the operation (e.g., using a text message and/or a user input to a field of the application (=second SMS message)). [0038] As further shown in FIG. 1C, and by reference number 140, the authentication system receives the contextual description within an authentication response.
For example, the authentication response and/or contextual description may include text data, audio data, and/or video data that is associated with one or more described characteristics of the operation from User A);

It would be obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Cao’s method of inputting a plurality of attributes into an NLP model by applying Dhindsa’s method of sending an authentication request which prompts the user device (=mobile device) to provide a contextual description of the operation and receiving a contextual description of the operation from the user device. The motivation is to determine whether the authentication response is valid based on a comparison of the described characteristic of the operation and the parameter of the operation and, based on a determination that the authentication response is valid, performance of the operation based on the parameter (Dhindsa: Abstract).

Cao in view of Dhindsa does not explicitly disclose: and generate, after receiving the second SMS message from the mobile device and after inputting the plurality of textual representations from the large language model, an authentication flag, for the action, by inputting the response and the plurality of textual representations into a machine learning model to obtain an indication of whether the response matches one or more textual representations of the plurality of textual representations.

However, in an analogous art, Ramasamy teaches: and generate, after receiving the second SMS message from the mobile device and after inputting the plurality of textual representations from the large language model, an authentication flag, for the action, by inputting the response and the plurality of textual representations into a machine learning model to obtain an indication of whether the response matches one or more textual representations of the plurality of textual representations
(Ramasamy: [0032], the alert status 127 may be a flag or an indicator that is set when a data manipulation attack has been detected. [0034] In one embodiment, the alert engine 110 is configured to determine whether the alert vector 126 comprises any alert status 127 that indicates a data manipulation attack has been detected and may send an alert 130 in response to the determination. The alert 130 may be an email, a text message (e.g., a short message service (SMS) message), an application pop-up alert, or any other suitable type of message notification…)

A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Cao in view of Dhindsa by applying the well-known technique as disclosed by Ramasamy of providing a flag after detecting manipulation. The motivation is detecting an unauthorized data manipulation attack (Ramasamy: [0001]).

Claims 2-4, 7-15, and 18-24 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (U.S. PGPub. No. 2020/143809 A1) (hereinafter “Lee”) and Dhindsa et al. (U.S. PGPub. No. 2022/0224685 A1) (hereinafter “Dhindsa”), and in further view of Liila et al. (U.S. PGPub. No. 2023/0306504 A1) (hereinafter “Liila”).

Regarding Claim 2, Lee teaches: receiving a first communication that includes an authentication request indicating an action was requested using a user device (Lee: [0063] According to an embodiment of the disclosure, the electronic apparatus 100 may receive input of a first utterance of a user (=first communication). [0075] The Automatic Speech Recognition part 410 may include various processing circuitry and/or executable program elements for converting a user utterance into the form of a text that the electronic apparatus 100 can process, identify a first task for the first utterance. [0113] When a user utterance is input, the electronic apparatus may determine a task associated with the input user utterance at operation S510 (=requested task).
For example, the electronic apparatus 100 may acquire the entity and the intention of a user utterance input (=an action requested to be performed) through the natural language understanding part 420 and determine a task for the user utterance), wherein the authentication request comprises a plurality of parameters (Lee: [0075] The Automatic Speech Recognition part 410 may include various processing circuitry and/or executable program elements for converting a user utterance into the form of a text that the electronic apparatus 100 can process, by performing voice recognition for the user utterance input through a microphone, etc.).

Lee does not explicitly disclose: generating a prompt, related to the action, by inputting a subset of the plurality of parameters into a large language model; transmitting, to the user device, a second communication comprising the prompt;

However, in an analogous art, Dhindsa teaches: generating a prompt, related to the action, by inputting a subset of the plurality of parameters into a large language model (Dhindsa: [0034] As further shown in FIG. 1B, and by reference number 130, the authentication system sends an authentication request. The authentication request may prompt the user device to provide a contextual description of the operation. For example, via User Device 2, the authentication request may prompt the application to present a message (=SMS) (e.g., via a display) to request User A to record a contextual description of the requested operation (e.g., a contextual description that describes one or more parameters of the operation)); transmitting, to the user device, a second communication comprising the prompt (Dhindsa: [0034] As further shown in FIG. 1B, and by reference number 130, the authentication system sends an authentication request. The authentication request may prompt the user device to provide a contextual description of the operation (=second communication prompt).
For example, via User Device 2, the authentication request may prompt the application to present a message (=SMS) (e.g., via a display) to request User A to record a contextual description of the requested operation (e.g., a contextual description that describes one or more parameters of the operation). [0039] The contextual description may be received as unstructured data that may describe or identify the one or more parameters, and the one or more parameters may be received as structured data that specifies the parameters. Accordingly, the one or more described characteristics in the authentication response may or may not match the one or more parameters of the operation because a user that provides the authentication response may provide the contextual description using natural language (which indicates whether the authentication response is valid and/or whether or not User A is an authorized user of the user account)).

It would be obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Lee’s method of receiving input of a first utterance of a user (=first communication) by applying Dhindsa’s method of sending an authentication request which prompts the user device (=mobile device) to provide a contextual description of the operation and receiving a contextual description of the operation from the user device. The motivation is to determine whether the authentication response is valid based on a comparison of the described characteristic of the operation and the parameter of the operation and, based on a determination that the authentication response is valid, performance of the operation based on the parameter (Dhindsa: Abstract).
Lee in view of Dhindsa does not explicitly disclose: obtaining, based on the plurality of parameters, a plurality of natural language representations of the action;

However, in an analogous art, Liila teaches: obtaining, based on the plurality of parameters, a plurality of natural language representations of the action (Liila: [0036], and notes 130 (e.g., natural language text explaining the reason for the activity). In some embodiments, as discussed above, the historical data 105 includes data for multiple activities associated with any number of residents. That is, for a given resident, the historical data 105 may include multiple sets of attributes, one for each activity. [0139] At block 730, the machine learning system extracts natural language input for the model. For example, the machine learning system may identify and evaluate natural language notes); receiving, from the user device, a natural language response to the prompt (Liila: [0044], For example, the resident may report a reason (=response to the prompt) for the withdrawal (e.g., “this is for a haircut,” or “it is my grandson's birthday”). In some embodiments, the recipient may additionally or alternatively provide a reason (such as on an invoice)); inputting the natural language response (Liila: [0139] At block 730, the machine learning system extracts natural language input for the model. The natural language input can include verbal or recorded notes. [0140] In at least one embodiment, the machine learning system can perform one or more preprocessing operations on the natural language text to extract the input. For example, as discussed above with reference to FIG.
4, the machine learning system may extract the text itself, normalize it, remove noise and/or redundant elements, lemmatize it, tokenize it, generate one or more roots for it, vectorize it, and the like. One example for extracting and evaluating the natural language input is described in more detail below with reference to FIG. 8.) and the plurality of natural language representations into a machine learning model to obtain an indication of whether the natural language response corresponds to one or more natural language representations of the plurality of natural language representations (Liila: [0141] Using the method 700, the machine learning system can therefore extract relevant attributes to train machine learning model(s) to predict activity validity, or to be used as input to trained models in order to generate predicted validity during runtime), wherein the machine learning model has been trained to determine whether natural language representations describe a matching action (Liila: [0050] For example, the machine learning system 135 may process the historical data 105 for a given activity as input to the machine learning model 140, and compare the generated validity score to the ground-truth (e.g., an indication as to whether the activity was valid)), and based on determining that the natural language response corresponds to the one or more natural language representations of the plurality of natural language representations (Liila: [0183] At block 1315, a first validity score (e.g., validity score 240 of FIG. 2) is generated by processing the first set of attributes using a trained machine learning model (e.g., machine learning model 140 of FIG. 1), wherein the first validity score indicates a probability that the first activity is valid), generating a third communication indicating that the action has been authenticated (Liila: [0160], At block 930, the machine learning system flags (=third communication) the activity as potentially valid.
[0159], the machine learning system may proceed to use the machine learning model to evaluate the activity regardless).

A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Lee in view of Dhindsa by applying the well-known technique as disclosed by Liila of providing a flag indicating that the activity is potentially valid. The motivation is to improve systems and techniques to automatically monitor activity (Liila: [0006]).

Regarding Claim 3, Lee in view of Dhindsa and Liila teach: The method of claim 2 (see rejection of claim 2 above), based on determining that the natural language response does not match the one or more natural language representations of the plurality of natural language representations, generating a fourth communication indicating that the action has not been authenticated (Liila: [0158] At block 920, the machine learning system determines whether any of the rules were violated. Although the illustrated example depicts sequential evaluation of each rule before arriving at block 920…[0159] If any rules were violated, the method 900 continues to block 925, where the machine learning system flags the activity as potentially invalid (=action has not been authenticated)).

A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Lee in view of Dhindsa by applying the well-known technique as disclosed by Liila of determining whether any of the rules were violated. The motivation is to improve systems and techniques to automatically monitor activity (Liila: [0006]).

Regarding Claim 4, Lee in view of Dhindsa and Liila teach: The method of claim 3 (see rejection of claim 3 above), generating a warning message for the user (Liila: [0170] At block 1110, the machine learning system alerts (=warning message) the resident about the invalid activity.
This may allow the resident to quickly respond (e.g., by locking or freezing other accounts)), wherein the warning message comprises one or more parameters associated with the action and wherein the warning message indicates a security issue (Liila: [0068], the alert includes information relating to the suspicious activity, such as the magnitude of the transaction(s), the recipient(s), and the like. [0110], In one such embodiment, lower values may indicate a lower probability that the activity is invalid or inappropriate, while a higher value indicates that the activity is more likely to be problematic.), and transmitting the warning message to the user device of the user (Liila: [0068], the intervention system transmits an alert (e.g., to one or more clinicians or family members)…this alert is transmitted to an individual who is not involved, associated, or otherwise implicated in the activity. For example, rather than requesting that the authorizing caregiver (e.g., that approved or otherwise carried out the withdrawal) or the individual that authored the reason note(s) verify the activity, the system may identify and transmit the alert to a third party such as a compliance officer, a management employee, and the like).

A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Lee in view of Dhindsa by applying the well-known technique as disclosed by Liila of providing an alert (=warning message) to the resident about the invalid activity.
The motivation is to improve systems and techniques to automatically monitor activity (Liila: [0006]).

Regarding Claim 7, Lee in view of Dhindsa and Liila teach: The method of claim 2 (see rejection of claim 2 above), receiving from the machine learning model a score below a first threshold that indicates that the natural language response corresponds to the one or more natural language representations of the plurality of natural language representations (Dhindsa: [0041], Using the similarity analysis, the contextual authentication module may determine a similarity score between a described characteristic and an indicated parameter of the operation. The authentication system may use the similarity score to determine whether a described characteristic is associated with a particular parameter. [0023], an operation associated with a relatively low security level (e.g., a security level that does not satisfy a threshold level associated with requiring a context-based authentication process and/or media-based authentication process), the application server may determine that a less secure authentication process (e.g., an OTP-based authentication process) may be utilized to authenticate User A) and above a second threshold that indicates that the natural language response does not match the one or more natural language representations of the plurality of natural language representations (Dhindsa: [0042], when a described characteristic does not indicate an association with a corresponding parameter, the authentication system may infer that the response is not valid (e.g., was not provided by User A or an authorized user) because the user that provided the described characteristic should have known the parameter(s) of the operation.
[0024] With reference to the example of User A requesting an operation that involves execution of a transaction, the application server may determine the security level based on a value of the transaction (e.g., a relatively lower value may be associated with a lower security level and a relatively higher value may be associated with a higher security level), a type of the transaction (e.g., a deposit may be associated with a lower security level and a withdrawal or a payment may be associated with a higher security level), a merchant involved in the transaction (e.g., a recognized merchant may be associated with a lower security level and an unrecognized merchant may be associated with a higher security level), and/or a location associated with the transaction (e.g., a location that is recognized as being associated with User A may be associated with a lower security level and a location that is not recognized as being associated with User A may be associated with a higher security level); and in response to receiving the score, authenticating the action using a different channel (Dhindsa: [0044] The biometric analysis module may include and/or be associated with a media-based biometric analysis model, such as a facial recognition model and/or a voice recognition model (=different channel). The facial recognition model and/or the voice recognition model (=different channel) utilize and/or may be trained based on the reference signatures in the user information database…[0045] Accordingly, the biometric analysis model may determine whether media content received in the authentication response includes a feature and/or is associated with a feature of an authorized user. 
If the feature is determined to be associated with a face of an authorized user and/or a voice of an authorized user (and/or speech by the authorized user), the authentication system may verify that the authentication response is associated with the user and/or determine that the authentication response is valid). It would be obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Lee’s method of receiving input of a first utterance of a user (=first communication) by applying Dhindsa’s method of verifying the authentication response from the user and determining that the authentication response is valid or invalid. The motivation is to reduce or prevent malware or malicious actors from intercepting the information (Dhindsa: [0012]).

Regarding Claim 8, Lee in view of Dhindsa and Liila teach: The method of claim 7 (see rejection of claim 7 above), receiving a message indicating that the action has been authenticated using the different channel (Dhindsa: [0025], the application server, within the notification (=message), may indicate the security level to indicate a type of authentication process (=different channel) that is to be performed by the authentication system) and in response to receiving the message that the action has been authenticated using the different channel (Dhindsa: [0025], the application server, within the notification (=message), may indicate the security level to indicate a type of authentication process (=different channel) that is to be performed by the authentication system), executing a training routine of the machine learning model to train the machine learning model based on the score and a matching natural language representation (Liila: [0087] In the illustrated workflow 300, the machine learning system 335 uses the feedback (=message) 350 to refine the machine learning model(s) used to score resident activity.
For example, if the feedback 350 indicates that a specific transaction was valid, the machine learning system 335 may use the transaction attributes (along with the resident's attributes in some embodiments) as input to the model in order to generate a new validity score 340…[0141] Using the method 700, the machine learning system can therefore extract relevant attributes to train machine learning model(s) to predict activity validity, or to be used as input to trained models in order to generate predicted validity during runtime. [0223], the machine learning system could execute on a computing system in the cloud and train and/or use machine learning models. In such a case, the machine learning system 135 could train models to generate validity scores, and store the models at a storage location in the cloud. Doing so allows a user to access this information from any computing system attached to a network connected to the cloud (e.g., the Internet)). A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Lee in view of Dhindsa by applying the well-known technique, as disclosed by Liila, of training a machine learning model by extracting the natural language response as input. The motivation is to improve systems and techniques to automatically monitor activity (Liila: [0006]).

Regarding Claim 9, Lee in view of Dhindsa and Liila teach: The method of claim 2 (see rejection of claim 2 above), wherein obtaining the plurality of natural language representations of the action comprises (Dhindsa: [0018] As shown in FIG. 1A, and by reference number 105, a user device (User Device 1) provides a session input to the application server. The session input may be associated with requesting performance of an operation of the application server.
For example, the session input may include and/or be associated with a user input from User A that is received via an application (e.g., an account management application, a browser, and/or an online portal) that is executing on the User Device 1) inputting the plurality of parameters into a large language model with an instruction to generate natural language phrases based on the plurality of parameters (Dhindsa: [0019] As an example, the session input may include a request or an instruction to perform an operation associated with a service provided via the application and/or the application server. For example, for a transaction account managed by the service provider system, the session input may include a request and/or instructions to cause the application server to perform and/or execute a transaction (e.g., a payment, a purchase, a withdrawal of funds, a deposit of funds, and/or a transfer of funds) involving the transaction account). It would be obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Lee’s method of receiving input of a first utterance of a user (=first communication) by applying Dhindsa’s method of providing a session input to the application server, wherein the session input includes a request and/or instructions to cause the application server to perform and/or execute a transaction (e.g., a payment, a purchase, a withdrawal of funds, a deposit of funds, and/or a transfer of funds) involving the transaction account. The motivation is to reduce or prevent malware or malicious actors from intercepting the information (Dhindsa: [0012]).
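The dual-threshold scoring logic recited in claims 7 and 10 can be sketched in a few lines. This is a minimal illustration, not an implementation from any cited reference: the function name, the threshold values, and the assumption that the model emits a single numeric score are all hypothetical.

```python
def route_authentication(score: float,
                         first_threshold: float = 0.8,
                         second_threshold: float = 0.2) -> str:
    """Route an action based on a model score per the claimed logic.

    Per the claim language, a score below a first threshold indicates the
    natural language response corresponds to a stored representation, and
    a score above a second threshold indicates it does not match. A score
    satisfying both conditions at once (the band between the thresholds)
    triggers authentication over a different channel.
    """
    if score >= first_threshold:
        # Not below the first ("corresponds") threshold: only the
        # "does not match" condition holds, so reject.
        return "rejected"
    if score <= second_threshold:
        # Not above the second ("does not match") threshold: only the
        # "corresponds" condition holds, so authenticate.
        return "authenticated"
    # Below the first threshold AND above the second: both conditions
    # hold at once, so escalate to a different channel (e.g., a
    # biometric or voice channel, as in Dhindsa [0044]).
    return "escalate_different_channel"
```

The ambiguous middle band is what makes the claimed escalation step reachable; with a single threshold there would be no score for which both conditions hold.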
Regarding Claim 10, Lee in view of Dhindsa and Liila teach: The method of claim 2 (see rejection of claim 2 above), receiving from the machine learning model a score below a first threshold that indicates that the natural language response corresponds to the one or more natural language representations of the plurality of natural language representations (Dhindsa: [0041], Using the similarity analysis, the contextual authentication module may determine a similarity score between a described characteristic and an indicated parameter of the operation. The authentication system may use the similarity score to determine whether a described characteristic is associated with a particular parameter.) and above a second threshold that indicates that the natural language response does not match the one or more natural language representations of the plurality of natural language representations (Dhindsa: [0042], when a described characteristic does not indicate an association with a corresponding parameter, the authentication system may infer that the response is not valid (e.g., was not provided by User A or an authorized user) because the user that provided the described characteristic should have known the parameter(s) of the operation.); in response to receiving the score, selecting a parameter of the plurality of parameters (Dhindsa: [0012], the system may identify one or more parameters of an operation that is requested (e.g., by a user) during a user session and/or that is to be performed by an application server and send an authentication request to a user device associated with the user that requests the user to provide a contextual description of the operation. [0029] As shown in FIG. 1B, and by reference number 120, the authentication system determines parameters of the requested operation.
For example, based on receiving the notification from the application server, a contextual authentication module of the authentication system may process the notification (and/or contextual information within the notification) to identify one or more parameters of the requested operation); and generating an additional prompt comprising a natural language query requesting a value associated with the parameter (Dhindsa: [0034], The authentication request may prompt the user device to provide a contextual description of the operation. For example, via User Device 2, the authentication request may prompt the application to present a message (e.g., via a display) to request User A to record a contextual description of the requested operation (e.g., a contextual description that describes one or more parameters of the operation)). It would be obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Liila’s method of training a machine learning model by extracting the natural language response as input by applying Dhindsa’s method of identifying one or more parameters of the operation and prompting the user device to provide a description of the requested operation. The motivation is to reduce or prevent malware or malicious actors from intercepting the information (Dhindsa: [0012]).

Regarding Claim 11, Lee in view of Dhindsa and Liila teach: The method of claim 2 (see rejection of claim 2 above), wherein transmitting, to the user device of the user, the second communication comprising the prompt for the user to describe the action comprises (Dhindsa: [0034], the authentication request may prompt the application to present a message (e.g., via a display) to request User A to record a contextual description of the requested operation (e.g., a contextual description that describes one or more parameters of the operation).
Additionally, or alternatively, via User Device 3, the authentication request may prompt User Device 3 to audibly request User A to provide the contextual description of the requested operation) transmitting a short message service message to a telephone number associated with the user (Dhindsa: [0034] As further shown in FIG. 1B, and by reference number 130, the authentication system sends an authentication request. The authentication request may prompt the user device to provide a contextual description of the operation. For example, via User Device 2 (=mobile phone), the authentication request may prompt the application to present a message (=short message) (e.g., via a display)). It would be obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Liila’s method of training a machine learning model by extracting the natural language response as input by applying Dhindsa’s method of sending an authentication request by prompting the user device to provide a description of the operation by displaying a message on the mobile device screen. The motivation is to reduce or prevent malware or malicious actors from intercepting the information (Dhindsa: [0012]).

Regarding Claim 13, Lee teaches: one or more processors (Lee: the electronic apparatus 100 may include a memory 110 and a processor (e.g., including processing circuitry) 120); and a non-transitory, computer-readable storage medium storing instructions that when executed by the one or more processors cause the one or more processors to (Lee: [0175], the apparatuses may include an electronic apparatus according to the aforementioned embodiments (e.g.: an electronic apparatus 100). Where an instruction is executed by a processor, the processor may perform a function corresponding to the instruction by itself, or using other components under its control.
A storage medium that is readable by machines may be provided in the form of a non-transitory storage medium.): This claim contains identical limitations found within claim 2 above, albeit directed to a different statutory category (non-transitory medium). For this reason the same grounds of rejection are applied to claim 13.

Regarding Claim 14, this claim contains identical limitations found within claim 3 above, albeit directed to a different statutory category (non-transitory medium). For this reason the same grounds of rejection are applied to claim 14.

Regarding Claim 15, this claim contains identical limitations found within claim 4 above, albeit directed to a different statutory category (non-transitory medium). For this reason the same grounds of rejection are applied to claim 15.

Regarding Claim 18, this claim contains identical limitations found within claim 7 above, albeit directed to a different statutory category (non-transitory medium). For this reason the same grounds of rejection are applied to claim 18.

Regarding Claim 19, this claim contains identical limitations found within claim 10 above, albeit directed to a different statutory category (non-transitory medium). For this reason the same grounds of rejection are applied to claim 19.

Regarding Claim 20, this claim contains identical limitations found within claim 12 above, albeit directed to a different statutory category (non-transitory medium). For this reason the same grounds of rejection are applied to claim 20.

Regarding Claim 21, Lee in view of Dhindsa and Liila teach: The method of claim 2 (see rejection of claim 2 above), wherein the second communication is a first short message service (SMS) message (Dhindsa: [0034] As further shown in FIG. 1B, and by reference number 130, the authentication system sends an authentication request.
The authentication request may prompt the user device (=mobile device) to provide a contextual description of the operation. For example, via User Device 2 (=mobile device), the authentication request may prompt (=first SMS message) the application to present a message (=SMS) (e.g., via a display) to request User A to record a contextual description of the requested operation (e.g., a contextual description that describes one or more parameters of the operation)). It would be obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Lee’s method of receiving input of a first utterance of a user (=first communication) by applying Dhindsa’s method of sending an authentication request that prompts the user device (=mobile device) to provide a contextual description of the operation and receiving a contextual description of the operation from the user device. The motivation is to determine whether the authentication response is valid based on a comparison of the described characteristic of the operation and the parameter of the operation and, based on a determination that the authentication response is valid, performance of the operation based on the parameter (Dhindsa: Abstract).

Regarding Claim 22, Lee in view of Dhindsa and Liila teach: The method of claim 21 (see rejection of claim 21 above), wherein the second communication is a second short message service (SMS) message (Dhindsa: [0036], As shown in FIG. 1C, User A may provide a text-based contextual description of the operation (e.g., using a text message and/or a user input to a field of the application). [0038] As further shown in FIG. 1C, and by reference number 140, the authentication system receives the contextual description within an authentication response.
For example, the authentication response and/or contextual description may include text data, audio data, and/or video data that is associated with one or more described characteristics of the operation from User A.). It would be obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Lee’s method of receiving input of a first utterance of a user (=first communication) by applying Dhindsa’s method of sending an authentication request that prompts the user device (=mobile device) to provide a contextual description of the operation and receiving a contextual description of the operation from the user device. The motivation is to determine whether the authentication response is valid based on a comparison of the described characteristic of the operation and the parameter of the operation and, based on a determination that the authentication response is valid, performance of the operation based on the parameter (Dhindsa: Abstract).

Regarding Claim 23, Lee in view of Dhindsa and Liila teach: The method of claim 2 (see rejection of claim 2 above), wherein the indication indicates whether the natural language response matches the one or more natural language representations (Lee: [0067], the electronic apparatus 100 may compare information on the obtained acoustic feature of the user utterance with the pre-stored information on the acoustic feature of the user utterance, and recognize a user who is a subject of the received user utterance (=[0075] The Automatic Speech Recognition part 410 may include various processing circuitry and/or executable program elements for converting a user utterance into the form of a text that the electronic apparatus 100 can process, by performing voice recognition for the user utterance input through a microphone, etc.
The Automatic Speech Recognition part 410 may include a language model for correcting a conversion error, a unique utterance of a user, an utterance error, etc.)).

Regarding Claim 24, Lee in view of Dhindsa and Liila teach: The method of claim 2 (see rejection of claim 2 above), wherein the user device is a mobile device (Dhindsa: [0031], The user information database indicates that User Device 2 is a mobile phone and the authentication system may use a telephone number to authenticate User A via User Device 2 and/or to authenticate User A via the user account of the application. [0055] The user device 210 may include a communication device, a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a desktop computer, a gaming console, a set-top box, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, a head mounted display, or a virtual reality headset), or a similar type of device). It would be obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Lee’s method of receiving input of a first utterance of a user (=first communication) by applying Dhindsa’s method of sending an authentication request that prompts the user device (=mobile device) to provide a contextual description of the operation and receiving a contextual description of the operation from the user device. The motivation is to determine whether the authentication response is valid based on a comparison of the described characteristic of the operation and the parameter of the operation and, based on a determination that the authentication response is valid, performance of the operation based on the parameter (Dhindsa: Abstract).

Claim(s) 12 is rejected under 35 U.S.C. 103 as being unpatentable over LEE et al. (U. S. PGPub. No. 2020/143809 A1) (hereinafter “Lee”) and DHINDSA et al. (U. S. PGPub. No.
2022/0224685 A1) (hereinafter “Dhindsa”); in further view of Liila et al. (U. S. PGPub. No. 2023/0306504 A1) (hereinafter “Liila”); and in further view of Voege, Peter, Iman IM Abu Sulayman, and Abdelkader Ouda, "Smart chatbot for user authentication," Electronics 11.23 (2022): 4016 (hereinafter “Smart chatbot for user authentication”).

Regarding Claim 12, Lee in view of Dhindsa and Liila teach: The method of claim 1 (see rejection of claim 1 above). Lee in view of Dhindsa and Liila do not explicitly teach: wherein obtaining, based on the plurality of parameters, the plurality of natural language representations of the action comprises: transmitting, to a database system, a query comprising one or more parameter names associated with the plurality of parameters; receiving, from the database system, a plurality of natural language templates; and generating the plurality of natural language representations based on the plurality of natural language templates and the plurality of parameters. However, in an analogous art, “Smart chatbot for user authentication” discloses: wherein obtaining, based on the plurality of parameters, the plurality of natural language representations of the action comprises (“Smart chatbot for user authentication”: [Page 11, paragraph 8, lines 2-3], finishing the construction of the natural language question, which is then presented to the user): transmitting, to a database system, a query comprising one or more parameter names associated with the plurality of parameters (“Smart chatbot for user authentication”: [Page 10, section 4.3, paragraph 1, lines 3-8], These sentence templates are pre-set sentences designed by humans to be coherent and meaningful queries, but which come with placeholder identifiers in the place of the keywords of the sentence.
As such, AIA C can create a coherent natural language query by extracting information from the anomaly it wishes to query with and then insert the keywords into the chosen question template); receiving, from the database system, a plurality of natural language templates (“Smart chatbot for user authentication”: [Section 4.3 “Forming a Natural Language Question,” page 10, para 1-2], It is difficult to dynamically generate natural language sentences that are reliably coherent and adequately convey the intended information, and so AIA C will circumvent the need to dynamically generate natural language by means of sentence templates. These sentence templates are pre-set sentences designed by humans to be coherent and meaningful queries, but which come with placeholder identifiers in the place of the keywords of the sentence. As such, AIA C can create a coherent natural language query by extracting information from the anomaly it wishes to query with and then insert the keywords into the chosen question template. Figure 6 shows a sentence template represented conceptually, showing the breakdown of fixed text and specific keyword placeholders. These placeholders can be broken down according to English grammatical rules, in case flexibility is required); and generating the plurality of natural language representations based on the plurality of natural language templates and the plurality of parameters (“Smart chatbot for user authentication”: [Section 4.3 “Forming a Natural Language Question,” page 10, para 3], For example, we can take the anomaly described in Table 2 and fit it into a hypothetical authentication challenge template. With a question of “On <time>, how much money did you spend on <action>?” and an answer of “<amount>.”, we can insert ‘the 26th of May’ into the <time> placeholder, ‘health services’ into the <action> placeholder, and ‘$646.86’ into the <amount> placeholder.
The text that is then presented to the user would be “On the 26th of May, how much money did you spend on health services?” with an expected answer of “$646.86”). A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Lee in view of Dhindsa and Liila by applying the well-known technique, as disclosed by “Smart chatbot for user authentication,” of generating the natural language template in order to present the authentication challenge questions. The motivation is to improve user authentication based on the user’s recent activity and to prevent fraudsters from brute-force attacks.

Conclusion

The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, can be found in the attached PTO-892 Notice of References Cited.

GILL et al. (U. S. PGPub. No. 2024/0214397 A1): In some aspects, a computing system may identify a feature that can be used to distinguish between data that is more likely to be representative of the target population. A computing system may identify a feature in a dataset where a first value of the feature is associated with a higher likelihood that a corresponding sample is not a member of the target population. Due to the differences between samples that have the first value and samples that have the second value, the computing system may determine that samples with the first value are less likely to be members of the target population or samples with the second value are more likely to be members of the target population. The computing system may determine that a training dataset should be generated using samples that have the second value.

Jayaraman (U. S. PGPub. No. 2024/0305587 A1): A computing device, a computer program product, and a computer-implemented method for delivering enhanced financial services and, more particularly, for facilitating enhanced communication between a user and a financial institution via a client device.
A digital financial management platform for the client device includes a chat support platform that facilitates multiple active virtual chat communication sessions involving the same user that are conducted simultaneously.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUPALI DHAKAD, whose telephone number is (571) 270-3743. The examiner can normally be reached M-F 8:30-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor, can be reached at 571-270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/R.D./
Examiner, Art Unit 2437

/ALEXANDER LAGOR/
Supervisory Patent Examiner, Art Unit 2437
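The sentence-template mechanism that the claim 12 rejection cites from “Smart chatbot for user authentication” (Section 4.3) can be sketched in a few lines. The template and keyword values come from the example quoted in the rejection; the `fill_template` helper and the dictionary keys are illustrative assumptions, not names from the paper.

```python
def fill_template(template: str, keywords: dict) -> str:
    """Replace each <placeholder> in a pre-set sentence template with a
    keyword extracted from the anomalous transaction, yielding a coherent
    natural language challenge question (per Section 4.3 of the paper)."""
    for key, value in keywords.items():
        template = template.replace(f"<{key}>", value)
    return template

# Keywords from the anomaly are inserted into the chosen question template.
question = fill_template(
    "On <time>, how much money did you spend on <action>?",
    {"time": "the 26th of May", "action": "health services"},
)
expected_answer = fill_template("<amount>.", {"amount": "$646.86"})
# question        -> "On the 26th of May, how much money did you spend on health services?"
# expected_answer -> "$646.86."
```

The template approach sidesteps free-form language generation entirely: because each template is authored by a human, the filled question is coherent by construction, which is the design rationale the paper gives.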

Prosecution Timeline

Nov 27, 2023
Application Filed
Aug 21, 2025
Non-Final Rejection — §103, §112
Nov 05, 2025
Applicant Interview (Telephonic)
Nov 05, 2025
Examiner Interview Summary
Nov 19, 2025
Response Filed
Dec 23, 2025
Final Rejection — §103, §112
Feb 23, 2026
Examiner Interview Summary
Feb 23, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592937
Method For Protection From Cyber Attacks To A Vehicle, And Corresponding Device
2y 5m to grant Granted Mar 31, 2026
Patent 12587544
METHOD AND SYSTEM TO REMEDIATE A SECURITY ISSUE
2y 5m to grant Granted Mar 24, 2026
Patent 12513154
BLOCKCHAIN-BASED DATA DETECTION METHOD, APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Dec 30, 2025
Patent 12495039
INTEGRATED AUTHENTICATION SYSTEM AND METHOD
2y 5m to grant Granted Dec 09, 2025
Patent 12468826
METHOD FOR OPERATING A PRINTING SYSTEM
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
39%
Grant Probability
71%
With Interview (+31.2%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
