Prosecution Insights
Last updated: April 19, 2026
Application No. 18/650,276

Dynamic Language Model Prompts for Fraud Detection

Non-Final OA: §101, §103, Double Patenting
Filed: Apr 30, 2024
Examiner: LEE, MICHAEL M
Art Unit: 2436
Tech Center: 2400 (Computer Networks)
Assignee: Bitdefender IPR Management Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84%, above average (217 granted / 259 resolved; +25.8% vs TC avg)
Interview Lift: +44.1% (resolved cases with interview)
Avg Prosecution: 3y 0m (27 currently pending)
Total Applications: 286 (across all art units)

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 22.6% (-17.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 259 resolved cases.
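The headline figures above are internally consistent and can be reproduced from the raw counts reported by the dashboard. A quick check, assuming the "vs TC avg" delta is expressed in percentage points:

```python
# Raw counts taken from the report: 217 granted of 259 resolved cases.
granted = 217
resolved = 259

allow_rate = granted / resolved * 100   # career allowance rate, in percent
tc_average = allow_rate - 25.8          # implied TC 2400 baseline, if the
                                        # "+25.8% vs TC avg" delta is in points

print(f"Career allow rate: {allow_rate:.1f}%")   # 83.8%, shown rounded as 84%
print(f"Implied TC average: {tc_average:.1f}%")  # about 58.0%
```

The 83.8% figure rounds to the 84% shown in the report header.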

Office Action

Rejections: §101, §103, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a non-final Office action in response to applicant's communication filed on 4/30/2024. Claims 1-21 are pending and being considered.

Priority

Applicant's claim for the benefit of a prior-filed application (No. 63/612,405, filed on 12/20/2023) under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 10/09/2024, 4/16/2025, 4/23/2025, and 8/27/2025 have been considered. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, initialed and dated copies of Applicant's IDS forms 1449 filed as stated above are attached to the instant Office action.

Claim Objections

Claims 1-4, 11-14, and 21 are objected to because of the following informalities:

Claim 1 recites intended use. Claim 1, lines 10-11, recites "the prompt manager is configured to: formulate the prompt to instruct the GLM to perform a fraud detection task …". "[T]o perform a fraud detection task …" is intended use; the claim recites no step that can be understood as a positive action for performing a fraud detection task. At best, the claim recites "the GLM is configured to: receive from the prompt manager a prompt formulated in the natural language, and in response, output to the prompt manager a predicted token comprising a likely continuation of the received prompt". Similarly for claims 11 and 21.

Claim 1, line 11, "according to the target message" may read "according to the received target message". Similarly for claim 11, line 12, and claim 21, line 12; also claim 2, line 2; claim 3, line 3; claim 4, line 3; claim 12, line 2; claim 13, line 3; and claim 14, line 3.
Claim 1, lines 7-8, "… a predicted token comprising a likely continuation of …" may read "… a predicted token comprising a […]". Similarly for claim 11, line 8, and claim 21, line 8. Appropriate correction is suggested.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1)-706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1, 11, and 21 are provisionally rejected on the ground of nonstatutory double patenting as being anticipated by the corresponding claims of US Application No. 18/650,312 (hereinafter "'312"). Claim 1 (or 10, 19) of '312 discloses all of the limitations recited in claim 1 (similarly claims 11, 21) of the instant application, as seen in the comparison below.

Claim Comparison

Instant Application 18/650,276, claim 1 (similarly claims 11, 21):
A computer system comprising at least one hardware processor configured to execute a chatbot configured to output a verdict formulated in a natural language and indicating whether a received target message is indicative of fraud, the chatbot comprising a prompt manager communicatively coupled to a generative language module (GLM), wherein: the GLM is configured to: receive from the prompt manager a prompt formulated in the natural language, and in response, output to the prompt manager a predicted token comprising a likely continuation of the received prompt; and the prompt manager is configured to: formulate the prompt to instruct the GLM to perform a fraud detection task according to the target message, determine whether the predicted token comprises a pre-determined flag token, in response, if the predicted token comprises the flag token, initiate an execution of a code snippet, wherein executing the code snippet causes an update of the prompt, transmit the updated prompt to the GLM, and determine the verdict according to an output produced by the GLM in response to the updated prompt.

Co-pending Application 18/650,312, claim 1 (or 10, 19):

A computer system comprising at least one hardware processor configured to analyze a target text formulated in a natural language, wherein analyzing the target text comprises executing a prompt manager communicatively coupled to a generative language module (GLM), wherein: the GLM is configured to: receive from the prompt manager a language model (LM) prompt formulated in the natural language, and in response, output to the prompt manager a predicted token comprising a likely continuation of the received LM prompt; and the prompt manager is configured to: receive a logical prompt, the logical prompt including the target text and a code snippet comprising computer code for updating the LM prompt, formulate the LM prompt according to the logical prompt, the LM prompt instructing the GLM to perform a text analysis task according to the target text and in response, to output a flag token, in response to receiving the predicted token, initiate an execution of the code snippet to produce an updated LM prompt, transmit the updated LM prompt to the GLM, and formulate a reply message comprising a result of analyzing the target text, the reply message formulated according to an output of the GLM in response to the updated LM prompt.

Claims 1, 11, and 21 are also provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over the corresponding claims of US Application No. 18/513,750 (hereinafter "'750"), in view of Majmudar et al. (US12292915B1, hereinafter "Majmudar").
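As an editorial aside, the flag-token loop recited in instant claim 1 (formulate prompt, receive predicted token, check for the flag token, execute a code snippet that updates the prompt, resubmit, determine the verdict) can be sketched as a simple control flow. This is an illustrative reconstruction only, not code from any application cited here; the GLM is stubbed and every name (FLAG_TOKEN, run_snippet, prompt_manager) is hypothetical:

```python
FLAG_TOKEN = "<LOOKUP>"  # hypothetical pre-determined flag token

def glm(prompt):
    """Stub GLM: outputs a predicted token continuing the received prompt."""
    if FLAG_TOKEN not in prompt:
        return FLAG_TOKEN            # the model requests a snippet execution
    return "VERDICT: fraud"          # output after the prompt was updated

def run_snippet(prompt):
    """Hypothetical code snippet; executing it causes an update of the prompt."""
    return prompt + " " + FLAG_TOKEN + " [snippet result: sender is on a blocklist]"

def prompt_manager(target_message):
    # Formulate the prompt to instruct the GLM to perform a fraud detection
    # task according to the target message.
    prompt = f"Decide whether this message is fraud: {target_message}"
    predicted = glm(prompt)
    # Determine whether the predicted token comprises the flag token; if so,
    # initiate execution of the code snippet and transmit the updated prompt.
    if FLAG_TOKEN in predicted:
        prompt = run_snippet(prompt)
        output = glm(prompt)
    else:
        output = predicted
    # Determine the verdict according to the GLM's output.
    return "fraud" if "fraud" in output else "not fraud"

print(prompt_manager("You won a prize, send your card number."))  # -> fraud
```

The sketch makes the claim's single-round tool-call structure visible: the flag token is the hinge between pure generation and snippet-driven prompt updating.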
Claim 1 (or 10, 19) of '750 discloses all of the limitations recited in claim 1 (similarly claims 11, 21) of the instant application, as seen in the comparison below, except the limitation(s) highlighted in bold. In the same field of endeavor, Majmudar teaches:

determine whether the predicted token comprises a pre-determined flag token, in response, if the predicted token comprises the flag token, initiate an execution of a code snippet, wherein executing the code snippet causes an update of the prompt (Majmudar discloses a system and method for security threat mitigation for generative machine learning models of a large language model, see [Abstract]. And [Col. 16 ll. 4-7] FIG. 3 depicts an example in which the LLM-based system 100 of FIG. 1 is used to detect an invalid action result received in response to an application programming interface call executed as part of an action plan. And [Col. 16 ll. 24-32] the action result data may also be used to update the prompt data (by prompt generator 104) and perform recursive LLM inference in parallel with validation of the action result data by the action result validation component 142 (to reduce latency). However, the resulting inference output may not be acted upon until the action result validation component 142 has fully validated the result data (to ensure that no malicious prompt injection instruction data is detected)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Majmudar in the chatbot agent and threat analyzer of '750 by updating the prompt data and performing recursive LLM inference. This would have been obvious because a person having ordinary skill in the art would have been motivated by security threat mitigation for generative machine learning models (Majmudar, [Abstract]).

Claim Comparison

Instant Application 18/650,276, claim 1 (similarly claims 11, 21):

A computer system comprising at least one hardware processor configured to execute a chatbot configured to output a verdict formulated in a natural language and indicating whether a received target message is indicative of fraud, the chatbot comprising a prompt manager communicatively coupled to a generative language module (GLM), wherein: the GLM is configured to: receive from the prompt manager a prompt formulated in the natural language, and in response, output to the prompt manager a predicted token comprising a likely continuation of the received prompt; and the prompt manager is configured to: formulate the prompt to instruct the GLM to perform a fraud detection task according to the target message, determine whether the predicted token comprises a pre-determined flag token, in response, if the predicted token comprises the flag token, initiate an execution of a code snippet, wherein executing the code snippet causes an update of the prompt, transmit the updated prompt to the GLM, and determine the verdict according to an output produced by the GLM in response to the updated prompt.

Co-pending Application 18/513,750, claim 1 (or 10, 19):

A computer system comprising at least one hardware processor configured to execute a chatbot agent and a threat analyzer coupled to the chatbot agent, wherein: the chatbot agent is configured to: in response to receiving a natural language (NL) message from a user, formulate a language model prompt according to the NL message, transmit the language model prompt to a language model (LM) configured to determine an LM reply comprising a reply to the NL message, identify a target object for fraud analysis according to the LM reply, and transmit an indicator of the target object to the threat analyzer; and the threat analyzer is configured to: carry out a fraud analysis of the target object to determine whether the target object is indicative of fraud, and output a result of the fraud analysis to the chatbot agent for transmission to the user.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-21 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Eligibility Step 2A, Prong One: Claim 1 (similarly claims 11 and 21) recites "receive … a prompt formulated in the natural language, … output to the prompt manager a predicted token comprising a likely continuation of the received prompt", "formulate the prompt to instruct the GLM to perform a fraud detection task according to the target message", "determine whether the predicted token comprises a pre-determined flag token", and "determine the verdict according to an output produced by the GLM in response to the updated prompt". These would be interpreted as analogous to concepts relating to organizing or analyzing information in a way that can be performed mentally, i.e., human mental work. Accordingly, the claim recites the abstract idea. The determining limitations, as drafted, describe a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of the prompt manager and a generic GLM of a generic chatbot for the purpose of online fraud detection.
Nothing in the claim element precludes the steps from practically being performed in the mind. Accordingly, the claim recites an abstract idea.

Eligibility Step 2A, Prong Two: Claims 1, 11, and 21 recite the additional limitation of a "processor" to perform the steps discussed above. The limitations of "receive …", "formulate …", "determine whether the predicted token comprises a pre-determined flag token", and "determine the verdict …", as drafted, describe a process that, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components. That is, other than reciting a "processor", nothing in the claim element precludes the steps from practically being performed in the mind. Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application because the claim only recites the additional limitations of "initiate an execution of a code snippet, wherein executing the code snippet causes an update of the prompt" and "transmit the updated prompt to the GLM", which are merely generic and well-known terminology, and they do not amount to significantly more than the abstract idea. In addition, the claims only recite an additional element, the processor, to perform the "receive …", "formulate …", "determine whether …", and "determine the verdict …" steps. The processor is recited at a high level of generality (i.e., as a generic processor performing the generic computer functions of the above steps) such that it amounts to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Eligibility Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computing system to perform the "receive …", "formulate …", "determine whether …", and "determine the verdict …" steps amounts to no more than mere instructions to apply the exception using a generic computing system. Mere instructions to apply an exception using generic computing machines cannot provide an inventive concept. The claim is not patent eligible. Further elements recited within the dependent claims, taken individually, do not amount to significantly more than the abstract idea identified above. Therefore, the claims are not patent eligible.

Examiner Notes

The examiner cites particular paragraphs, columns, and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103, are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 8, 11-13, 18, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Taheri (US20250117630A1, hereinafter "Taheri"), in view of Meng et al. (US20200045066A1, hereinafter "Meng"), further in view of Majmudar et al. (US12292915B1, hereinafter "Majmudar").
Regarding claim 1, similarly claims 11 and 21, Taheri teaches: A computer system, a computer-implemented method, and a non-transitory computer-readable medium storing instructions, comprising at least one hardware processor configured to execute a chatbot configured to output a verdict formulated in a natural language and indicating whether a received target message is indicative of fraud, the chatbot comprising a prompt manager communicatively coupled to a generative language module (GLM) (Taheri discloses an apparatus and method for outputting, by a chatbot, a displayed text response to user input in a conversation using prompts between the user and the chatbot with a large language model (i.e., GLM), see [Abstract]. And [Claim 1] apparatus, [Claim 8] method, and [Claim 15] computer-readable storage medium. And refer to Fig. 1: User device, Application front-end and back-end, and LLM), wherein:

the GLM (Fig. 1, LLM 124) is configured to: receive from the prompt manager a prompt formulated in the natural language, and in response, output to the prompt manager a predicted token comprising a likely continuation of the received prompt (e.g., [0002] … converse with a user via a chatbot within a chat window of a software application, wherein the conversation comprises a reception of natural language inputs with requests. And [0005] receive an input from a user during a conversation that includes a plurality of prompts between the user and a chatbot within a chat window of a software application. And [0011] configured to one or more of receive a sequence of inputs from a user when in a conversation with a chatbot within a chat window of a software application, execute a large language model (LLM) on each input from the user to determine a next prompt (i.e., likely continuation of the received prompt) to output via the chatbot, respectively, wherein each execution of the LLM includes a new chat input from the user and a most-recent state of the conversation between the user and the chatbot within the chat window, and display the next prompt output by the chatbot within the chat window on a user device. And [0119] Referring to FIG. 8B, in 811, the method may include receiving a sequence of inputs from a user during a conversation between the user and a chatbot within a chat window of a software application. In 812, the method may include executing a large language model (LLM) on each input from the user to determine a next output to output via the chatbot, respectively, wherein each execution of the LLM includes a new chat input from the user and a most-recent state of the conversation between the user and the chatbot within the chat window. In 813, the method may include displaying the next output within a chat window on a user device with a description of the identified benefit obtained); and

the prompt manager (Fig. 1, Application Front-end and Application Back-end, or Fig. 2, Application 210) is configured to: [formulate the prompt to instruct the GLM to perform a fraud detection task according to the target message, determine whether the predicted token comprises a pre-determined flag token, in response, if the predicted token comprises the flag token, initiate an execution of a code snippet, wherein executing the code snippet causes an update of the prompt], transmit the updated prompt to the GLM, and determine the verdict according to an output produced by the GLM in response to the updated prompt (Fig. 8D, at steps 831 to 835, and [0127] Referring to FIG. 8D, in 831, the method may include receiving an input from a user during a conversation that includes a plurality of prompts between the user and a chatbot within a chat window of a software application. In 832, the method may include converting text content within the received input into a vector. In 833, the method may include executing a large language model (LLM) on the vector and a database of vectorized responses to identify a vectorized response to output from among the plurality of vectorized responses within the database. In 834, the method may include converting the vectorized response into a text response). (See Meng and Majmudar below for the limitations in brackets.)

While Taheri teaches the GLM and prompt manager as shown above, it does not teach the following; in the same field of endeavor, Meng teaches: formulate the prompt to instruct the GLM to perform a fraud detection task according to the target message (Meng discloses systems and methods for online fraud protection by inputting a received query with a user account to a deep neural network model to generate a prediction of whether the user account is fraudulent, see [Abstract]. And [0050] The training data can be generated from application-level event data received from one or multiple online service providers. In some implementations, the training labels are generated by unsupervised machine learning fraud detection algorithms, by human analysts that have domain expertise in fraud detection, or from user feedback, e.g., a credit card chargeback initiated by the fraud victim. In some implementations, the event data is processed to obtain common digital information (e.g., common digital information 18).
This common digital information can be used to generate feature vectors, e.g., feature vectors 24, as input training data (i.e., prompt)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Meng in the chatbot with LLM of Taheri by configuring input training data as input to a deep neural network model. This would have been obvious because a person having ordinary skill in the art would have been motivated to generate a prediction of whether the user account is fraudulent (Meng, [Abstract]).

The combination of Taheri and Meng does not teach the following; in the same field of endeavor, Majmudar teaches: determine whether the predicted token comprises a pre-determined flag token, in response, if the predicted token comprises the flag token, initiate an execution of a code snippet, wherein executing the code snippet causes an update of the prompt (Majmudar discloses a system and method for security threat mitigation for generative machine learning models of a large language model, see [Abstract]. And [Col. 16 ll. 4-7] FIG. 3 depicts an example in which the LLM-based system 100 of FIG. 1 is used to detect an invalid action result received in response to an application programming interface call executed as part of an action plan. And [Col. 16 ll. 24-32] the action result data may also be used to update the prompt data (by prompt generator 104) and perform recursive LLM inference in parallel with validation of the action result data by the action result validation component 142 (to reduce latency). However, the resulting inference output may not be acted upon until the action result validation component 142 has fully validated the result data (to ensure that no malicious prompt injection instruction data is detected)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Majmudar in the chatbot with LLM of Taheri-Meng by updating the prompt data and performing recursive LLM inference. This would have been obvious because a person having ordinary skill in the art would have been motivated by security threat mitigation for generative machine learning models (Majmudar, [Abstract]).

Regarding claim 2, similarly claim 12: the Taheri-Meng-Majmudar combination teaches the computer system of claim 1 and the method of claim 11. Meng further teaches: wherein the prompt manager is configured to formulate the prompt to instruct the GLM to determine whether the target message is indicative of fraud (Meng, [0053] The deep neural network model takes the common digital information as input and applies the model parameters to generate a model prediction output. The model prediction output can be a prediction of whether the particular user accounts associated with the common digital information are fraudulent). The same motivation as presented for claims 1 and 11 applies.

Regarding claim 3, similarly claim 13: the Taheri-Meng-Majmudar combination teaches the computer system of claim 2 and the method of claim 12. Meng further teaches: wherein the prompt manager is configured to formulate the prompt to further instruct the GLM to output the flag token in response to a determination by the GLM that the target message is indicative of fraud (Meng, [0054] The system provides respective output predictions to the online service provider 210. And [0055] One of the ways the system improves its prediction accuracy is by incorporating feedback regarding its previous predictions into an updated model or future models. The feedback module 28 receives false positive (FP) and false negative (FN) requests through a customized application programming interface (API). The feedback data is then incorporated in the next iteration of model training). The same motivation as presented for claims 1 and 11 applies.

Regarding claim 8, similarly claim 18: the Taheri-Meng-Majmudar combination teaches the computer system of claim 1 and the method of claim 11. Meng further teaches: wherein: the prompt manager is configured to formulate the prompt to instruct the GLM to output the flag token according to a result of the fraud detection task; and the update of the prompt instructs the GLM to perform another fraud detection task (Meng, e.g., [0050] The training data can be generated from application-level event data received from one or multiple online service providers. In some implementations, the training labels are generated by unsupervised machine learning fraud detection algorithms, by human analysts that have domain expertise in fraud detection, or from user feedback, e.g., a credit card chargeback initiated by the fraud victim. In some implementations, the event data is processed to obtain common digital information (e.g., common digital information 18). This common digital information can be used to generate feature vectors, e.g., feature vectors 24, as input training data (i.e., prompt)). The same motivation as presented for claims 1 and 11 applies.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over the Taheri-Meng-Majmudar combination as applied above to claims 1 and 11 respectively, further in view of Ghaeini et al. (US20250111202A1, hereinafter "Ghaeini").
Regarding claim 5, similarly claim 15: the Taheri-Meng-Majmudar combination teaches the computer system of claim 1 and the method of claim 11. The combination of Taheri-Meng-Majmudar does not specifically teach the following; in the similar field of endeavor, Ghaeini teaches: wherein: the prompt manager is configured to formulate the prompt to further include a placeholder token; and the update of the prompt comprises inserting a set of supplemental tokens into the prompt at a position of the placeholder token (Ghaeini discloses systems and methods for dynamically generating prompts for a generative artificial intelligence (AI) model. And [0048] Generation of the prompt, in some examples, includes accessing a template that includes static segments and dynamic segments with placeholders for dynamic data, such as the identified similar trait data and the input content 280. And [0049] In the example provided above, the output instructions request that the output includes a relevancy score, a justification, and the particular trait data (e.g., statement(s)) that input content 280 is found to fall under (e.g., classified as). In addition, the input content 280 is provided into the input content placeholder of the dynamic input content segment of the prompt).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Ghaeini in the chatbot with LLM of Taheri-Meng-Majmudar by including static segments and dynamic segments with placeholders for dynamic data. This would have been obvious because a person having ordinary skill in the art would have been motivated to dynamically generate prompts for a generative AI model with an output including an evaluation of input content (Ghaeini, [Abstract]).

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over the Taheri-Meng-Majmudar combination as applied above to claims 1 and 11 respectively, further in view of Cefalu et al. (US20230359903A1 (IDS), hereinafter "Cefalu").

Regarding claim 6, similarly claim 16: the Taheri-Meng-Majmudar combination teaches the computer system of claim 1 and the method of claim 11. The combination of Taheri-Meng-Majmudar does not specifically teach the following; in the similar field of endeavor, Cefalu teaches: wherein the update of the prompt comprises an action selected from a group consisting of inserting a plurality of supplemental tokens into the prompt and deleting a set of tokens from the prompt (Cefalu discloses a system and method of using an artificial intelligence (AI) model configured to accept text input and to perform deep learning to produce human-like text responsive to an input comprising tokens, see [Abstract]/[Title]. And [0045] Commands entered by a user into a GPT input prompt that are considered as related to undesired attributes are flagged and automatically removed by classifier 1000 from the input prompt before the GPT 100 processes the entry. The rules are custom configured on a platform-by-platform basis such that different entities can establish their custom policies and goals. Further, processor 1002 predicts subsequent words (which may be a token) and/or tokens 1006 (FIG. 13) that may follow an entered command that are considered by classifier 1000 to have undesired attributes, and prevents processing of the words and tokens 1006 by the GPT 100. Words and tokens 1006 that are part of a user-entered command are marked and flagged by processor 1002 for deletion and are automatically deleted from the user input in a way that is hidden from the user, in between when the user provides the input and when the input enters input 140 of GPT 100).
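The two prompt-update mechanisms cited against claims 5/15 and 6/16 (Ghaeini's substitution of dynamic data at a placeholder, Cefalu's deletion of flagged tokens) reduce to simple token-level string operations. A minimal sketch, with every token and name invented for illustration:

```python
PLACEHOLDER = "{SUPPLEMENTAL}"  # hypothetical placeholder token (claims 5, 15)
# Hypothetical flagged tokens with "undesired attributes" (claims 6, 16).
BLOCKED = {"ignore", "previous", "instructions"}

def insert_at_placeholder(prompt, supplemental_tokens):
    """Insert supplemental tokens at the position of the placeholder token."""
    return prompt.replace(PLACEHOLDER, " ".join(supplemental_tokens))

def delete_flagged(prompt):
    """Delete flagged tokens from the prompt before it reaches the model."""
    return " ".join(t for t in prompt.split() if t.lower() not in BLOCKED)

template = f"Classify the message. Evidence: {PLACEHOLDER}"
print(insert_at_placeholder(template, ["sender", "is", "unverified"]))
# -> Classify the message. Evidence: sender is unverified

print(delete_flagged("Please ignore previous instructions and reply"))
# -> Please and reply
```

The point of the sketch is only that both cited mechanisms operate on the prompt text itself, which is the territory the instant claims' "update of the prompt" limitations occupy.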
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Cefalu in the chatbot with LLM of Taheri-Meng-Majmudar by updating the prompt from user input and removing undesired attributes. The person having ordinary skill in the art would have been motivated to configure the AI model with trusted instructions (Cefalu, [Abstract]).

Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over the Taheri-Meng-Majmudar combination as applied above to claims 1 and 11, respectively, and further in view of Xu et al. (US20230412639A1, hereinafter "Xu").

Regarding claim 10 (similarly claim 20): the Taheri-Meng-Majmudar combination teaches the computer system of claim 1 and the method of claim 11. The combination of Taheri-Meng-Majmudar does not specifically teach, but in the similar field of endeavor Xu teaches: wherein the flag token comprises an attribute-value pair (Xu discloses methods and systems for a malware detection system that detects whether an on-chain program associated with a cryptographic token is malicious based on output of a machine learning model, see [Abstract]. [0061]: each attribute set comprises a value for one or more of creation block data, number of holders, number of transfers, transfer duration, earliest transfer, or latest transfer for each labeled cryptographic token of the respective labeled cryptographic tokens; generating a plurality of vectors based on the plurality of attribute sets; and training the machine learning model on the plurality of vectors to generate outputs of whether attributes for inputted cryptographic tokens indicate the malicious on-chain program).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Xu in the chatbot with LLM of Taheri-Meng-Majmudar by associating cryptographic tokens with attribute sets. The person having ordinary skill in the art would have been motivated to detect malicious on-chain programs associated with cryptographic tokens using a machine learning model (Xu, [Abstract]).

Allowable Subject Matter

Claims 4, 7, 9, 14, 17, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, as well as upon resolution of any outstanding informalities and the concerns under 35 U.S.C. 101 presented in this Office action. The following is a statement of reasons for the indication of allowable subject matter:

Claim 4 (similarly claim 14) depends on claim 3 (claim 13), which depends on claim 2 (claim 12), which depends on claim 1 (claim 11), and further specifies "wherein the prompt manager is configured to formulate the prompt to further instruct the GLM to output another pre-determined flag token in response to a determination by the GLM that the target message is not indicative of fraud".

Claim 7 (similarly claim 17) depends on claim 1 (claim 11) and further specifies "wherein the update of the prompt comprises inserting a sequence of supplemental tokens into the prompt, the sequence of supplemental tokens instructing the GLM to insert another pre-determined flag token into the prompt".

Claim 9 (similarly claim 19) depends on claim 1 (claim 11) and further specifies "wherein the prompt manager is configured to select the code snippet from a plurality of pre-determined code snippets according to the flag token".
The prior art identified (Taheri, Meng, Majmudar, Ghaeini, Cefalu, Xu), either singly or in combination, fails to anticipate or render obvious the claimed limitations of claims 4, 7, 9, 14, 17, and 19 shown above.

Citation of References

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are cited but not relied upon in this Office action: Golding (US20210264112A1) discloses a method for managing bot dialogue. Cidon et al. (US20190028499A1) discloses a system and method for AI-based anti-fraud user training and protection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL M LEE, whose telephone number is (571) 272-1975. The examiner can normally be reached M-F, 8:30 AM - 5:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shewaye Gelagay, can be reached at (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MICHAEL M LEE/
Primary Examiner, Art Unit 2436
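The placeholder-token and flag-token mechanism recited in the claims discussed above (a prompt manager that inserts supplemental tokens at a placeholder position, a flag token expressed as an attribute-value pair, and a code snippet selected according to the flag token) can be illustrated with a minimal sketch. This is not the applicant's or any cited reference's implementation; every name here (`PLACEHOLDER`, `TEMPLATE`, `SNIPPETS`, the `<verdict=...>` token syntax) is a hypothetical stand-in chosen for illustration only.

```python
import re

# Hypothetical placeholder token (claim 5): supplemental tokens -- the
# target message -- are inserted into the prompt at this position.
PLACEHOLDER = "<CONTEXT>"

# Prompt template instructing the GLM to emit a pre-determined flag token.
TEMPLATE = (
    "You are a fraud-detection assistant.\n"
    "Analyze the following target message:\n"
    + PLACEHOLDER + "\n"
    "If the message is indicative of fraud, output the flag token "
    "<verdict=fraud>; otherwise output <verdict=clean>."
)

# Pre-determined code snippets selected according to the flag token
# (claim 9). The snippet bodies are illustrative placeholders.
SNIPPETS = {
    "fraud": "quarantine(target_message)",
    "clean": "deliver(target_message)",
}

def formulate_prompt(target_message: str) -> str:
    """Insert supplemental tokens into the prompt at the position of the
    placeholder token (claim 5)."""
    return TEMPLATE.replace(PLACEHOLDER, target_message)

def parse_flag_token(glm_output: str) -> dict:
    """Parse a flag token of the form <attribute=value>, i.e. a flag token
    comprising an attribute-value pair (claim 10)."""
    match = re.search(r"<(\w+)=(\w+)>", glm_output)
    return {match.group(1): match.group(2)} if match else {}

prompt = formulate_prompt("Your account is locked; wire $500 to unlock.")
flag = parse_flag_token("<verdict=fraud>")  # simulated GLM output
snippet = SNIPPETS.get(flag.get("verdict"), "")
```

Under this sketch, the examiner's cited combination maps naturally: Ghaeini's static/dynamic template segments correspond to `TEMPLATE` and `PLACEHOLDER`, Cefalu's token insertion/deletion corresponds to editing the prompt string, and Xu's attribute sets correspond to the attribute-value form parsed by `parse_flag_token`.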

Prosecution Timeline

Apr 30, 2024
Application Filed
Oct 23, 2025
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596786
ANOMALOUS EVENT AGGREGATION FOR ANALYSIS AND SYSTEM RESPONSE
2y 5m to grant Granted Apr 07, 2026
Patent 12579301
Data Plane Management Systems and Methods
2y 5m to grant Granted Mar 17, 2026
Patent 12580927
DETECTING AND PROTECTING CLAIMABLE NON-EXISTENT DOMAINS
2y 5m to grant Granted Mar 17, 2026
Patent 12579279
System and Method for Summarization of Complex Cybersecurity Behavioral Ontological Graph
2y 5m to grant Granted Mar 17, 2026
Patent 12580938
CONDITIONAL HYPOTHESIS GENERATION FOR ENTERPRISE PROCESS TREES
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+44.1%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 259 resolved cases by this examiner. Grant probability derived from career allow rate.
