Prosecution Insights
Last updated: April 19, 2026
Application No. 18/347,527

SYSTEM AND METHOD OF AUTOMATICALLY GENERATING A NATURAL LANGUAGE WORKFLOW POLICY FOR A WORKFLOW FOR CUSTOMER SUPPORT OF EMAILS

Final Rejection: §101, §103, §DP
Filed: Jul 05, 2023
Examiner: GOLDBERG, IVAN R
Art Unit: 3619
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Forethought Technologies Inc.
OA Round: 2 (Final)
Grant Probability: 35% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 8m
Grant Probability With Interview: 72%

Examiner Intelligence

Career Allow Rate: 35% (128 granted / 365 resolved; -16.9% vs TC avg)
Interview Lift: +36.9% for resolved cases with interview
Avg Prosecution: 4y 8m typical timeline (57 currently pending)
Total Applications: 422 across all art units
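The headline figures above are simple arithmetic over the examiner's case counts. A minimal sketch, assuming the interview lift is measured against the career allow rate (an assumption the report's numbers are consistent with):

```python
# Sketch of the arithmetic behind the examiner statistics above.
# Counts (128 granted of 365 resolved) and the 72% with-interview rate
# are taken directly from the report.
granted, resolved = 128, 365

career_allow_rate = granted / resolved        # ~0.3507, shown as 35%

# Interview lift, read as the with-interview allow rate minus the
# career allow rate (assumed definition; the report does not state one).
with_interview_rate = 0.72
interview_lift = with_interview_rate - career_allow_rate

print(f"Career allow rate: {career_allow_rate:.0%}")   # Career allow rate: 35%
print(f"Interview lift: {interview_lift:+.1%}")        # Interview lift: +36.9%
```

The computed lift of +36.9 points matches the "+36.9% Interview Lift" figure exactly, which supports the assumed definition.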

Statute-Specific Performance

§101: 27.7% (-12.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 365 resolved cases.
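The per-statute deltas imply a Tech Center baseline for each statute. A small sketch recovering those implied averages, assuming each delta is in percentage points (examiner rate minus TC average):

```python
# Sketch: backing out the implied Tech Center baseline from the per-statute
# rates and deltas shown above (assumed to be percentage-point differences).
examiner_rate = {"101": 27.7, "103": 40.4, "102": 3.4, "112": 20.7}
delta_vs_tc = {"101": -12.3, "103": 0.4, "102": -36.6, "112": -19.3}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)
# Every statute backs out to the same ~40.0 baseline, which suggests the
# tool compares against a single overall Tech Center estimate rather than
# true per-statute averages.
```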

Office Action

Grounds: §101, §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

The following is a Final Office action. In response to Examiner’s Non-Final Rejection of 3/20/25, Applicant, on 7/28/25, amended claims. Claims 25-42 are pending in this application and have been rejected below.

Response to Amendment

Applicant’s amendments are acknowledged. The §112 rejections are withdrawn in light of the amendments.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 as follows: the later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Application No. 17/682,537, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. Claim 26 recites that the available tools comprise an API for making a network call; this is not found in 17/682,537. It is found, however, in 63/501,163 and 63/484,016, so the priority date here is treated as 2/9/2023 from 63/484,016. 
Information Disclosure Statement

The information disclosure statements (IDS) submitted on 6/12/25 and 9/17/25 are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 25-42 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e. an abstract idea) without reciting significantly more.

Step One - First, pursuant to step 1 in MPEP 2106.03, claim 25 is directed to a system, which is a statutory category.

Step 2A, Prong One - MPEP 2106.04 - Claim 25 recites: “implement a method of operating a chat… comprising: generating a granular taxonomy of topics from historical customer support tickets of agents' answers to customer questions; … generating natural language workflow policies for responding to customer support tickets for a plurality of topics, including: 1) identifying, for each topic of the plurality of topics, representative answers of agents from a collection of historic customer support tickets; 2) generating answer clusters for the representative answers of each topic of the plurality of topics; 3) inputting the answer clusters of each topic into a … language model and inferring for each topic a suggested natural language workflow policy including any associated available tools, actions, and text messages; 4) providing each suggested natural language workflow policy to an administrator; 5) receiving an input from an administrator to convert the suggested natural language workflow policy into an active natural language workflow policy; and answering a question of a customer by identifying a topic of the question based on the granular taxonomy, accessing the active natural language workflow policy associated with the topic, and 
using a … language model to convert the accessed active natural language workflow policy into a sequence of workflow steps to answer the question.”

As drafted, this is, under its broadest reasonable interpretation, within the abstract idea grouping of “certain methods of organizing human activity” (“managing personal behavior (including social activities, teaching, and following rules or instructions)”), as here we have answers for a question topic (which can be a refund - see [0092], [0204] as published), inferring a workflow policy formed in words (natural language) ([0137] – refund policy; [0247] – the workflow policy might be “Check whether the item was purchased within the last 30 days. If yes, give return instructions xyz. Otherwise, apologize because it's been longer than 30 days”), where a person (administrator) makes selections for what the policy/rules should be for responding to different questions. Accordingly, claim 25 is directed to an abstract idea because it lays out a workflow policy for a customer support ticket/issue, then infers a policy/rules/actions to resolve the issue. Steps also include the same analysis that occurs manually to identify representative answers (e.g. [0155] – algorithm to pick an answer as representative; tickets close together in similarity represented in dimensional space are also used for representative answers).

Step 2A, Prong Two - MPEP 2106.04 - This judicial exception is not integrated into a practical application. 
In particular, claim 25 recites additional elements that are: A system including a server having a processor and a memory having program instructions that when executed on the processor implement a method of operating a chatbot, comprising: … automatically generating natural language workflow policies for responding to customer support tickets for a plurality of topics, including: … 3) inputting the answer clusters of each topic into a large language model and inferring for each topic a suggested natural language workflow policy including any associated available tools, actions, and text messages; … answering a question of a customer by identifying a topic of the question based on the granular taxonomy, accessing the active natural language workflow policy associated with the topic, and using a large language model to convert the accessed active natural language workflow policy into a sequence of workflow steps to answer the question (MPEP 2106.05(f) applies – each limitation in the claim appears to involve an “AI model”, which is interpreted as using a computer and is considered “apply it” – applying the abstract idea on a computer – merely using a computer as a tool to perform an abstract idea; and MPEP 2106.05(h) (field of use) – for the combination of computer and “large language model” and “chatbot” used to do the analysis to find the representative answers).

Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. 
The claim also fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, and/or an additional element that applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See 84 Fed. Reg. 55. The claim is directed to an abstract idea.

Step 2B in MPEP 2106.05 - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a computer system and a “large language model” and a chatbot fall under MPEP 2106.05(f) (Mere Instructions to Apply an Exception – “Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” Alice Corp., 134 S. Ct. at 235) and “field of use” (MPEP 2106.05(h)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.

The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. The claim is not patent eligible. 
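For context on the claimed conversion of a natural language policy into workflow steps, the refund policy quoted from [0247] can be rendered as a short conditional sequence. This is a hypothetical illustration only; the application itself uses a large language model for this conversion, not hand-written rules:

```python
from datetime import date, timedelta

# Hypothetical stand-in for the "sequence of workflow steps" the claims say
# a large language model derives from the natural language policy quoted
# from [0247]. Not the applicant's implementation.
def refund_workflow(purchase_date: date, today: date) -> str:
    # Step 1: check whether the item was purchased within the last 30 days.
    if today - purchase_date <= timedelta(days=30):
        # Step 2a: if yes, give return instructions "xyz".
        return "Here are your return instructions: xyz"
    # Step 2b: otherwise, apologize because it's been longer than 30 days.
    return "Sorry, it has been longer than 30 days, so we can't process a return."

print(refund_workflow(date(2026, 4, 1), date(2026, 4, 19)))
```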
Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amount to significantly more than the abstract idea itself.

Independent claim 34 is directed to a method as described in the preamble. Step One - First, pursuant to step 1 in MPEP 2106.03, claim 34 is directed to a method, which is a statutory category. Claim 34 is rejected for similar reasons as claim 1 for step 2A, prong one; step 2A, prong two; and step 2B.

Claims 26, 35 recite additional elements – “available tools comprise an Application Programming Interface (API) for making a network call”. The limitations are similar to independent claims 25, 34 above. Here in claims 26, 35, using an API is interpreted as using a computer and is considered “apply it [abstract idea] on a computer” (MPEP 2106.05(f)), as it merely uses a computer as a tool to perform an abstract idea; having “applicable API calls” is also MPEP 2106.05(h) field of use, for naming a specific API for conducting the same operations. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim also fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, and/or an additional element that applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See 84 Fed. Reg. 55. The claim is directed to an abstract idea. 
Step 2B in MPEP 2106.05 - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a computer system and “API calls” fall under MPEP 2106.05(f) (Mere Instructions to Apply an Exception – “Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” Alice Corp., 134 S. Ct. at 235) and “field of use” (MPEP 2106.05(h)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.

Dependent claims 29, 38 recite “function calls”, which are rejected for similar reasons as claims 26, 35.

Dependent claims 27, 36 further narrow the abstract idea by describing that the suggested workflow has various conditions, responses, and actions.

Dependent claims 28, 37 further narrow the abstract idea by stating that the policy has at least one sentence (i.e. a description).

Dependent claims 30, 38 further narrow the abstract idea by having a person/admin select/customize/edit suggestions.

Dependent claims 31, 39 further narrow the abstract idea by grouping/clustering the topics. 
Dependent claims 32, 40, at Step 2A, Prong One - MPEP 2106.04 - are directed to an abstract idea as they recite: “prompting a … model with a natural language text description of the natural language workflow policy; prompting the … model with conversation information regarding a conversation between a customer and an … agent associated with the customer support ticket for the specific customer issue; prompting the … model with information describing applicable actions to implement the workflow, …; and prompting … model with at least one text message for the workflow; the … model observing the results of actions of the workflow, making decisions on information to request from the customer, and making decisions on actions to take to implement the workflow.”

As drafted, this is, under its broadest reasonable interpretation, within the abstract idea grouping of “certain methods of organizing human activity” (“managing personal behavior (including social activities, teaching, and following rules or instructions)”), as here we have a description of a workflow (which can be a refund or scheduling of a follow-up call – see [0142] as published), having a conversation associated with a customer issue, having actions to implement the workflow, having a “text message” for one of the workflows, then observing results of actions, making decisions on information to request from the customer, and making decisions on actions to implement the workflow. Accordingly, claim 32 is directed to an abstract idea because it lays out a workflow policy for a business problem that is part of a ticket for a customer issue, then takes actions for the workflow and determines actions to resolve the issue. Steps also include analyzing words to match to a topic, and making decisions on actions can be based on a confidence level (see [0100] as published) and scoring answers ([0079] as published), which can also occur in a manual manner. 
The additional elements of “large language model” and “API calls” and “chatbot” are considered “apply it [abstract idea] on a computer” (MPEP 2106.05(f)) and “field of use” (MPEP 2106.05(h)) for the same reasons as addressed above.

Claims 33, 42 narrow the abstract idea by stating rules in the form of “guard rails” for the model; this is also viewed as “apply it [abstract idea] on a computer” (MPEP 2106.05(f)) in the sense of providing the rules as programming for the large language model.

Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. For more information on 101 rejections, see MPEP 2106.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. 
Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 25-42 are rejected under 35 U.S.C. 103 as being unpatentable over Jonnalagadda (US 2021/0201144), Yaghoub-Zadeh-Fard et al., “REST2Bot: bridging the gap between bot platforms and REST APIs,” 2020, in Companion Proceedings of the Web Conference 2020 (pages 245-248) (hereinafter “Yaghoub”), and Williams (US 2019/0347668).

Concerning independent claim 25, Jonnalagadda discloses: A system including a server having a processor and a memory having program instructions that when executed on the processor implement a method of operating a chatbot (Jonnalagadda – see par 201 - The term “machine-readable medium” and “machine-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the presently disclosed technique and innovation.), comprising:

Jonnalagadda discloses in the background that an example of AI tools is “chatbots” (see par 3), and using conversation systems, where recipients communicate with an “automated machine” in a dialog or text message, or “chat” (see par 66). Yaghoub discloses “chatbot” (Yaghoub – see Abstract - bots emerged recently as natural interfaces to facilitate conversations between humans and API-accessible services; see page 246, col. 1, 2nd paragraph - conversational bots (e.g. flight booking) are useful, the premise of our research is that the ubiquity of such bots will have more value if they can easily integrate and reuse concomitant capabilities across large number of evolving and heterogeneous devices, data sources and applications (e.g. 
flight/hotel/car bookings all-in-one bot); see page 247, section 2.4 – Conversation Manager Generator - Conversation Manager Generator (CMG) is used to instantiate a conversation manager in a third-party bot development platform (e.g. Dialogflow)).

Jonnalagadda and Yaghoub disclose: generating a granular taxonomy of topics from historical customer support tickets of agents' answers to customer questions (Jonnalagadda – see par 77 – training of AI system, using AI models, CRM data to augment traditional training sets, and input from the training desk; social media exchanges may be useful as a training source; a business often engages directly with customers on social media, leading to conversations back and forth that are, again, specific and accurate to the business. As previously discussed, intents are a collection of categories used to answer some question about a document. For example, a question for the document could include “is the lead looking to purchase a car in the next month?” (“purchase a car?” discloses “topics”); answering this question can have direct and significant importance to a car dealership. Certain categories that the AI system generates may be relevant toward the determination of this question; see par 87 - a message builder 450 incorporates the actions into a message template; dynamic message building design depends on ‘message building’ rules in order to compose an outbound document. A rules child class is built to gather applicable phrase components for an outbound message. The message builder 450 may include a hierarchical conversation library 451 for storing all the conversation components for building a coherent message; the hierarchical conversation library 451 may be a large curated library, organized and utilizing multiple inheritance along a number of axes: organizational levels, access-levels (rep->group->customer->public). 
The hierarchical conversation library 451 leverages sophisticated library management mechanisms, involving a rating system based on achievement of specific conversation objectives, gamification via contribution rewards, and easy searching of conversation libraries based on a clear taxonomy of conversations and conversation trees; “tickets” disclosed by - see par 117 - objective definition can track the state of every target. The state of the conversation objectives can be tracked individually as shown below in Table 2 (e.g. Target ID, Conversation ID – disclosing “ticket”); see also FIG. 24, par 180 – “Lead information” – has a specific ID number in FIG. 24; FIG. 24 provides an example illustration of an interface 2400 showing a contact's record, with information pertinent to the contact, such as their name, email, ID number);

automatically generating natural language workflow policies for responding to customer support tickets for a plurality of topics (Jonnalagadda ‘144 - see par 76 - To perform analysis of responses correctly, natural language processing by the AI is required, and the AI (or multiple AI models) must be correctly trained to make the appropriate inferences and classifications of the response message; see par 80, FIG. 2, 4A – message generator 220 – receiving intent of last received response; this information is provided to an assistant manager 410, which leverages information about the various AI assistants from an assistant dataset 420. Different assistants may have access to differing… domain specific datasets. 
For example, a sales assistant may have access to product information; a sales assistant may react entirely differently from a customer service assistant given the same intent inputs (disclosing workflow policies/rules/details for solving different topics); see par 105 - the neural encoder accomplishes tasks by automatically deriving a list of intents that describe a conversational domain such that for every response from the user, and the AI agent's policy… to determine the agent's action. This derivation of intents uses data obtained from many enterprise assistant conversation flows; conversation flows can be various business functions; see par 88 - In addition to merely responding to a message with a response, the message builder 450 may also include a set of actions that may be undertaken linked to specific triggers; these actions and associations to triggering events may be stored in an action response library 452. For example, a trigger may include “Please send me the brochure.” This trigger may be linked to the action of attaching a brochure document to the response message, which may be actionable via a webhook or the like. The system may choose attachment materials from a defined library (SalesForce repository, etc.), driven by insights gained from parsing and classifying the previous response, or other knowledge obtained about the target, client, and conversation. Other actions could include initiating a purchase (order a pizza for delivery, for example) or pre-starting an ancillary process with data known about the target (kick off an application for a car loan, with name, etc. already pre-filled in, for example). 
Another action that is considered is the automated setting and confirmation of appointments), including:

1) identifying, for each topic of the plurality of topics, representative answers of agents from a collection of historic customer support tickets ([0240] as published - In block 5402, a large language model is optionally trained to infer a workflow policy, available tools, and text message for one or more workflows. In block 5406, representative answers are identified in customer tickets for a selected topic/intent. Jonnalagadda – see par 77 – training of AI system, using AI models, CRM data to augment traditional training sets, and input from the training desk; social media exchanges may be useful as a training source; a business often engages directly with customers on social media, leading to conversations back and forth that are, again, specific and accurate to the business. As previously discussed, intents are a collection of categories used to answer some question about a document. For example, a question for the document could include “is the lead looking to purchase a car in the next month?” (“purchase a car?” discloses “topics”); answering this question can have direct and significant importance to a car dealership. Certain categories that the AI system generates may be relevant toward the determination of this question; see par 87 - The hierarchical conversation library 451 leverages sophisticated library management mechanisms, involving a rating system based on achievement of specific conversation objectives, gamification via contribution rewards, and easy searching of conversation libraries based on a clear taxonomy of conversations and conversation trees; see par 114 - The user is then afforded the opportunity to modify the message templates to better reflect the new conversation (at 930). 
Since the objectives of many conversations may be similar, the user will tend to generate a library of conversations and conversation fragments that may be reused, with or without modification, in some situations.);

2) generating answer clusters for the representative answers of each topic of the plurality of topics (Jonnalagadda – see par 154 - typically in the industry, frequently asked questions are identified and provided by the clients and fed into the AI system; there are various lead responses that are best informed about the questions that were asked by leads to a particular client. Therefore, (1) the system may process the lead responses to detect the questions, (2) create topics, and (3) cluster them into a group of topics that had the same answer. Through such an analysis, it was found that a set of refined 11 clusters could answer more than 5000+ questions; see par 155 - the system can automatically extract not only the question but also learn how to answer a particular question (based on how the rep responded). The system uses this mechanism to automatically generate the question-answer pair and will send it to clients for approval. If approved and/or modified, this answer becomes an “approved answer” in the system that is used to generate the answer.);

3) inputting the answer clusters of each topic into a … language model and inferring for each topic a suggested natural language workflow policy including any associated available tools (for “tools” - claim 26 specifies – API for making a network call; paragraph 231 as published: “actions may include use of one or more tools (e.g. API calls)”) - Jonnalagadda discloses the limitations based on broadest reasonable interpretation in light of the specification - see par 149 - Returning to FIG. 
15, after NLG, this language may be used, along with other rule based analysis of intents, to formulate the action to be taken by the system (at 1580)… the action may additionally include other activities such as attaching a file to the message, setting up an appointment using scheduling software, calling a webhook, or the like; see par 134 - Rules are used to map the classifications to intents of the language. Classifications and intents are derived via both automated machine learned models as well as through human intervention via annotations. Additionally, external APIs may be leveraged in addition to, or instead of, internally derived methods for intent determination. Entity extraction may be completed using dictionary matches, recurrent neural networks (RNNs), regular expressions, open source third party extractors and/or external APIs),

actions ([0137] as published states “In some implementations, a workflow task building module 915 may use the macro code to trigger a workflow action, such as issuing a customer survey to solicit customer feedback, scheduling follow-up workflow actions, such as scheduling a refund, follow-up call, etc.” - Jonnalagadda discloses the limitations based on broadest reasonable interpretation in light of the specification – see par 88 – message builder includes a set of actions undertaken linked to specific triggers… stored in an action response library 452; for example, a trigger may include “Please send me the brochure.” This trigger may be linked to the action of attaching a brochure document to the response message, which may be actionable via a webhook or the like. Other actions could include initiating a … or pre-starting an ancillary process with data known about the target (kick off an application for a car loan, with name, etc. already pre-filled in, for example). See FIG. 5A, par 94 – input message 501 received that may be written in various platforms/formats; see par 95 – then sent to NLP server (shown in detail in FIG. 
5B); par 97 – NLP output 525 provided to a classifier 530 that leverages AI modeler 540 to classify responses (shown in detail in FIG. 5C); see par 100-102 - The reasoner portion of the neural encoder classifies the individual instance or sequence of these resulting vectors into a different instance or sequence, typically using … unsupervised approaches such as generative adversarial networks and auto-encoders (for reducing the dimensionality of data within the neural networks); see par 105 - the conversational AI system is able to predict how likely the user wanted to express intent, and the AI agent's policy can be evaluated using the intents and corresponding entities in the response to determine the agent's action. Each conversation flow was designed based on the reason for communication, the targeted goal and objectives, and key verbiage from the customer to personalize the outreach. These conversation flows are subdivided by their business functions (e.g., sales assistants selling automobiles, technology products, financial products, etc.); see par 136 - After language preference is determined, the system may determine what assistant, among a hierarchy of possible automated assistants, the contact is engaging with (at 1420). Each assistant includes different preferred classification and action response models, personality weights, and access permissions; par 149 - Returning to FIG. 15, after NLG, this language may be used, along with other rule based analysis of intents, to formulate the action to be taken by the system (at 1580). Generally, at a minimum, the action includes the sending of the generated message language back to the target; however, the action may additionally include other activities such as attaching a file to the message, setting up an appointment using scheduling software.),

and text messages ([0187] as published states “A workflow may be generated for a template based on a variety of different possible types of program synthesis. 
For example, a complete workflow with text messages, actions, and conditionals may be generated using template generation and program synthesis of workflow steps. FIG. 30A illustrates a high-level method in which template generation is performed in block 3020 and program synthesis is performed in block 3040.” For “text messages” - see par 94 - FIG. 5A is an example logical diagram of the message response system 230. In this example system, an input message 501 is initially received; see par 105 - the conversational AI system is able to predict how likely the user wanted to express intent, and the AI agent's policy can be evaluated using the intents and corresponding entities in the response to determine the agent's action. This derivation of intents uses data obtained from many enterprise assistant conversation flows. Each conversation flow was designed based on the reason for communication, the targeted goal and objectives, and key verbiage from the customer to personalize the outreach. These conversation flows are subdivided by their business functions (e.g., sales assistants selling automobiles, technology products, financial products); see par 171 - Turning now to FIG. 21, an example illustration 2100 is presented for an interface enabling a customer/user of the dynamic conversation system 108 to create their own business questions/intents and train the AI to respond to it accordingly; see par 141 - the transactional assistant can further perform natural language generation (NLG) for the response (at 1570). The NLG process is described in greater detail in relation to FIG. 16. NLG may include phrase selection and template population in much the manner already discussed).

Jonnalagadda states that it uses multiple components in a deep neural network including an encoder, reasoner, and decoder (See par 100, FIG. 
5A), that the neural encoder can use generative adversarial networks (See par 102), and that AI Platform will apply “large knowledge sets” to perform actions based upon models (See par 133, FIG. 12; par 165, FIG. 19). Williams discloses: 3) Inputting the answer clusters of each topic into a “large language model” and inferring for each topic a suggested natural language workflow policy including any associated available tools (Williams – see par 66 - The models 118, which may access a corpus of content extracted by crawling a relevant set of pages on the Internet, are applied to the key phrases 112 to establish the clusters, which arrange topics around a core topic based on semantic similarity. see par 70 - The tool may then use this, along with a large amount of crawled online content that was analyzed, or along with extracted information resulting from such crawling of online content and prior stored search criteria and results, which is now context-based, to validate a topic against various criteria; see par 142 - The machine learning system 212 may train a generative model using a corpus of text. For example, the machine learning system 212 may train a generative model used to generate professional messages using a corpus of messages, professional articles, emails, text messages, and the like. For example, the machine learning system 212 may be provided messages drafted by users for an intended objective. The machine learning system 212 may receive the messages, the intended objectives of the messages, and outcome data indicating whether the message was successful (e.g., generated a lead, elicited a response, was read by the recipient, and the like).).
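As an editorial aside (not part of the office action record): the claimed pipeline the examiner maps to Williams, clustering answers by topic and feeding each cluster to a large language model to suggest a workflow policy, can be sketched roughly as below. The similarity measure, the function names (`jaccard`, `cluster_answers`, `build_policy_prompt`), and the prompt wording are illustrative stand-ins, not drawn from any cited reference.

```python
# Editorial sketch (not from the record): grouping customer answers into topic
# clusters with a toy similarity measure, then building the prompt that would
# be sent to a large language model to suggest a workflow policy.
# All names here are hypothetical.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two answers (stand-in for embeddings)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def cluster_answers(answers, threshold=0.3):
    """Greedy single-pass clustering: join an answer to the first cluster whose
    seed answer is similar enough, otherwise start a new cluster."""
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if jaccard(ans, cluster[0]) >= threshold:
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters

def build_policy_prompt(topic, cluster):
    """Prompt an LLM could receive to infer a natural language workflow policy."""
    joined = "\n".join(f"- {a}" for a in cluster)
    return (f"Topic: {topic}\nAgent answers:\n{joined}\n"
            "Suggest a natural language workflow policy (conditions, responses, "
            "actions) and list any available tools (e.g. API calls) it needs.")

answers = [
    "please reset your password via the account page",
    "you can reset your password from the login screen",
    "your refund will arrive in 5 business days",
]
clusters = cluster_answers(answers)  # password-reset cluster, refund cluster
```

In practice the similarity step would use learned embeddings and the prompt would go to a hosted model; the sketch only shows the shape of the cluster-then-prompt flow at issue in the rejection.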
To the extent Jonnalagadda is considered only to disclose “calling a webhook” or leveraging “external APIs” for intent determination (See Jonnalagadda par 134, 149), Yaghoub discloses: 3) Inputting the answer clusters of each topic into a large language model and inferring for each topic a suggested natural language workflow policy including any associated “available tools” (claim 26 specifies – API for making a network call; paragraph 231 as published states “actions may include use of one or more tools (e.g. API calls)”). Yaghoub discloses the limitations based on broadest reasonable interpretation in light of the specification – See page 245, col. 2, 1st paragraph – Virtual assistants (also known as bots) serve a wide range of user tasks by mapping user utterances (also called user expressions) into appropriate intents. Examples include reporting weather, booking flights; page 245, col. 2, 2nd paragraph – page 246, col. 1, 1st paragraph - Developing a bot typically implies the ability to invoke APIs corresponding to user utterances (e.g., “what will the weather be like tomorrow in NYC?”). This is done in two phases as briefly shown in Figure 1: (i) training a Natural Language Understanding (NLU) model to map user utterances to intents, and (ii) developing Webhook functions to map intents to APIs. Machine-learning based NLU techniques require definition of intents (e.g., booking hotels), entity types (e.g., location, date), and a set of annotated utterances in which entities are labeled with the entity types and intents). Jonnalagadda and Yaghoub disclose: 4) providing each suggested natural language workflow policy to an administrator (Jonnalagadda - See par 184 - Turning to FIG. 27, an example interface 2700 for the training desk annotator is provided. This example interface is useful for training desk operators that enables annotation of responses to build and update the various AI models, as well as supporting conversation customizability.
The training desk relies upon having a human-in-the-loop to support conversations. see par 185 - In the present illustrative interface the current exchange status is indicated, along with the conversation subject and the response message (or portion of the response being analyzed). The annotator is presented with global intents to select from, and a series of variable intents and entities. See par 186 - The annotator is able to rapidly select intents, global intents, variable intents and entities and submit them for classification model refinement). 5) receiving an input from an administrator to convert the suggested natural language workflow policy into an active natural language workflow policy ([0232] as published states - In block 4810, an admin may be provided a preview of the execution of the workflow policy for one or more test cases. For example, an admin may tweak the text of the workflow in response to a preview. In block 4812, the workflow policy is implemented. Jonnalagadda ‘144 –see par 141 - the transactional assistant can further perform natural language generation (NLG) for the response (at 1570). NLG process is described in greater detail in relation to FIG. 16. If the intents are new (or a new combination of intents and entities for the given exchange) then it may be desirable to have human intervention (at 1616). See par 184, FIG. 27 - This example interface is useful for training desk operators that enables annotation of responses to build and update the various AI models, as well as supporting conversation customizability. The training desk relies upon having a human-in-the-loop to support conversations. See par 186, FIG. 28 - The annotator is able to rapidly select intents, global intents, variable intents and entities and submit them for classification model refinement. These intent selections are used by the system to generate a transition for the exchange using the action response model(s). 
This transition is then presented to the annotator for agreement or disagreement, as seen in FIG. 28 at the example interface 2800); and answering a question of a customer by identifying a topic of the question based on the granular taxonomy, accessing the active natural language workflow policy associated with the topic, and using a large language model to convert the accessed active natural language workflow policy into a sequence of workflow steps to answer the question (Applicant’s [0183] as published states “In block 2806, workflow sequences are generated for template answers for the selected topics.” [0185] as published states “In block 2908, the output of the fine-tuned generative AI model is used to generate recommended template answer(s) to an administrator/supervisor. In block 2910, optional revisions are received to the template answer(s). For example, an administrator or supervisor may be given options to accept a template answer, reject a template answer, or edit a template answer. The administrator or supervisor may be given a single template answer to approve or edit for a select topic. However, more generally, the administrator or supervisor may be given a selection of template answers to choose from for a given topic. In block 2912, the template answer is implemented to answer customer queries for a selected topic/intent.” Jonnalagadda describes the limitations based on broadest reasonable interpretation in light of the specification – See par 80, FIG. 2, 4A – message generator 220 – receiving intent of last received response; This information is provided to an assistant manager 410, which leverages information about the various AI assistants from an assistant dataset 420. Different assistants may have access to differing… domain specific datasets.
For example, a sales assistant may have access to product information; a sales assistant may react entirely differently from a customer service assistant given the same intent inputs (disclosing workflow policies/rules/details for solving different topics); See par 87, FIG. 4A - a message builder 450 incorporates the actions into a message template obtained from a template database 408 (when appropriate). Dynamic message building design depends on ‘message building’ rules in order to compose an outbound document. FIG. 4D provides an example of this message builder 450 that may include a hierarchical conversation library 451 for storing all the conversation components; The hierarchical conversation library 451 may be a large curated library, organized and utilizing multiple inheritance along a number of axes: organizational levels, access-levels (rep->group->customer->public). The hierarchical conversation library 451 leverages sophisticated library management mechanisms, involving a rating system based on achievement of specific conversation objectives; see par 155 - the system exposes to the clients (where rep can answer the client question) to these questions thereby enabling the system to automatically extract not only the question but also learn how to answer a particular question (based on how rep responded). The system uses this mechanism to automatically generate the question-answer pair and will send it to clients for approval. If approved and/or modified, this answer becomes an “approved answer” in the system that is used to generate the answer. See par 187, FIG. 29 - This platform enables multi-turn conversations, in multiple channels, multiple languages, supporting multiple AI assistants with multiple objectives. Using this platform the conversations can be edited and customized at differing levels, including system wide, industry vertical, customer and individual levels.
The platform organizes the conversations into “trees” which include the baseline text, input variables from third party systems, such as marketing automations, and CRM systems, synonym variables and phrase packages, time variables, and the like). Jonnalagadda, Yaghoub, and Williams are analogous art as they are directed to analyzing questions/conversations with people (see Jonnalagadda Abstract, par 66; Yaghoub Abstract; Williams Abstract, par 193). 1) Jonnalagadda discloses in background that an example of AI tools are “chatbots” (See par 3) and using conversation systems, where recipients communicate with an “automated machine” in a dialog or text message, or “chat” (See par 66). Jonnalagadda discloses that it can have an action of “calling” a webhook (see par 149) and using external APIs (See par 134). Yaghoub improves upon Jonnalagadda by disclosing API calls with a chatbot to respond appropriately based on intents such as “booking a hotel” and having bots for conversations. One of ordinary skill in the art would be motivated to further include API calls with a chatbot to efficiently improve upon the “automated machine” used for a chat (par 66) and the use of APIs in Jonnalagadda. 2) Jonnalagadda states that it uses multiple components in a deep neural network including an encoder, reasoner, and decoder (See par 100, FIG. 5A), that the neural encoder can use generative adversarial networks (See par 102), and that AI Platform will apply “large knowledge sets” to perform actions based upon models (See par 133, FIG. 12; par 165, FIG. 19). Williams improves upon Jonnalagadda and Yaghoub by explicitly disclosing using “a large language model” for a chatbot (see e.g. par 142).
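As an editorial illustration (not from the cited references), the two-phase pattern the rejection attributes to Yaghoub, (i) an NLU model mapping utterances to intents and (ii) webhook functions mapping intents to API calls, might be sketched as follows. The keyword matcher, the `INTENT_KEYWORDS`/`WEBHOOKS` tables, and the endpoint strings are hypothetical stand-ins for a trained NLU model and real service APIs.

```python
# Editorial sketch (not from the record) of the Yaghoub-style two-phase bot:
# phase (i) maps an utterance to an intent; phase (ii) maps the intent to an
# API call via a webhook table. All names and endpoints are hypothetical.

INTENT_KEYWORDS = {
    "get_weather": ["weather", "forecast"],
    "book_flight": ["flight", "fly"],
    "book_hotel": ["hotel", "room"],
}

def classify_intent(utterance: str) -> str:
    """Phase (i): map a user utterance to an intent.
    A keyword match stands in for a trained NLU model."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "fallback"

# Phase (ii): webhook functions mapping intents to (mock) API calls.
WEBHOOKS = {
    "get_weather": lambda slots: f"GET /weather?city={slots.get('city', '')}",
    "book_hotel": lambda slots: f"POST /bookings/hotel?city={slots.get('city', '')}",
}

def handle(utterance: str, slots: dict) -> str:
    """Route an utterance through both phases; escalate when no webhook exists."""
    intent = classify_intent(utterance)
    webhook = WEBHOOKS.get(intent)
    return webhook(slots) if webhook else "escalate to human agent"
```

For example, `handle("what will the weather be like tomorrow in NYC?", {"city": "NYC"})` resolves to the mock weather call, while an utterance with no matching intent falls through to escalation, mirroring the intent-to-API mapping the rejection relies on.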
One of ordinary skill in the art would be motivated to further include a large language model for a chatbot to efficiently improve upon the “automated machine” used for a chat (par 66) and the use of “large knowledge sets” with an encoder, reasoner, and decoder (See par 100; 133, 165) in Jonnalagadda and the bots for conversations that are trained in Yaghoub (See page 247, section 2.4). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scripting questions for an agent in Jonnalagadda, and to further answer questions using chatbots with API calls as disclosed in Yaghoub, and use a generative model trained on a corpus of text for drafting responses to messages in Williams, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success. Concerning independent claim 34, Jonnalagadda discloses: A method for responding to a customer service ticket comprising: Generating, using at least one server… (Jonnalagadda – see par 201 - The term “machine-readable medium” and “machine-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the presently disclosed technique and innovation; “tickets” disclosed by - See par 117 - Objective definition can track the state of every target. The state of the conversation objectives can be tracked individually as shown below in Table 2 (e.g. Target ID, Conversation ID – disclosing “ticket”); See also FIG. 24, par 180 – “Lead information” – has specific ID number in FIG. 24; FIG.
24 provides an example illustration of an interface 2400 showing a contact's record. Information pertinent to the contact, such as their name, email, ID number), comprising: Jonnalagadda discloses in background that an example of AI tools are “chatbots” (See par 3) and using conversation systems, where recipients communicating with an “automated machine” in a dialog or text message, or “chat” (See par 66). Yaghoub discloses “chatbot” (Yaghoub – see Abstract - bots emerged recently as natural interfaces to facilitate conversations between humans and API-accessible services; see page 246, col. 1, 2nd paragraph - conversational bots (e.g. flight booking) are useful, the premise of our research is that the ubiquity of such bots will have more value if they can easily integrate and reuse concomitant capabilities across large number of evolving and heterogeneous devices, data sources and applications (e.g. flight/hotel/car bookings all-in-one bot); see page 247, section 2.4 – Conversation Manager Generator - Conversation Manager Generator (CMG) is used to instantiate a conversation manager in a third-party bot development platform (e.g. Dialogflow)). Remaining limitations are similar to claim 25 above. The claim is rejected for the same reasons as claim 25 over Jonnalagadda, Yaghoub, and Williams. Concerning claims 26 and 35, Jonnalagadda, Yaghoub, and Williams disclose: The system of claim 25, wherein the available tools comprise an Application Programming Interface (API) for making a network call (Applicant’s [0177] as published states “a workflow steps program synthesis module 2704 generates workflow steps. A workflow step may, for example, include a message step or a network call having an API call step. A message step may correspond to a text message sent to a customer. 
An API call step may correspond to an agent triggering API calls using button clicks to implement a network call.” Yaghoub discloses the limitations based on broadest reasonable interpretation in light of the specification – See page 245, col. 2, 1st paragraph – Virtual assistants (also known as bots) serve a wide range of user tasks by mapping user utterances (also called user expressions) into appropriate intents. Examples include reporting weather, booking flights; page 245, col. 2, 2nd paragraph – page 246, col. 1, 1st paragraph - Developing a bot typically implies the ability to invoke APIs corresponding to user utterances (e.g., “what will the weather be like tomorrow in NYC?”). This is done in two phases as briefly shown in Figure 1: (i) training a Natural Language Understanding (NLU) model to map user utterances to intents, and (ii) developing Webhook functions to map intents to APIs. Machine-learning based NLU techniques require definition of intents (e.g., booking hotels), entity types (e.g., location, date), and a set of annotated utterances in which entities are labeled with the entity types and intents. See also Williams – see par 210 - the client-specific service system data structure may define the microservices that support the selected service features and may include the mechanisms by which those microservices are accessed (e.g., API calls that are made to the respective microservices and the customization parameters used to parameterize the API calls). It would have been obvious to combine Jonnalagadda, Yaghoub, and Williams for the same reasons as discussed with regards to claim 25. Concerning claims 27 and 36, Jonnalagadda, Yaghoub, and Williams disclose: The system of claim 25, wherein the suggested natural language workflow comprises conditions, responses, and actions (Applicant’s [0231] as published states “FIG. 48 is a flowchart of an example method of generating a natural language workflow policy to aid in solving a customer support ticket.
The workflow policy may include, for example, a description of conditions, responses, and actions.”). Jonnalagadda – see par 88 - For example, a trigger may include “Please send me the brochure.” This trigger may be linked to the action of attaching a brochure document to the response message, which may be actionable via a webhook or the like. see par 120 - In some embodiments, a single phrase can be chosen randomly from possible phrases for each template component. Alternatively, as noted before, phrases are gathered and ranked by “relevance”. Each phrase can be thought of as a rule with conditions that determine whether or not the rule can apply and an action describing the phrase's content. See par 121 - Relevance is calcula…

Prosecution Timeline

Jul 05, 2023
Application Filed
Mar 16, 2025
Non-Final Rejection — §101, §103, §DP
Jul 28, 2025
Response Filed
Oct 07, 2025
Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596970
SYSTEM AND METHOD FOR INTERMODAL FACILITY MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12591826
SYSTEM FOR CREATING AND MANAGING ENTERPRISE USER WORKFLOWS
2y 5m to grant Granted Mar 31, 2026
Patent 12586020
DETERMINING IMPACTS OF WORK ITEMS ON REPOSITORIES
2y 5m to grant Granted Mar 24, 2026
Patent 12579493
SYSTEMS AND METHODS FOR CLIENT INTAKE AND MANAGEMENT USING HIERARCHICAL CONFLICT ANALYSIS
2y 5m to grant Granted Mar 17, 2026
Patent 12555055
CENTRALIZED ORCHESTRATION OF WORKFLOW COMPONENT EXECUTIONS ACROSS SOFTWARE SERVICES
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
35%
Grant Probability
72%
With Interview (+36.9%)
4y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 365 resolved cases by this examiner. Grant probability derived from career allow rate.
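For readers checking the arithmetic: the 72% with-interview figure is consistent with adding the stated +36.9 percentage-point interview lift to the 35% base grant probability. This reading of the lift as a percentage-point delta is an editorial assumption, not a documented methodology.

```python
# Editorial arithmetic check (assumed methodology): interview lift read as a
# percentage-point delta over the base grant probability shown on this page.
base_rate = 0.35        # career allow rate / base grant probability
interview_lift = 0.369  # +36.9 percentage points with interview
with_interview = base_rate + interview_lift
assert round(with_interview * 100) == 72  # matches the 72% shown above
```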
