Prosecution Insights
Last updated: April 19, 2026
Application No. 18/891,705

WORKFLOW CREATION

Non-Final OA (§101, §103, §112)
Filed
Sep 20, 2024
Examiner
GOLDBERG, IVAN R
Art Unit
3619
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round
1 (Non-Final)
Grant Probability: 35% (At Risk)
OA Rounds: 1-2
To Grant: 4y 8m
With Interview: 72%

Examiner Intelligence

Grants only 35% of cases
Career Allow Rate: 35% (128 granted / 365 resolved; -16.9% vs TC avg)
Strong +37% interview lift
Interview Lift: +36.9% (resolved cases with interview)
Typical timeline
Avg Prosecution: 4y 8m (57 currently pending)
Career history
Total Applications: 422 (across all art units)
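The headline figures above fit together with simple arithmetic. Below is a minimal sketch; the derivation is an assumption about how the dashboard combines its own numbers, not something the tool documents:

```python
# Examiner career statistics shown above.
granted = 128
resolved = 365

# Career allow rate: granted cases as a share of resolved cases.
allow_rate = granted / resolved            # ~0.351, displayed as 35%

# The interview lift is reported in percentage points; adding it to the
# base rate reproduces the 72% "With Interview" figure shown above.
interview_lift = 0.369
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1%}")     # 35.1%
print(f"With interview:    {with_interview:.1%}")  # 72.0%
```

In other words, the 72% with-interview grant probability appears to be the 35% career rate plus the 36.9-point lift, rounded for display.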

Statute-Specific Performance

§101: 27.7% (-12.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 365 resolved cases
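Each "vs TC avg" delta is consistent with a single Tech Center baseline: subtracting the delta from the statute-specific rate recovers the same average in every row. A quick check (assuming the delta is a simple difference, which the dashboard does not state):

```python
# (allowance rate, delta vs TC average) per statute, in percent, as shown above.
stats = {
    "§101": (27.7, -12.3),
    "§103": (40.4, +0.4),
    "§102": (3.4, -36.6),
    "§112": (20.7, -19.3),
}

# Recover the implied TC average from each row: avg = rate - delta.
implied = {statute: round(rate - delta, 1)
           for statute, (rate, delta) in stats.items()}
print(implied)  # every statute implies the same 40.0% TC average estimate
```

So the black-line estimate appears to sit at roughly 40% for all four statutes.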

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

The following is a Non-Final Office action, responsive to Applicant's communication of 9/20/24, in which Applicant filed the application. Claims 1-20 are pending in this application and are rejected below.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/1/25 is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 11 recites the limitations "the response content" and "the user". There is insufficient antecedent basis for either limitation in the claim. Claim 11 depends from claims 1, 7, and 10. However, there is no previous recitation of "content" or "user." Claim 10 recites "in response to the first node being triggered, presenting the target question and the set of candidate answers." Examiner's best guess is that claim 11 could recite: "further comprising a response content comprises: a selection by a user of at least one candidate answers in the set of candidate answers, or a response message input by the user."

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.

Step One - First, pursuant to Step 1 in MPEP 2106.03, claim 1 is directed to a method, which is a statutory category.

Step 2A, Prong One (MPEP 2106.04) - Claim 1 recites:

A method for workflow creation, comprising: in response to a received operation (Applicant's [0036], [0043], and [0077] as published give examples of a user "editing" or creating a workflow), adding a first node associated with a question-and-answer interaction to a node connected graph (Applicant's [0038] as published gives an example where the "nodes" contain questions and answers); obtaining configuration information for the first node via a set of input controls associated with the first node, the configuration information indicating: a target question to be provided in response to the first node being triggered, and a target action to be executed by the first node based on a response received for the target question (Applicant's [0045] as published states "Additionally, the electronic device 110 may provide an input control 320 for configuring an input parameter of the first node 315"; FIG. 3A includes a drop-down menu with the ability to click "select" for 320; Applicant's [0053] as published gives examples where the target question can be a "follow-up" question; FIGS. 3C-3D give examples; [0064] as published gives the example "As shown in FIG. 3D, when the first node 315 is triggered to be executed, a target question 375 ("What do you think would be the next step for character X?") determined based on the question description text may be provided to the user. Further, three candidate answers 380-1, 380-2, and 380-3 configured based on the input controls 370-1 through 370-3 may also be provided."; [0067] as published gives an example where "actions" can be "various candidate answers or the further option"); and creating a target workflow based on the node connected graph.

As drafted, this is, under its broadest reasonable interpretation, within the abstract idea grouping of "certain methods of organizing human activity" (managing personal behavior or relationships or interactions between people, including following rules or instructions): the claim involves a user describing a process, and adding a "first node" associated with questions and answers to a node connected graph is a user deciding which steps/questions and subsequent questions/answers should be in the node-connected graph representing a series of instructions a user can follow in a workflow; the process then obtains from a user configuration information for a first node indicating a target question and a target action/option based on a response received for the question, then creates a target workflow. The claim is receiving a description for performing a process (e.g., a business process), then creating the workflow/nodes/steps for a user to view, where the steps can describe a character and "actions" a user can take. Accordingly, at this time, claim 1 is directed to an abstract idea.

Step 2A, Prong Two (MPEP 2106.04) - This judicial exception is not integrated into a practical application.
In particular, the claim recites additional elements that are: obtaining configuration information for the first node via a set of input controls associated with the first node. At this time, no computer is even recited to perform the method steps. Examiner recommends, as an initial step, amending the claim so that a computer performs each step. Once the computer is recited as performing each step, the claims, individually or when viewed in combination, are viewed as reciting the computer at a high level of generality (i.e., as a generic processor performing each step) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. See MPEP 2106.05(f). At best it is "field of use" (MPEP 2106.05(h)) for the "input controls," in that a computer is displaying a graphical user interface with various "controls" (e.g., FIG. 3A shows drop-down buttons and inputs for text areas). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim also fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, and/or an additional element that applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See 84 Fed. Reg. 55. The claim is directed to an abstract idea.

Step 2B (MPEP 2106.05) - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements in claim 1 of "computer" [once amended in] and "input controls" are "apply it" [the abstract idea] on a computer (see MPEP 2106.05(f) - Mere Instructions to Apply an Exception - "Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible." Alice Corp., 134 S. Ct. at 235) and "field of use" (MPEP 2106.05(h)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. The claim is not patent eligible.

Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amount to significantly more than the abstract idea itself.

Independent claim 12 is directed to an apparatus at Step 1, which is a statutory category. Claim 12 recites similar limitations as claim 1, though claim 12 recites a "processing unit, memory, and memory storing instructions executed by processing unit to cause electronic device to perform" each step. These are additional elements analyzed at Step 2A, Prong Two and Step 2B; individually or in combination, they are viewed as "apply it [the abstract idea] on a computer" (MPEP 2106.05(f)), as above, as if claim 1 were amended to have a computer perform each step.
Accordingly, claim 12 is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. The claim is not patent eligible.

Independent claim 20 is directed to an article of manufacture at Step 1, which is a statutory category. Claim 20 recites similar limitations as claims 1 and 12 and is rejected for the same reasons at Step 2A, Prong One. At Step 2A, Prong Two, claim 20 recites "a non-transitory computer readable storage medium… computer program executable by a processor" to perform each step. Similar to the analysis of claim 12 above, this is just "apply it" on a computer (MPEP 2106.05(f)) for the same reasons stated above with regard to claims 1 and 12 at Step 2A, Prong Two and Step 2B. The claim is not patent eligible.

Claims 2 and 13 have additional elements of "presenting a plurality of interface elements" corresponding to node types and "selection of a target interface element" for adding a node. Applicant's example appears to be a button, such as 305-4, for adding a "question and answer" node. This is viewed as "apply it [the abstract idea] on a computer" (MPEP 2106.05(f)) and "field of use" (MPEP 2106.05(h)) for similar reasons as claim 1.

Claims 3 and 14 narrow the abstract idea by obtaining "question description text" and determining a "target question" in response. The claims also have additional elements of a "first input control" (Applicant's [0048] as published states "use an input control 330 to obtain question description text, e.g., 'What do you think would be an appropriate name for {output}?'"). Similar to claims 1 and 12 above, this is also viewed as an additional element of "apply it [the abstract idea] on a computer" (MPEP 2106.05(f)) and "field of use" (MPEP 2106.05(h)) for displaying the input text area.

Claims 4 and 15 depend from claims 3 and 14 but narrow the abstract idea, as they only describe that the "question description text references a target data object," which conveys meaning to a human reader of the process in the workflow.

Claims 5 and 16 narrow the abstract idea by indicating that the first node corresponds to a first question type and determining parameter description information. The claims also have additional elements of a "second input control" (e.g., [0047] as published - input control 325 for different candidate question types, with a drop-down menu as an example) and a "third input control" (e.g., [0051] as published - input control 335 for "name", and a parameter for a character name). Similar to claims 1, 12 and 3, 14 above, this is also viewed as an additional element of "apply it [the abstract idea] on a computer" (MPEP 2106.05(f)) and "field of use" (MPEP 2106.05(h)) for displaying the input text area.

Claims 6 and 17 depend from claims 5 and 16 but only state that the parameter value is given to a third node in the graph. This narrows the abstract idea by further describing the process, which is a set of rules a user will follow.

Claims 7 and 18 narrow the abstract idea similarly to claim 5, and then give a user a set of candidate answers. The "fourth input control," similar to claims 1, 12 and 3, 14 above, is also viewed as an additional element of "apply it [the abstract idea] on a computer" (MPEP 2106.05(f)) and "field of use" (MPEP 2106.05(h)) for displaying the input text area.

Claims 8 and 19 trigger actions to occur. As best understood, this can be just presenting text to a user or describing answers (see, e.g., FIG. 3D - 380-1 "option A: action 1"). This just narrows the abstract idea; to the extent it is "by a computer" to do some nebulous act by the computer, it is viewed as "apply it [the abstract idea] on a computer" (MPEP 2106.05(f)).

Claim 9 is rejected for similar reasons as claim 8, as it just states that there is a specific number of nodes (four) and that some answers trigger the fourth node.

Claim 10 narrows the abstract idea by "presenting" the target question and a set of candidate answers.
To the extent it is "displayed/presented" by a computer, this is also viewed as an additional element of "apply it [the abstract idea] on a computer" (MPEP 2106.05(f)) and "field of use" (MPEP 2106.05(h)) for displaying the input text area.

Claim 11, as best understood, describes responses by a person: a selection of an answer, or a response message input by a user. This just narrows the abstract idea by gathering information from a user for following rules or instructions in a process/template/workflow.

Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. For more information on 101 rejections, see MPEP 2106.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 2018/0052664) in view of Moya (US 2023/0298568).

Concerning claim 1, Zhang discloses:

A method for … creation (Zhang - see par 45 - the virtual agent development engine 170 in this example may develop a customized virtual agent for a developer via a bot design programming interface provided to the developer. The virtual agent development engine 170 can work with multiple developers 160 at the same time. Each developer may request a customized virtual agent with a specific service or domain. see par 95 - FIG. 11 is a flowchart of an exemplary process of a virtual agent development engine, e.g., the virtual agent development engine 170 in FIG. 10, according to an embodiment of the present teaching. A bot design programming interface is provided at 1102 to a developer. see par 103 - as shown in FIG. 13A, the developer has selected a number of bot design graphical programming objects arranged in an order, i.e., a sequence of actions to be performed by the virtual bot currently being designed), comprising:

Zhang discloses a "flowchart" for an exemplary "process" for a bot that is designed/customized by a developer (see par 45, 95, 103). Zhang does not explicitly disclose the flowchart/process as a "workflow." Moya discloses:

A method for "workflow" creation (Moya - see par 57 - the system receives input data, e.g., from a human designer, and in response configures a directed acyclic graph (DAG) that represents a conversation flow. The directed acyclic graph comprises a set of nodes, and a DAG node includes a conversational bot prompt, a set of possible human responses to that prompt, and an indication of a default path when one of the set of possible responses is not received.
see par 58 - assume the human user does not want to answer the bot's questions, and instead asks their own. This is sometimes referred to herein as "Q/A content." Using the combined DAG and data model, the bot respects such choices, even if this sometimes results in adapting the conversation to a completely different flow. Based on the DAG, and in cases where the bot can answer an isolated (random) user question, the bot can then resume following the specified workflow where it left off. see par 59 - DAG-based workflow).

Zhang and Moya disclose:

in response to a received operation, adding a first node associated with a question-and-answer interaction to a node connected graph (Zhang - see par 45, FIG. 1 - The virtual agent development engine 170 in this example may develop a customized virtual agent for a developer via a bot design programming interface provided to the developer. The virtual agent development engine 170 can work with multiple developers 160 at the same time. Each developer may request a customized virtual agent with a specific service or domain. See par 101, FIG. 13A - an exemplary bot design programming interface 1300 for a developer; the disclosed system can present a plurality of bot design graphical programming objects 1311-1318 available to a developer; the bot design graphical programming object 1311 represents an "Information Collection" module which, once executed, causes the underlying virtual agent to take an action to collect information (from a chat user) needed for performing the task that the virtual agent is designed to perform; see par 103 - the developer can use such graphical bot design programming objects to quickly and efficiently program a virtual agent by arranging a sequence of actions to be performed by the virtual agent by simply dragging and dropping corresponding bot design graphical programming objects in a sequence);

obtaining configuration information for the first node via a set of input controls associated with the first node, the configuration information indicating: a target question to be provided in response to the first node being triggered, and a target action to be executed by the first node based on a response received for the target question (Applicant's [0045] as published states "Additionally, the electronic device 110 may provide an input control 320 for configuring an input parameter of the first node 315"; FIG. 3A includes a drop-down menu with the ability to click "select" for 320; Zhang - see par 103 - the developer can use such graphical bot design programming objects to quickly and efficiently program a virtual agent by arranging a sequence of actions to be performed by the virtual agent by simply dragging and dropping corresponding bot design graphical programming objects in a sequence; the sequence of actions is represented by (1) action 1302, set up by dragging and dropping bot design graphical programming object 1311 to collect information (1302 and 1311 disclose "input controls" for indicating the "target question"), (2) action 1304, set up by dragging and dropping bot design graphical programming object 1312 for the virtual bot to speak something to the chat user, and (3) action 1306, set up by dragging and dropping bot design graphical programming object 1313 to invoke an action via a specific service, e.g., weather.com (1304 and 1306 disclose "input controls" for indicating the "target action")); and

creating a target workflow based on the node connected graph (Zhang - see par 96 - At 1112, it is determined whether it is ready to integrate the program to generate the customized virtual agent. If so, the process goes to 1114, where program source codes are retrieved from a database based on visual inputs and/or the determined modules. Then the program codes are modified at 1116 based on a machine learning model. The modified program codes are integrated at 1118 to generate a customized virtual agent; see par 110 - the developer may set a condition for executing the application action module 1306, e.g., the application action module 1306 will only be executed when all parameters, e.g., city, date, etc., have been collected from the chat user; see also Moya for "workflow" - see par 59 - The DAG-based workflow herein preferably is implemented by an additional Action Selector and an additional Critic. In particular, "ActionSelectorMotivation" is an action selector that references the DAG to search from the root thereof to find a question for the bot to ask that the bot does not already know the answer to. Technically, the action selector is looking for a bot prompt that leads to assigning a value to a variable that the system does not know the value of yet. see par 65 - FIG. 5 depicts an example directed acyclic graph (DAG) 500 designed to implement a conversational flow. In this example, the DAG has been designed in a visual builder tool; the DAG depicts a number of top-down-oriented question and answer conversational pathways that the conversation may follow.).

Both Zhang and Moya are analogous art, as they are directed to developing a bot that asks questions to get responses from people in order to perform processes (see Zhang Abstract, par 83; Moya Abstract, par 57). Zhang discloses a "flowchart" for an exemplary "process" for a bot that is designed/customized by a developer (see par 45, 95, 103). Moya improves upon Zhang by disclosing workflow templates and configuration specifications (see par 32), where a workflow to be implemented with configuration data and parameters can then be saved as a template (see par 49). One of ordinary skill in the art would be motivated to further include explicitly stating that the process is a "workflow" to efficiently improve upon the customized question-and-answer design for a bot, such as one that asks which city to retrieve weather information for, in Zhang.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of having a GUI to design questions and responses for a bot in Zhang to further have workflow templates and configuration specifications (see par 32), where a workflow to be implemented with configuration data and parameters can then be saved as a template (see par 49), and where a human designer uses questions and answers in a directed acyclic graph (see par 57), as disclosed in Moya, since the claimed invention is merely a combination of old elements, in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable, with a reasonable expectation of success.

Concerning independent claim 12, Zhang discloses:

An electronic device (Zhang - see par 116 - FIG. 17 depicts the architecture of a computing device which can be used to realize a specialized system implementing the present teaching), comprising: at least one processing unit (Zhang - see par 116-117 - The computer 1700 also includes a central processing unit (CPU) 1720, in the form of one or more processors, for executing program instructions. see par 119-120 - computer or machine "readable medium" refers to any medium that participates in providing instructions to a processor for execution); and at least one memory, the at least one memory being coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the electronic device to perform operations comprising (Zhang - see par 117 - The exemplary computer platform includes an internal communication bus 1710, program storage and data storage of different forms, e.g., disk 1770, read only memory (ROM) 1730, or random access memory (RAM) 1740, for various data files to be processed and/or communicated by the computer, as well as possibly program instructions to be executed by the CPU. see par 119-120 - computer or machine "readable medium" refers to any medium that participates in providing instructions to a processor for execution).

The remaining limitations are similar to claim 1. Claim 12 is rejected for the same reasons as claim 1. It would have been obvious to combine Zhang and Moya for the same reasons as discussed with regard to claim 1.

Concerning independent claim 20, Zhang discloses:

A non-transitory computer readable storage medium having a computer program stored thereon, wherein the computer program is executable by a processor to implement a method comprising (Zhang - see par 117 - The exemplary computer platform includes an internal communication bus 1710, program storage and data storage of different forms, e.g., disk 1770, read only memory (ROM) 1730, or random access memory (RAM) 1740, for various data files to be processed and/or communicated by the computer, as well as possibly program instructions to be executed by the CPU. see par 119-120 - computer or machine "readable medium" refers to any medium that participates in providing instructions to a processor for execution).
The remaining limitations are similar to claim 1. Claim 12 is rejected for the same reasons as claim 1. It would have been obvious to combine Zhang and Moya for the same reasons as discussed with regards to claim 1. Concerning claims 2 and 13, Zhang discloses: The method of claim 1, wherein adding the first node associated with the question-and-answer interaction to the node connected graph in response to the received operation comprises: presenting a plurality of interface elements corresponding to a plurality of node types (Zhang – see FIG. 13A, par 101 – 1311-1318 - he disclosed system can present a plurality of bot design graphical programming objects 1311-1318 available to a developer, via the bot design programming interface 1300. Each of the plurality of bot design graphical programming objects represents a module corresponding to an action or a sub-task to be performed by the virtual agent; see par 103 - the sequence of actions is represented by (1) action 1302 set up by dragging and dropping bot design graphical programming object 1311 to collect information, (2) action 1304 set up by dragging and dropping bot design graphical programming object 1312 for the virtual bot to speaks something to the chat user, (3) action 1306 set up by dragging and dropping bot design graphical programming object 1313 to invoke an action via a specific service (e.g., weather.com)); and in response to a selection of a target interface element from the plurality of interface elements, adding the first node to the node connected graph, the target interface element corresponding to a question-and-answer node (Zhang - See par 101, FIG. 
13A - an exemplary bot design programming interface 1300 for a developer; disclosed system can present a plurality of bot design graphical programming objects 1311-1318 available to a developer; the bot design graphical programming object 1311 represents “Information Collection” module which, once executed, causes the underlying virtual agent to take an action to collect information (from a chat user) needed for performing the task that the virtual agent is designed to perform). Concerning claims 3 and 14, Zhang discloses: The method of claim 1, wherein obtaining the configuration information for the first node via the set of input controls associated with the first node comprises: obtaining question description text via a first input control in the set of input controls (Applicant’s [0048] as published states “use an input control 330 to obtain question description text, e.g., “What do you think would be an appropriate name for {{output}}?.” Zhang – See FIG. 13A, par 104 - the action of collecting information 1302, when executed, is to help to gather needed information from a chat user in order to provide the information the chat user is querying about. For example, the developer can make use of the collect information module 1302 to design how a chat bot is to collect information, e.g., the city to which a query about weather is directed; box includes question of “which city?”); and determining, based on the question description text, the target question to be provided in response to the first node being triggered (Zhang – see par 55 - see par 55 - real-time task manager 230 in this example may receive estimated user intent and dialog state data from the dynamic dialog state analyzer 210, customized FAQ data from the customized FAQ generator 220, and information from the customized task database 139 . Based on the dialog state and the FAQ data, the real-time task manager 230 may determine a next task for the service virtual agent 1 142 to perform. 
Such decisions may be made based also on information or knowledge from the customized task database 139. if an underlying task is assist a chat user to find the weather of a locale, the knowledge from the customized task database 139 for this particular tasks may indicate that for this particular task, a virtual agent or bot needs to collection information about the locale (city), date, or even time in order to proceed to get appropriate weather information;). Concerning claims 4 and 15, Zhang discloses: The method of claim 3, wherein the question description text references a target data object in a second node associated with the first node (Zhang – see par 55 - real-time task manager 230 in this example may receive estimated user intent and dialog state data from the dynamic dialog state analyzer 210, customized FAQ data from the customized FAQ generator 220, and information from the customized task database 139 . Based on the dialog state and the FAQ data, the real-time task manager 230 may determine a next task for the service virtual agent 1 142 to perform. Such decisions may be made based also on information or knowledge from the customized task database 139. For example, if an underlying task is assist a chat user to find the weather of a locale, the knowledge from the customized task database 139 for this particular tasks may indicate that for this particular task, a virtual agent or bot needs to collection information about the locale (city), date, or even time in order to proceed to get appropriate weather information; see par 104-105, FIG. 13A-13B – developer clicks on expand button 1332 to trigger pull down list of “different ways to answer “San Jose”; the disclosed system deploy a deep learning model to identify an entity name from various sentences or text strings. 
In this example, although there are different ways to answer “San Jose” to a question on “Which city,” the deep learning model can be trained to recognize the city name “San Jose” from all these various ways to say “San Jose.” (San Jose is the “target data object” of a “city” from the question “which city?”).

Concerning claims 5 and 16, Zhang discloses: The method of claim 1, wherein obtaining the configuration information for the first node via the set of input controls associated with the first node comprises: in response to a second input control in the set of input controls indicating that the first node corresponds to a first question type (Applicant’s [0047] as published states “the electronic device 110 may provide an input control 325 for configuring a question type corresponding to the first node 315. In some embodiments, the input control 325 may, for example, provide two candidate question types,” where 325 shows a drop-down menu with “open-ended question mode” being selectable; Zhang – see par 105 - In FIG. 13A, the answer to that question is “San Jose.” In FIG. 13B, a developer clicks on expand button 1332 (in FIG. 13A), which triggers a pull-down list of different ways to answer “San Jose” (or different ways to answer “which city?”)). While claim 5 only recites “first question type,” claim 7 appears to have “candidate answers,” leading to the possibility that claim 5 will at a future point be clarified as an “open-ended question.” Moya discloses an “open-ended” question – see par 60 - the directed graph is a graph of variables to be assigned, together with how to branch based on the variable(s) values that have been assigned. Associated variable definitions are referenced to determine what text prompt the bot should use to ask the user for any value of any variable.
As noted, the graph comprises nodes, where each node includes a bot prompt to ask the user a question, and a set of possible responses the human may give, e.g., by clicking a button, or typing open-ended natural language that gets classified into one of the labels recognized by a statistical model (as was described above); determining parameter description information via at least one third input control in the set of input controls, the parameter description information indicating at least one parameter to be determined based on the response ([0051] as published states “using the input control 335, that a name of the parameter to be extracted is ‘Name’, a type of the parameter to be extracted is a string type, and the parameter is configured to describe a character name.” Zhang – see par 91 - For example, for a weather agent having a module collecting information about the city in which weather is queried, the developer may enter several city names as examples. The machine learning engine 1016 may obtain training data from the training database 1018 and modify the codes to adapt to all city names as in the examples. Par 100 - utterance (b) above “What's the weather like in San Jose?” (1204) includes both the word “weather,” which can be used to trigger a weather virtual agent, and “San Jose,” which is a parameter needed by the weather virtual agent in order to check weather-related information. According to the present teaching, “San Jose” may be identified as a city name from the utterance. With this known parameter extracted from the utterance, the weather virtual agent, once triggered, no longer needs to ask the chat user for the city name. Moya discloses an “open-ended” question – see par 60 - the directed graph is a graph of variables to be assigned, together with how to branch based on the variable(s) values that have been assigned. Associated variable definitions are referenced to determine what text prompt the bot should use to ask the user for any value of any variable. As noted, the graph comprises nodes, where each node includes a bot prompt to ask the user a question, and a set of possible responses the human may give, e.g., by clicking a button, or typing open-ended natural language that gets classified into one of the labels recognized by a statistical model (as was described above); see par 28 - utterances such as described preferably are used as training data for a machine learning (ML)-based statistical classifier; upon training, the classifier is then useful both for checking for exact matches and for further generalization, i.e., finding other wordings that have a similar meaning to words and phrases recognized by the classifier). It would have been obvious to combine Zhang and Moya for the same reasons as discussed with regards to claim 1. In addition, Zhang discloses “analysis of the user's input may be achieved via natural language processing (NLP)” (see par 36, 51) and “the disclosed system may include an NLU (natural language understanding) based user intent analyzer 120” (see par 40). Moya improves upon Zhang by disclosing an “open-ended” question as part of the dialog flow, and then analyzing the utterances/words/text given, further improving upon the NLP and natural language understanding aspects of Zhang.

Concerning claims 6 and 17, Zhang discloses: The method of claim 5, further comprising: during execution of the target workflow and in response to a parameter value of the at least one parameter being determined based on the response, providing the parameter value of the at least one parameter to a third node associated with the first node (Applicant’s specification [0060] has the example “With continued reference to FIG. 3A, the node connected graph, for example, may further include a third node 340 connected to the first node 315.
The third node 340 may, for example, support referencing a parameter, e.g., ‘Name’, extracted by the first node 315.” Zhang – see par 100 - utterance (b) above “What's the weather like in San Jose?” (1204) includes both the word “weather,” which can be used to trigger a weather virtual agent, and “San Jose,” which is a parameter needed by the weather virtual agent in order to check weather-related information; see par 104 - the developer can make use of the collect information module 1302 to design how a chat bot is to collect information, e.g., the city to which a query about weather is directed).

Concerning claims 7 and 18, Zhang discloses: The method of claim 1, wherein obtaining the configuration information for the first node via the set of input controls associated with the first node comprises: in response to a second input control in the set of input controls indicating that the first node corresponds to a second question type (Zhang – see par 105 - In FIG. 13A, the answer to that question is “San Jose.” In FIG. 13B, a developer clicks on expand button 1332 (in FIG. 13A), which triggers a pull-down list of different ways to answer “San Jose” (or different ways to answer “which city?”)), determining a set of candidate answers to the target question via at least one fourth input control in the set of input controls (Applicant’s [0063] as published states “FIG. 3C, for a question with preset options, the electronic device 110 may configure a set of candidate answers to the target question through, for example, an input control 370-1, an input control 370-2, and an input control 370-3.” [0064] states “As shown in FIG. 3D, when the first node 315 is triggered to be executed, a target question 375 (‘What do you think would be the next step for character X?’) determined based on the question description text may be provided to the user. Further, three candidate answers 380-1, 380-2, and 380-3 (e.g., Action 1, Action 2, or Action 3) configured based on the input controls 370-1 through 370-3 may also be provided.”). Zhang – see par 37 - More specifically, based on machine learning and AI techniques, the disclosed system can learn how to strategically ask users questions and present intermediate candidates to the users based on historical human-human, human-machine, or machine-machine conversation data, together with human or machine action data that involves calling third-party applications, services, or databases. The disclosed system can also learn and build/enlarge a high-quality answer knowledge base by identifying important frequent questions from historical conversational data and proposing newly identified FAQs and their answers to be added to the knowledge base, which may be reviewed by human agents. See par 91 - For example, for a weather agent having a module collecting information about the city in which weather is queried, the developer may enter several city names as examples. The machine learning engine 1016 may obtain training data from the training database 1018 and modify the codes to adapt to all city names as in the examples. Par 100 - utterance (b) above “What's the weather like in San Jose?” (1204) includes both the word “weather,” which can be used to trigger a weather virtual agent, and “San Jose,” which is a parameter needed by the weather virtual agent in order to check weather-related information. According to the present teaching, “San Jose” may be identified as a city name from the utterance. With this known parameter extracted from the utterance, the weather virtual agent, once triggered, no longer needs to ask the chat user for the city name).

Concerning claims 8 and 19, Zhang discloses: The method of claim 7, wherein the configuration information further indicates: a first action to be triggered for execution by a target candidate answer in the set of candidate answers (Examiner notes the claim is in the alternative. Nonetheless, art is applied to each limitation. Zhang – see par 106 - In this example, after the chat user answers “San Jose,” the weather virtual agent may proceed to gather the weather information on San Jose and, during that time, the weather virtual agent is programmed to use the first “bot says” module 1304 to let the chat user know the status by saying “Just a moment, searching for weather for you . . . ”), and/or a second action to be triggered for execution in response to none of the set of candidate answers being matched (Applicant’s [0067] as published states “As shown in FIG. 3C, the electronic device 110 may further receive a configuration operation that indicates actions corresponding to various candidate answers or the further option. For example, the electronic device 110 may add a corresponding fourth node following a corresponding input control to indicate an action triggered to be executed when the candidate answer is matched or none of the candidate answers are matched.” Zhang – see FIG. 13A, 1306 “city doesn’t match with previous definition, day has not been collected”; see par 108 - the developer can make use of the application action module 1306 to interface with an external weather reporting service such as Yahoo! Weather to gather weather information for a specific city on a given date, or by running an embedded internal application for weather-related information gathering. In this example, based on the chat user's input, the virtual agent may also generate warnings, e.g., a warning that the city does not match with a previous definition when the city provided by the chat user is not previously defined.). It would have been obvious to combine Zhang and Moya for the same reasons as discussed with regards to claim 1.
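For readers mapping the claims-7/8 limitations, the claimed configuration — a set of candidate answers plus a first action for a matched target answer and a second action when no candidate matches — can be sketched as a small data structure. This is an illustrative sketch only; all names (`QuestionNode`, `fetch_weather_san_jose`, etc.) are hypothetical and are not code from Zhang, Moya, or the application.

```python
from dataclasses import dataclass, field

@dataclass
class QuestionNode:
    """Hypothetical sketch of a preset-option question node (cf. claims 7-8)."""
    question: str
    candidate_answers: list
    matched_actions: dict = field(default_factory=dict)  # answer -> action id
    fallback_action: str = "warn_no_match"               # fires when nothing matches

    def resolve(self, user_response: str) -> str:
        # First action: triggered by a target candidate answer being matched.
        if user_response in self.candidate_answers:
            return self.matched_actions.get(user_response, "default_action")
        # Second action: triggered when none of the candidate answers match.
        return self.fallback_action

node = QuestionNode(
    question="Which city?",
    candidate_answers=["San Jose", "Beijing"],
    matched_actions={"San Jose": "fetch_weather_san_jose"},
)
print(node.resolve("San Jose"))  # matched candidate triggers its configured action
print(node.resolve("Paris"))     # unmatched input falls through to the second action
```

The point of the sketch is only that the two claimed actions are distinct configuration slots, which is how the Office Action reads Zhang's FIG. 13A warning path against the "none matched" alternative.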
Concerning claim 9, Zhang discloses: The method of claim 8, wherein the node connected graph comprises a connection element between the target candidate answer and a fourth node, the connection element indicating that the target candidate answer triggers the fourth node to be executed (Zhang – see par 55 - For example, if an underlying task is to assist a chat user to find the weather of a locale, the knowledge from the customized task database 139 for this particular task may indicate that a virtual agent or bot needs to collect information about the locale (city), date, or even time in order to proceed to get appropriate weather information; see par 108 - based on the chat user's input, the virtual agent may also generate warnings, e.g., a warning that the city does not match with a previous definition when the city provided by the chat user is not previously defined, or a warning that the date has not been collected when the virtual agent does not have the information about the date for the weather search; see par 110 - As shown in FIG. 13A (see 1332 with “plus sign” for adding more information), although a module may be executed without any condition (or unconditionally), the developer may also set a condition under which the module is to be executed. For example, as shown, the developer may set a condition for executing the application action module 1306, e.g., the application action module 1306 will only be executed when all parameters, e.g., city, date, etc., have been collected from the chat user.).
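The claim-9 mapping combines two mechanisms from Zhang's par 110: a connection element routing a target candidate answer to a downstream node, and a gating condition under which a module executes only once all required parameters have been collected. A minimal sketch, under the assumption (not stated in either reference) that connection elements can be modeled as (node, answer) → node lookups:

```python
def can_execute(required_params, collected):
    """Analogue of Zhang par 110: run a module only when every required
    parameter (e.g. city, date) has been collected from the chat user."""
    return all(p in collected for p in required_params)

# Hypothetical connection elements: (source node, candidate answer) -> target node
# (cf. claim 9's element between the target candidate answer and a fourth node).
edges = {("city_question", "San Jose"): "weather_lookup_node"}

def next_node(current, answer):
    # Returns None when no connection element fires for this answer.
    return edges.get((current, answer))

required = ["city", "date"]
print(can_execute(required, {"city": "San Jose"}))                  # date still missing
print(can_execute(required, {"city": "San Jose", "date": "today"}))  # gate now open
print(next_node("city_question", "San Jose"))
```

The design point is that routing (which node fires) and gating (whether it may fire yet) are separate checks, matching the Office Action's reading of the warnings in par 108 versus the execution condition in par 110.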
Concerning claim 10, Zhang discloses: The method of claim 7, further comprising: during execution of the target workflow and in response to the first node being triggered, presenting the target question and the set of candidate answers (Zhang – see par 37 - the disclosed system can learn how to strategically ask users questions and present intermediate candidates to the users based on historical human-human, human-machine, or machine-machine conversation data, together with human or machine action data that involves calling third-party applications, services, or databases; see FIG. 8, par 79 - It also provides agent-selectable actions (860) which may be presented, once clicked, as a drop-down list, and editable tags (870). The bot-assisted agent may also add topic tags about the current chat. The agent is assisted by a bot. For example, when the chat user asked “What is your return policy?” (in 840), the bot that is assisting the human agent provides a list of possible responses corresponding to a list of possible utterances tagged as “Assisted by Rulai.”).

Concerning claim 11, Zhang discloses: The method of claim 10, wherein the response content comprises: a selection of at least one candidate answer in the set of candidate answers, or a response message input by the user (Zhang – see par 106 - after the chat user answers “San Jose,” the weather virtual agent may proceed to gather the weather information on San Jose).
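The claims-10/11 runtime behavior — present the target question with its candidate answers, then treat the response content as either a selection from the set or a free-form message — can be illustrated as follows. This is a hypothetical sketch (the function names and prompt format are invented for illustration), not an implementation from the application or the cited art:

```python
def present(question, candidates):
    """Claim 10 analogue: on the node being triggered, present the target
    question together with the set of candidate answers."""
    return question + " " + " / ".join(candidates)

def classify_response(response, candidates):
    """Claim 11 analogue: response content is either a selection of a
    candidate answer or a free-form message input by the user."""
    if response in candidates:
        return ("selection", response)
    return ("free_form", response)

cities = ["San Jose", "Beijing"]
print(present("Which city?", cities))
print(classify_response("San Jose", cities))        # a selected candidate answer
print(classify_response("somewhere warm", cities))  # a free-form user message
```

Incidentally, the two-branch classification also shows why the §112(b) rejection of claim 11 matters in practice: "the response content" and "the user" need antecedents (e.g., in claim 10's presenting step) before the selection/free-form alternative is well defined.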
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Ross, “The Programmer’s Assistant: Conversational Interaction with a Large Language Model for Software Development,” 2023, In Proceedings of the 28th International Conference on Intelligent User Interfaces, pages 491-514 – directed to software development where engineers converse with a code-fluent LLM (large language model) (See abstract)

Ray (US 2019/0213254) – directed to having a graph with suggested nodes for next steps for a sample script for a story with characters (See par 96, FIG. 8; Abstract)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN R GOLDBERG, whose telephone number is (571) 270-7949. The examiner can normally be reached 8:30 AM - 4:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe, can be reached at 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /IVAN R GOLDBERG/ Primary Examiner, Art Unit 3619

Prosecution Timeline

Sep 20, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596970
SYSTEM AND METHOD FOR INTERMODAL FACILITY MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12591826
SYSTEM FOR CREATING AND MANAGING ENTERPRISE USER WORKFLOWS
2y 5m to grant Granted Mar 31, 2026
Patent 12586020
DETERMINING IMPACTS OF WORK ITEMS ON REPOSITORIES
2y 5m to grant Granted Mar 24, 2026
Patent 12579493
SYSTEMS AND METHODS FOR CLIENT INTAKE AND MANAGEMENT USING HIERARCHICAL CONFLICT ANALYSIS
2y 5m to grant Granted Mar 17, 2026
Patent 12555055
CENTRALIZED ORCHESTRATION OF WORKFLOW COMPONENT EXECUTIONS ACROSS SOFTWARE SERVICES
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
35%
Grant Probability
72%
With Interview (+36.9%)
4y 8m
Median Time to Grant
Low
PTA Risk
Based on 365 resolved cases by this examiner. Grant probability derived from career allow rate.
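The headline projections follow from simple ratios over the counts shown on this page. As a rough check — assuming (since the page does not state its methodology) that the interview lift is additive in percentage points on top of the career allow rate:

```python
# Counts shown in the dashboard above.
granted, resolved = 128, 365
interview_lift = 0.369  # +36.9 points with interview, per the dashboard

allow_rate = granted / resolved          # career allow rate
with_interview = allow_rate + interview_lift  # assumed additive lift

print(round(allow_rate * 100))      # ~35, matching "35% Grant Probability"
print(round(with_interview * 100))  # ~72, matching "72% With Interview"
```

If the tool instead computes the with-interview figure from the interview-case subset directly, the arithmetic differs, but the displayed numbers are consistent with the additive reading.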
