Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of the Application
The following is a Final Office Action.
In response to the Examiner's communication of 9/17/2025, Applicant responded on 1/16/2026, amending claims 1, 3-5, 7, 9, 12, 14-16, 18 and 20 and cancelling claims 2, 6 and 13.
Claims 1, 3-5, 7-12 and 14-20 are pending in this application and have been examined.
Response to Amendment
Applicant's amendments to claims 1, 3-5, 7, 9, 12, 14-16, 18 and 20 are not sufficient to overcome all of the 35 USC 112 rejections set forth in the previous action. The amendments are sufficient to overcome some of the 35 USC 112 rejections, and those rejections are withdrawn.
Applicant's amendments to claims 1, 3-5, 7, 9, 12, 14-16, 18 and 20 are not sufficient to overcome the 35 USC 101 rejections set forth in the previous action.
Applicant's amendments to claims 1, 3-5, 7, 9, 12, 14-16, 18 and 20 are not sufficient to overcome the prior art rejections set forth in the previous action.
Response to Arguments – 35 USC § 101
Applicant’s arguments with respect to the rejections have been fully considered, but they are not persuasive.
Applicant submits, “…The claims reduce this "large amount of computational power" by, instead of executing machine learning to generate evaluation categories for every interaction intent, the claim require "generating one or more evaluation categories for the new interaction intents using machine learning" (Claim 1 as representative, emphasis added). Executing the machine learning for the different new interaction intents, compared to a naive approach that would execute machine learning for every interaction intent, reduces the number of times the machine learning is executed and thus, the computational power the computer executing the machine learning consumes. The claims further reduce computational power by, instead of generating an evaluation form from scratch for all evaluation questions, the claim require "generating an evaluation form by updating evaluation questions in previously generated evaluation forms refined based on the generated evaluation questions for the evaluation categories for the new interaction intents" (Claim 1 as representative, emphasis added). Reusing and refining an initial "previously generated" evaluation form based on its change to update evaluation questions for "new interaction intents" is more efficient and uses less computing resources than generating an evaluation form from scratch. The USPTO December 4, 2025 Memorandum on Subject Matter Eligibility Declarations explains, "when the claimed system changes the architecture itself-e.g., how information flows, not just what it does-that may satisfy eligibility." (pp. 1-2). Indeed, the claims are directed to a streamlined algorithm that reduces the "large amount of computational power" used for machine learning and reuses previously generated data structures to improve the efficiency of the computer…” The Examiner respectfully disagrees.
While Applicant's amendments further prosecution, unlike the 2025 Memo and Ex Parte Desjardins, the claims and the argued elements, directed to, …generating evaluation forms…generating an evaluation form by updating evaluation questions in previously generated evaluation forms refined based on the generated evaluation questions for the evaluation categories for the new interaction intents…Reusing and refining an initial "previously generated" evaluation form based on its change to update evaluation questions for "new interaction intents" is more efficient…, address a problem that is a mental process (i.e., a human generating questions for evaluation forms to evaluate other humans) and a method of organizing human activity (i.e., a human generating questions for evaluation forms to evaluate other humans), as established in Step 2A Prong 1. This problem does not specifically arise in the realm of computer technology; rather, it existed and was addressed long before the advent of computers. Thus, the claims do not recite a technical improvement to a technical problem. Additionally, pursuant to the broadest reasonable interpretation, as an ordered combination, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea, and thus amounts to no more than applying the abstract idea with generic computer components, i.e., a computer and machine learning. Further, these additional elements generally link the abstract idea to a technical environment, namely the environment of a computer, performing extra-solution activities. Therefore, as a whole, the additional elements do not integrate the abstract idea into a practical application under Step 2A Prong 2 or amount to significantly more under Step 2B.
Even novel and newly discovered judicial exceptions are still exceptions, despite their novelty. July 2015 Update, p. 3; see SAP America, Inc. v. InvestPic, LLC, No. 2017-2081, slip op. at 2 (Fed. Cir. May 15, 2018).
Simply reciting specific limitations that narrow the abstract idea does not make an abstract idea non-abstract. 79 Fed. Reg. 74631; buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014); see SAP America at p. 12. As discussed in SAP America, no matter how much of an advance the claims recite, when “the advance lies entirely in the realm of abstract ideas, with no plausibly alleged innovation in the non-abstract application realm,” “[a]n advance of that nature is ineligible for patenting.” Id. at p. 3.
Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015).
Response to Arguments – Prior Art
Applicant’s arguments with respect to the rejections have been fully considered, but they are not persuasive. Moreover, Applicant's remarks are moot in light of the new grounds of rejection necessitated by Applicant's amendments.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 15, 16 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant(s), regards as the invention.
Claims 15 and 16 recite “a machine learning model”; it is unclear whether these elements refer to “using a machine learning model” in claim 12. Appropriate correction is required.
Claim 18 recites “the machine learning comprises”; it is unclear whether this element refers to “using a machine learning model” in claim 12. Further, “the machine learning” lacks antecedent basis. Appropriate correction is required.
Claim Rejections – 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-5, 7-12 and 14-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 1 (and similarly claims 12 and 20) recites: A method of automatically generating evaluation forms from interaction recordings, the method comprising:
identifying one or more interaction intents from an interaction transcript of an interaction recording;
identifying one or more new interaction intents within the one or more interaction intents by detecting the one or more identified interaction intents that has not been identified in an interaction intent database;
generating one or more evaluation categories for the one or more new interaction intents using …;
generating evaluation questions for the one or more evaluation categories using …; and
generating an evaluation form by updating evaluation questions in previously generated evaluation forms based on the generated evaluation questions for the evaluation categories for the new interaction intents which have been identified in the interaction recording recorded subsequent to prior creation of the updated evaluation questions.
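For illustration only, the recited steps can be summarized as the following minimal sketch (hypothetical Python; the keyword table and helper functions are placeholders standing in for the claimed machine-learning operations, not Applicant's disclosed implementation):

# Illustration only: a minimal, hypothetical sketch of the recited steps.
# The "machine learning" calls are replaced by placeholders; nothing here
# is Applicant's disclosed implementation.

INTENT_KEYWORDS = {"billing": "billing and payments", "speed": "technical support"}

def identify_intents(transcript: str) -> set[str]:
    # Stands in for "identifying one or more interaction intents".
    return {intent for kw, intent in INTENT_KEYWORDS.items() if kw in transcript.lower()}

def ml_generate_categories(intent: str) -> list[str]:
    # Stands in for "generating one or more evaluation categories ... using machine learning".
    return [f"{intent}: account verification", f"{intent}: follow-up and next steps"]

def ml_generate_questions(category: str) -> list[str]:
    # Stands in for "generating evaluation questions ... using machine learning".
    return [f"Did the agent adequately address '{category}'?"]

def generate_evaluation_form(transcript, intent_db, previous_form):
    intents = identify_intents(transcript)
    # "New" intents are those not already identified in the intent database.
    new_intents = intents - intent_db
    new_questions = [q for i in new_intents
                     for c in ml_generate_categories(i)
                     for q in ml_generate_questions(c)]
    # Update the previously generated form rather than rebuilding from scratch.
    return previous_form + new_questions

print(generate_evaluation_form("Customer: I have a billing question.",
                               intent_db={"shipping and delivery"},
                               previous_form=["Did the agent greet the customer?"]))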
Analyzing under Step 2A, Prong 1:
The limitations regarding, … identifying one or more interaction intents from an interaction transcript of an interaction recording; identifying one or more new interaction intents within the one or more interaction intents by detecting the one or more identified interaction intents that has not been identified in an interaction intent database; generating one or more evaluation categories for the one or more new interaction intents using …; generating evaluation questions for the one or more evaluation categories using …; and generating an evaluation form by updating evaluation questions in previously generated evaluation forms based on the generated evaluation questions for the evaluation categories for the new interaction intents which have been identified in the interaction recording recorded subsequent to prior creation of the updated evaluation questions.…, under the broadest reasonable interpretation, can include a human using their mind and using pen and paper to perform the identified limitations; therefore, the claims are directed to a mental process.
Further, …identifying one or more interaction intents from an interaction transcript of an interaction recording; identifying one or more new interaction intents within the one or more interaction intents by detecting the one or more identified interaction intents that has not been identified in an interaction intent database; generating one or more evaluation categories for the one or more new interaction intents using …; generating evaluation questions for the one or more evaluation categories using …; and generating an evaluation form by updating evaluation questions in previously generated evaluation forms based on the generated evaluation questions for the evaluation categories for the new interaction intents which have been identified in the interaction recording recorded subsequent to prior creation of the updated evaluation questions…, under the broadest reasonable interpretation, describe a human generating questions for evaluation forms to evaluate other humans; this constitutes managing personal behavior or relationships or interactions between people. Thus, the claims are directed to certain methods of organizing human activity.
Accordingly, the claims are directed to a mental process and to certain methods of organizing human activity; thus, the claims are directed to an abstract idea under the first prong of Step 2A.
Analyzing under Step 2A, Prong 2:
This judicial exception is not integrated into a practical application under the second prong of Step 2A.
In particular, the claims recite the additional elements beyond the recited abstract idea identified under Step 2A, Prong 1, such as:
Claims 1, 12, 20: machine learning; a system for generating evaluation forms from interaction recordings, the system comprising: a computing device; a memory; and a processor, the processor configured to perform the recited steps
Claims 9, 18: a large language model
Pursuant to the broadest reasonable interpretation, as an ordered combination, each of these additional elements is a computing element recited at a high level of generality implementing the abstract idea, and thus amounts to no more than applying the abstract idea with generic computer components. Further, these additional elements generally link the abstract idea to a technical environment, namely the environment of a computer.
Additionally, with respect to “identifying…” and “generating…”, these elements do not add meaningful limitations to integrate the abstract idea into a practical application because they are extra-solution activity (pre- and post-solution activity), i.e., data gathering (“identifying…”) and data output (“generating…”).
Analyzing under Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.
As noted above, the aforementioned additional elements beyond the recited abstract idea are not sufficient to amount to significantly more than the recited abstract idea because, as an ordered combination, the additional elements are no more than mere instructions to implement the idea using generic computer components (i.e., “apply it”).
Additionally, as an ordered combination, the additional elements append the recited abstract idea to well-understood, routine, and conventional activities in the field, as individually evinced by Applicant’s own disclosure, as required by the Berkheimer Memo, in at least:
[0041] Before at least one embodiment of the invention is explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is applicable to other embodiments that may be practiced or carried out in various ways as well as to combinations of the disclosed embodiments. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
[0042] Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, “enhancing” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system’s registers and/or memories into other data similarly represented as physical quantities within the computing system’s memories, registers or other such information storage, transmission or display devices. Any of the disclosed modules or units may be at least partially implemented by a computer processor.
[0056] In practice, a NN, or NN learning, can be simulated by one or more computing nodes or cores, such as generic central processing units (CPUs, e.g., as embodied in personal computers) or graphics processing units (GPUs such as provided by Nvidia Corporation), which can be connected by a data network. A NN can be modelled as an abstract mathematical object and translated physically to CPU or GPU as for example a sequence of matrix operations where entries in the matrix represent neurons (e.g., artificial neurons connected by edges or links) and matrix functions represent functions of the NN.
[0059] FIG. 1 shows a high-level block diagram of an exemplary computing device which may be used with embodiments of the present invention. Computing device 100 may include a controller or processor 105 that may be, for example, a central processing unit processor (CPU), a chip or any suitable computing or computational device, an operating system 115, a memory 120, a storage 130, input devices 135 and output devices 140 such as a computer display or monitor displaying for example a computer desktop system. Each of modules and equipment and other devices and modules discussed herein, e.g. computing device 202, agent device 210, customer device 220, user device 230, a intent identifier 308, section identifier 316, form generator service 322 or a form designer service 318, and modules in FIGS. 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 may be or include, or may be executed by, a computing device such as included in FIG. 1 although various units among these modules may be combined into one computing device.
[0063] Embodiments of the invention may include one or more article(s) (e.g. memory 120 or storage 130) such as a computer or processor non-transitory readable medium, or a computer or processor non-transitory storage medium, such as for example a memory, a disk drive, or a USB flash memory, encoding, including or storing instructions, e.g., computer-executable instructions, which, when executed by a processor or controller, carry out methods disclosed herein.
[0064] FIG. 2 is a schematic drawing of a system 200 according to some embodiments of the invention. System 200 may include a computing device 202 including a processor 203 and storage 204. Computing agent device 202 may be connected to an agent device 210 that includes processor 211. Computing device 202 may be connected to a server 220 including processor 221. Computing device 202 may be connected to a user device 230 including processor 231. Server 220 and Agent device 210 may provide computing device 202 with interaction recordings. Alternatively, interaction recordings may be stored in storage 204 of computing device 202.
[0065] Computing devices 100, 202, 210, 220 and 230 may be servers, personal computers, desktop computers, mobile computers, laptop computers, and notebook computers or any other suitable device such as a cellular telephone, personal digital assistant (PDA), video game console, etc., and may include wired or wireless connections or modems. Computing devices 100, 202, 210, 220 and 230 may include one or more input devices, for receiving input from a user (e.g., via a pointing device, click-wheel or mouse, keys, touch screen, recorder/microphone, or other input components). Computers 100, 202, 210, 220 and 230 may include one or more output devices (e.g., a monitor, screen, or speaker) for displaying or conveying data to a user.
[0066] Any computing devices of FIGs. 1 and 2 (e.g., 100, 202, 210, 220 and 230), or their constituent parts, may be configured to carry out any of the methods of the present invention. Any computing devices of Figs. 1 and 2, or their constituent parts, may include a intent identifier 308, section identifier 316, form generator service 322 or a form designer service 318, or another engine or module, which may be configured to perform some or all of the methods of the present invention. Systems and methods of the present invention may be incorporated into or form part of a larger platform or a system/ecosystem, such as agent management platforms. The platform, system, or ecosystem may be run using the computing devices of FIGs. 1 and 2, or their constituent parts. For example, a processor such as processor 203 of computing device 202 processor 211 of device 210, and/or processor 221 of computing device 220 may be configured to identify one or more interaction intents, such as “customer inquiry”, “order placement”, technical support”, billing and payments”, “product information”, “shipping and delivery” from an interaction transcript. A processor such as processor 203 of computing device 202 processor 211 of device 210, and/or processor 221 of computing device 220 may be configured to generate one or more evaluation categories for the one or more interaction intents using machine learning. For example, a processor such as processor 203, 211 and/or 221 may be configured to create at least one evaluation category for the at least one interaction intent using machine learning. A processor such as processor 203 of computing device 202 processor 211 of device 210, and/or processor 221 of computing device 220 may be configured to generate evaluation questions for the one or more evaluation categories using machine learning. For example, a language model may be configured to create evaluation questions for the at least one evaluation category using machine learning, such as “customer information”, “account verification”, “inquiry details”, “follow-up and next steps”. A processor such as processor 203 of computing device 202 processor 211 of device 210, and/or processor 221 of computing device 220 may be configured to provide an evaluation form based on the evaluation questions. For example, a processor such as processor 203, 211 and/or 221 may be configured to generate an evaluation form based on the evaluation questions that includes evaluation questions such as: “Did the agent perform a test to measure the internet speed?” or “Has the agent verified the current internet speed by conducting a test?”. A processor such as processor 203 of computing device 202 may be configured to receive interaction recordings, e.g. interaction transcripts, that are stored, e.g. in storage of a user device 230, an agent device 210 or server 220. A processor such as processor 203 of computing device 202 may be configured to record interactions, e.g. phone calls, text chats, etc., to generate interaction recordings, e.g. interaction transcripts. User device 230 may be a computing device, e.g. computing device 100, that is used by a user such as a supervisor of an agent or an evaluator of an agent of a contact center. For example, agent X using agent device 210 may be in a text-based chat with customer Y. Processor 211 of agent device 210 may execute an interaction recording service that records the interaction between agent X and customer Y. 
When the interaction ends, an interaction recording between agent X and customer Y may be stored, e.g. in storage of agent device 210 or may be sent to storage 204 of computing device 202.
[0068] An artificial intelligence platform, e.g. service 306, may be integrated into embodiments of the present invention. This service can be used for various purposes: generating one or more intents from interaction transcripts, generating possible sections/categories that must be given importance to for evaluating interaction of given intent, generating different types of questions for each category, e.g. each identified section, and generating re-phrased alternate questions for given question. For example, service 306 may be an artificial intelligence platform such as Azure Open AI by Open AI but other AI models or services known in the art may be used.
[0073] A QM web application 320 may be an application that provides quality management application features to contact centers. One of the features of this application is the evaluation form management. Web application 320 may automatically prompt or request a designer service to generate evaluation forms. A form designer service may forward a request or a prompt for the generation of an evaluation form to a form generator service, which generates an evaluation form, for example using generative artificial intelligence (AI) by providing prompts to a Large Language Model (LLM), e.g. using a Chat Generative Pre-trained Transformer (ChatGPT) developed by OpenAI. An LLM can be hosted in a data cloud or can be executed by a processor of any computing device of a contact center, e.g. computing device 100 or 202. Web application 320 may present a user, e.g. supervisors and evaluators of an agent with intents and respective sections to choose from during this process. In response, form designer service may automatically return generated forms with given intent and section. This form is then rendered on user interface by QM web application.
[0144] Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described.
[0145] Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks.
[0146] The term “method” may refer to manners, means, techniques and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques and procedures either known to, or readily developed from known manners, means, techniques and procedures by practitioners of the art to which the invention belongs.
[0147] The descriptions, examples and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only.
[0148] Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined.
[0149] The present invention may be implemented in the testing or practice with materials equivalent or similar to those described herein.
[0150] While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other or equivalent variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.
Furthermore, as an ordered combination, these elements amount to generic computer components receiving or transmitting data over a network, performing repetitive calculations, electronic record keeping, and storing and retrieving information in memory, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d).
Moreover, the remaining elements of dependent claims do not transform the recited abstract idea into a patent eligible invention because these remaining elements merely recite further abstract limitations that provide nothing more than simply a narrowing of the abstract idea recited in the independent claims.
Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components to “apply” the recited abstract idea, perform insignificant extra-solution activity, and generally link the abstract idea to a technical environment. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claim as a whole amounts to significantly more than the abstract idea itself. Since there are no limitations in these claims that transform the exception into a patent eligible application such that these claims amount to significantly more than the exception itself, claims 1, 3-5, 7-12 and 14-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections – 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3-5, 7-12 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication US20250124022A1 to GHOSH et al. (hereinafter “GHOSH”) in view of US Patent Publication US20250190708A1 to Mathew (hereinafter “Mathew”).
As per Claim 1, GHOSH teaches: (Currently Amended) A method of automatically generating evaluation forms from interaction recordings, the method comprising:
identifying one or more interaction intents from an interaction transcript …; (in at least [0018] FIG. 1 illustrates a form building workflow 100 using an example generative artificial intelligence form builder 102. The form building workflow 100 presents a user interface object 104 including a form builder user prompt input field 106, which accepts a user input allowing the user to specify the kind of form they wish to generate and providing a user prompt 108 directed to that intent. In the illustrated example, the user prompt states that the user wishes to “Create an employee satisfaction survey. [0019] The user prompt 108 is submitted to the generative artificial intelligence form builder 102, which generates a renderable form 110 with various form prompt items (see, e.g., the text of a form prompt item 112) and form response items (see, e.g., the text of a form response item 114). [0029] FIG. 2 illustrates a form refinement workflow 200 using an example generative artificial intelligence form builder 202. Generally, the form refinement workflow 200 is intended to refine an already existing or already generated form. For example, in one implementation, a user wishing to generate a renderable form can run through the workflow of FIG. 1 . After reviewing the renderable form generated by the workflow of FIG. 1 , the user may decide that the language is too formal for the user's intended audience. Alternatively, the user may import a previously generated form or another form template into the generative artificial intelligence form builder 202 with the desire to refine the existing form. In either case, the user can provide a refinement instruction to refine the original renderable form into a refined renderable form. [0031] The refinement instruction 208 is submitted to the generative artificial intelligence form builder 202, which generates a refined renderable form 212 with various refined form prompt items. The previously input user prompt and system-provided prompts may be resubmitted to the generative artificial intelligence form builder 202, or the generative artificial intelligence form builder 202 may cache these inputs from the previous iteration. [0041] An intent classifier 410 of the input processing system 406 receives the user prompt 402 and evaluates the user prompt 402 to classify the intent of the user prompt 402. In one implementation, a large language model (LLM) inputs the user prompt 402 and predicts the intent of the user prompt 402 for the purposes of identifying system-provided prompts to submit to a generative AI model 412 that generates form items for the renderable form (represented by a generated form schema 414). Given the user prompt 402, the task of predicting an intent (represented by a text-class label) to the user prompt 402 is transformed to generating a predefined textual response (e.g., positive, negative, etc.) conditioned on the user prompt 402 using the large language model. This example implementation may be termed prompt-based in-context learning. In such an implementation, the text-class label represents the intent discerned by the LLM for the user prompt 402. [0048] a dynamic prompt assistant to increase the effectiveness of user prompts. 
In one such implementation, the input validator receives a user prompt and generates a custom-designed system-provided prompt that is sent with the user prompt to a generative artificial intelligence model to generate a set of follow-up questions that may be helpful in collecting more relevant contextual information from the user. The output of the generative artificial intelligence model can present the follow-up questions (e.g., through a forms web client) in an attempt to solicit supplemental input information and/or corrective input information that is expected to enhance the performance of the generative artificial intelligence form builder 400 and the quality of the generated outcomes.)
identifying one or more new interaction intents within the one or more interaction intents by detecting the one or more identified interaction intents that has … been identified in an interaction intent database; (in at least [0058] FIG. 6 illustrates example operations 600 for building a renderable form using a generative artificial intelligence form builder. A classifying operation 602 classifies an intent based on a received prompt. In one implementation, an intent classifier employs a large language model to determine the intent of a user prompt, although other techniques may be used to determine an intent (e.g., a look-up table, a similarity measurement, and Retrieval Augmented Generation (RAG)). For example, a similarity measurement may be employed to look-up and categorize user prompts into different intent categories or “buckets.” [0060] A prompt identifying operation 604 identifies system-provided prompts based on the intent. For example, a system prompt constructor can look-up system-provided prompts in a prompt template library or generate system-provided prompts using a generative artificial intelligence model. In one implementation, the prompt identifying operation 604 searches a prompt template library based on the intent and identifies the system-provided prompts that correspond to the intent. In another implementation, the prompt identifying operation 604 generates the system-provided prompts based on the intent using a generative artificial intelligence model. Other methods of developing system-provided prompts corresponding to the intent may be employed.)
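For illustration only, the look-up/similarity-based intent bucketing GHOSH describes in [0058] might operate along the following lines (hypothetical Python; the bag-of-words embedding and the example bucket library are assumptions, not GHOSH's disclosed model):

# Hypothetical sketch of similarity-based intent bucketing per GHOSH [0058];
# the embedding and the bucket library are assumptions for illustration.
import math

def embed(text: str) -> dict[str, int]:
    # Placeholder embedding: simple bag-of-words token counts.
    counts: dict[str, int] = {}
    for tok in text.lower().split():
        counts[tok] = counts.get(tok, 0) + 1
    return counts

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

INTENT_BUCKETS = {
    "employee survey": "Create an employee satisfaction survey",
    "customer feedback": "Collect customer feedback about a product",
}

def classify_intent(user_prompt: str) -> str:
    # Categorize the user prompt into the closest intent "bucket".
    scores = {intent: cosine(embed(user_prompt), embed(example))
              for intent, example in INTENT_BUCKETS.items()}
    return max(scores, key=scores.get)

print(classify_intent("Please create a survey about employee satisfaction"))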
generating one or more evaluation categories for the one or more new interaction intents using machine learning; (in at least [0058] FIG. 6 illustrates example operations 600 for building a renderable form using a generative artificial intelligence form builder. A classifying operation 602 classifies an intent based on a received prompt. In one implementation, an intent classifier employs a large language model to determine the intent of a user prompt, although other techniques may be used to determine an intent (e.g., a look-up table, a similarity measurement, and Retrieval Augmented Generation (RAG)). For example, a similarity measurement may be employed to look-up and categorize user prompts into different intent categories or “buckets.” [0059] Retrieval Augmented Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM), like ChatGPT, by adding an information retrieval system that provides the data. Adding an information retrieval system gives a developer and/or user control over the data used by an LLM when it formulates a response. For an enterprise solution, RAG architecture means that natural language processing can be constrained to intended content (e.g., an enterprise's proprietary content) sourced from vectorized documents, images, audio, and video. [0034] FIG. 3 illustrates a multi-section form 300 generated using an example generative artificial intelligence form builder. The multi-section form 300 is an example of a renderable form that has been instrumented according to formatting parameters, although other formatting parameters may be created and applied. For example, a generative artificial intelligence model receives various prompts and instructions to output form prompt items and form response items. The generative artificial intelligence model can also identify and/or generate format items that include formatting parameters, such as section parameters, text formatting parameters, form formatting parameters, formatting controls (e.g., filtering or sorting controls), and other formatting parameters.)
generating evaluation questions for the one or more evaluation categories using machine learning; and (in at least [0019] The user prompt 108 is submitted to the generative artificial intelligence form builder 102, which generates a renderable form 110 with various form prompt items (see, e.g., the text of a form prompt item 112) and form response items (see, e.g., the text of a form response item 114). In the illustrated example, the form response item 114 is a dynamic object configured to receive input via a user interface and communicate a user's response back to another process to collect, analyze, summarize, communicate, and/or present the responses. In some applications, the form prompt item 112 may also be a dynamic object (e.g., capable of annotation, dynamic formatting, visual effects). [0032] When comparing the renderable form 110 from FIG. 1 to the refined renderable form 212 of FIG. 2 , it is apparent that the generative artificial intelligence form builder 202 has modified the form items to be less formal. For example, the text of the form prompt item 112 from FIG. 1 has been changed from “How satisfied are you with your work environment?” to “How do you like your work environment?” in form prompt item 214, and the text of form response item 114 has been changed from “Very dissatisfied ⋆⋆⋆⋆⋆ Very satisfied” to “Hate it ⋆⋆⋆⋆⋆ Love it” in form response item 216. Also, note that the preliminary text 218 has been modified to be less formal compared to that of FIG. 1. [0047] the authoring user may review the generated form as it is rendered in a user interface and desire to refine the form to change the number of questions, to change the tone (e.g., more formal/informal), to obtain a different set of question, to change the format of one or more questions (e.g., changing a question from multiple choice to short answer), etc. Accordingly, the authoring user can iterate back to the input phases and specify certain refinements to the form (see, e.g., the workflow illustrated in FIG. 2 ). Whether by selecting refinement options or by entering specific refinement instructions, the refinement instructions are used to annotate the user prompt 402. The annotated user prompt is then input to the generative artificial intelligence form builder 400 through the input interface 404, and the generative artificial intelligence form builder 400 generates a refined form using the annotated user prompt and a refined set of system-provided prompts. In this iterative manner, an authoring user can tune the resulting form to satisfy both substantive and formatting objectives.)
generating an evaluation form by updating evaluation questions in previously generated evaluation forms based on the generated evaluation questions for the evaluation categories for the new interaction intents which have been identified in the … recorded subsequent to prior creation of the updated evaluation questions. (in at least [0019] The user prompt 108 is submitted to the generative artificial intelligence form builder 102, which generates a renderable form 110 with various form prompt items (see, e.g., the text of a form prompt item 112) and form response items (see, e.g., the text of a form response item 114). In the illustrated example, the form response item 114 is a dynamic object configured to receive input via a user interface and communicate a user's response back to another process to collect, analyze, summarize, communicate, and/or present the responses. In some applications, the form prompt item 112 may also be a dynamic object (e.g., capable of annotation, dynamic formatting, visual effects). [0029] Generally, the form refinement workflow 200 is intended to refine an already existing or already generated form. For example, in one implementation, a user wishing to generate a renderable form can run through the workflow of FIG. 1 . After reviewing the renderable form generated by the workflow of FIG. 1 , the user may decide that the language is too formal for the user's intended audience. Alternatively, the user may import a previously generated form or another form template into the generative artificial intelligence form builder 202 with the desire to refine the existing form. In either case, the user can provide a refinement instruction to refine the original renderable form into a refined renderable form. [0030] in FIG. 2 , the form refinement workflow 200 presents a user interface object 204 including a form builder refinement input field 206, which accepts a user input allowing the user to specify the refinement they wish to apply. In the illustrated example, the user has input a refinement instruction 208 to “Make the form less formal.” Other available options are listed along location 210, although other refinement instructions may also be employed, whether through the selection of predefined instructions, text input, or other methods. [0031] The refinement instruction 208 is submitted to the generative artificial intelligence form builder 202, which generates a refined renderable form 212 with various refined form prompt items. The previously input user prompt and system-provided prompts may be resubmitted to the generative artificial intelligence form builder 202, or the generative artificial intelligence form builder 202 may cache these inputs from the previous iteration. [0034] FIG. 3 illustrates a multi-section form 300 generated using an example generative artificial intelligence form builder. The multi-section form 300 is an example of a renderable form that has been instrumented according to formatting parameters, although other formatting parameters may be created and applied. For example, a generative artificial intelligence model receives various prompts and instructions to output form prompt items and form response items. The generative artificial intelligence model can also identify and/or generate format items that include formatting parameters, such as section parameters, text formatting parameters, form formatting parameters, formatting controls (e.g., filtering or sorting controls), and other formatting parameters. 
[0062] the building of a renderable form using a generative artificial intelligence form builder may also include an input validating operation that validates the received prompt for compliance with a predefined policy prior to inputting the system-provided prompts and the received prompt to the generative artificial intelligence model.)
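For illustration only, GHOSH's refinement loop ([0029]-[0032]) can be sketched as a pass that rewrites an existing form per a refinement instruction (hypothetical Python; the lookup table stands in for the generative model's "less formal" rewriting and is not the reference's model):

# Hypothetical sketch of GHOSH's form refinement workflow ([0029]-[0032]);
# the rewrite table is an assumption standing in for the generative model.
INFORMAL = {"How satisfied are you with": "How do you like",
            "Very dissatisfied": "Hate it", "Very satisfied": "Love it"}

def refine_form(form_items: list[str], instruction: str) -> list[str]:
    # A generative model would apply the refinement instruction; here a
    # lookup table stands in for "Make the form less formal."
    if instruction == "Make the form less formal":
        refined = []
        for item in form_items:
            for formal, casual in INFORMAL.items():
                item = item.replace(formal, casual)
            refined.append(item)
        return refined
    return form_items

form = ["How satisfied are you with your work environment?",
        "Very dissatisfied ***** Very satisfied"]
print(refine_form(form, "Make the form less formal"))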
Although implied, GHOSH does not expressly disclose the following limitations, which, however, are taught by Mathew:
identifying one or more interaction intents from an interaction transcript of an interaction recording (in at least [0014] a conversation log 106 may capture, include a summary, or a transcript of interactions between input received from user 110 directed to DA 104 and input from DA 104 directed to user 110. These communications may include audible and/or textual communications. With audible communications, the conversation log 106 may include a sound-to-text capture of the words or sounds received from the user 110, as well as the text or transcript of what was audibly output by DA 104.)
identifying one or more new interaction intents within the one or more interaction intents by detecting the one or more identified interaction intents that has not been identified in an interaction intent database; (in at least [0026] a single DA 104 may be configured to fulfill a single intent 120, or multiple different intents 120. For simplicity, only a single intent 120 is illustrated, however it is understood that the DA 104 may be configured to fulfill multiple intents 120. In some embodiments, a categorizer 122 may determine into which category 124 each fallback convo 114 is to be categorized or grouped. In some embodiments, a category 124 may correspond to an intent 120. For example, DA 104 may be configured to help users 110 with medical issues, and may include three intents 120: 1) medicine or prescription refill 2) transportation and 3) doctor appointments. The fallback convos 114 may include interactions from users 110 that are directed to any of the intents 120, and/or may include other requests not covered by the existing intents 120. [0032] to identify which of the existing intents 120, if any, may need to be updated or improved to address the fallback 116 in the fallback convos 114, categorizer 122 may divide the fallback convos 114 into one of four categories, which may include any of the existing three intents 120 of 1) medicine or prescription refill 2) transportation and 3) doctor appointments (in continuing the example above), or a fourth category of “other”, indicating interactions or fallback convos 114 that do not fall into any one of the existing intents 120. In other embodiments, if there are five intents 120, then categorizer 122 may use six possible categories 124. In some embodiments, a developer may provide a list of the existing intents 120 to DIS 102. [0034] there may be 100 fallback convos 114 in the ‘other’ category 124, and LM 126 may determine that 21 of those fallback convos 114 are related to medical device issues (which is not related to an existing intent 120) based on the text of what the user 110 is inputting for the DA 104. LM 126 may further determine that the threshold 130 for a new intent 128 is 20, in which case, LM 126 may identify medical device as a new intent 128. The other 79 fallback convos 114 that remain in the other category 124 may not fit into any detectable pattern or any pattern that exceeds threshold 130.)
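For illustration only, the threshold-based new-intent detection Mathew describes in [0032]-[0034] (e.g., 21 "medical device" fallback conversations exceeding a threshold of 20, yielding a new intent) can be sketched as follows (hypothetical Python; the topic labels stand in for LM 126's pattern detection):

# Hypothetical sketch of Mathew's threshold-based new-intent detection
# ([0032]-[0034]); topic labels stand in for LM 126's pattern finding.
from collections import Counter

EXISTING_INTENTS = {"prescription refill", "transportation", "doctor appointments"}
NEW_INTENT_THRESHOLD = 20  # per Mathew's example in [0034]

def detect_new_intents(fallback_topics: list[str]) -> set[str]:
    # Count fallback conversations per topic falling outside existing intents
    # (the "other" category 124 of Mathew).
    other = Counter(t for t in fallback_topics if t not in EXISTING_INTENTS)
    # A topic becomes a new intent once its count exceeds threshold 130.
    return {topic for topic, n in other.items() if n > NEW_INTENT_THRESHOLD}

topics = ["medical device"] * 21 + ["misc"] * 79
print(detect_new_intents(topics))  # {'medical device'}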
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the teachings of GHOSH with the aforementioned teachings of Mathew, with a reasonable expectation of success in arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make this modification to the teachings of GHOSH with the motivation of, …help improve the responsiveness and effectiveness of a digital assistant (DA) 104.…identify, generate, propose a solution 108 to improve the functionality of the DA 104…received as feedback that may be used to improve the efficiency of processing subsequent conversation logs 106 and generating subsequent solutions 108, as well as improving the functionality of the DA 104…to identify which of the existing intents 120, if any, may need to be updated or improved to address the fallback 116 in the fallback convos 114…to improve the functionality of DA 104 with respect to any of the detected fallbacks 116 in the fallback convos 114. In some embodiments, solution 108 may include an indication as to which fallback convos 114 belong to which categories 124, in some embodiments, solution 108 may include a pointer, link, or transcript of the relevant fallback convos 114. From this solution 108, a developer may quickly see and decide which fallback convos 114 to use to address or improve DA 104…a feedback engine 136 may receive the response 134 and use the response 134 to improve the processing of DIS 102 and/or LM 126…, as recited in Mathew.
As per Claim 3, GHOSH teaches: (Currently Amended) A method according to claim 1,
wherein the one or more evaluation categories are generated from the new interaction intents identified in the interaction transcript. (in at least [0029] FIG. 2 illustrates a form refinement workflow 200 using an example generative artificial intelligence form builder 202. Generally, the form refinement workflow 200 is intended to refine an already existing or already generated form. For example, in one implementation, a user wishing to generate a renderable form can run through the workflow of FIG. 1 . After reviewing the renderable form generated by the workflow of FIG. 1 , the user may decide that the language is too formal for the user's intended audience. Alternatively, the user may import a previously generated form or another form template into the generative artificial intelligence form builder 202 with the desire to refine the existing form. In either case, the user can provide a refinement instruction to refine the original renderable form into a refined renderable form [0032] When comparing the renderable form 110 from FIG. 1 to the refined renderable form 212 of FIG. 2 , it is apparent that the generative artificial intelligence form builder 202 has modified the form items to be less formal. For example, the text of the form prompt item 112 from FIG. 1 has been changed from “How satisfied are you with your work environment?” to “How do you like your work environment?” in form prompt item 214, and the text of form response item 114 has been changed from “Very dissatisfied ⋆⋆⋆⋆⋆ Very satisfied” to “Hate it ⋆⋆⋆⋆⋆ Love it” in form response item 216. Also, note that the preliminary text 218 has been modified to be less formal compared to that of FIG. 1 .)
As per Claim 4, GHOSH teaches: (Currently Amended) A method according to claim 1,
wherein identifying one or more interaction intents comprises providing a machine learning model with an intent identification prompt comprising a part of the interaction transcript. (in at least [0018] FIG. 1 illustrates a form building workflow 100 using an example generative artificial intelligence form builder 102. The form building workflow 100 presents a user interface object 104 including a form builder user prompt input field 106, which accepts a user input allowing the user to specify the kind of form they wish to generate and providing a user prompt 108 directed to that intent. In the illustrated example, the user prompt states that the user wishes to “Create an employee satisfaction survey. [0019] The user prompt 108 is submitted to the generative artificial intelligence form builder 102, which generates a renderable form 110 with various form prompt items (see, e.g., the text of a form prompt item 112) and form response items (see, e.g., the text of a form response item 114). [0029] FIG. 2 illustrates a form refinement workflow 200 using an example generative artificial intelligence form builder 202. Generally, the form refinement workflow 200 is intended to refine an already existing or already generated form. For example, in one implementation, a user wishing to generate a renderable form can run through the workflow of FIG. 1 . After reviewing the renderable form generated by the workflow of FIG. 1 , the user may decide that the language is too formal for the user's intended audience. Alternatively, the user may import a previously generated form or another form template into the generative artificial intelligence form builder 202 with the desire to refine the existing form. In either case, the user can provide a refinement instruction to refine the original renderable form into a refined renderable form)
As per Claim 5, GHOSH teaches: (Currently Amended) A method according to claim 1,
wherein generating one or more evaluation categories comprises providing a machine learning model with a category generation prompt comprising the one or more identified intents. (in at least [0034] FIG. 3 illustrates a multi-section form 300 generated using an example generative artificial intelligence form builder. The multi-section form 300 is an example of a renderable form that has been instrumented according to formatting parameters, although other formatting parameters may be created and applied. For example, a generative artificial intelligence model receives various prompts and instructions to output form prompt items and form response items. The generative artificial intelligence model can also identify and/or generate format items that include formatting parameters, such as section parameters, text formatting parameters, form formatting parameters, formatting controls (e.g., filtering or sorting controls), and other formatting parameters. [0035] in FIG. 3 , the multi-section form 300 includes two section items (section item 302 and section item 304) separating related groups of form prompt items and form response items. Section dividers help organize a long form to make the form more user-friendly. The section items can be represented in the renderable form, as formatting instructions in JSON or in other renderable form formats. [0036] formatting parameters in system-provided prompts, a user prompt, and/or refinement instructions can trigger the generative artificial intelligence model to output formatting items along with the form prompt items and form response items. For example, a system-provided prompt may specify splitting up less related format items into different sections or limiting the number of format items per section to a specified number in an effort to make a longer form more accessible/understandable to a user. Formatting may include section titles, section descriptions, and other formatting parameters (e.g., fonts, font sizes, paragraph formatting, form themes, language).)
As per Claim 7, GHOSH teaches: (Currently Amended) A method according to claim 1,
wherein generating evaluation questions comprises providing a machine learning model with a question generation prompt comprising the one or more identified intents, the one or more evaluation categories, and a number setting a limit for generated evaluations questions for each of the one or more evaluation categories. (in at least [0034] FIG. 3 illustrates a multi-section form 300 generated using an example generative artificial intelligence form builder. The multi-section form 300 is an example of a renderable form that has been instrumented according to formatting parameters, although other formatting parameters may be created and applied. For example, a generative artificial intelligence model receives various prompts and instructions to output form prompt items and form response items. The generative artificial intelligence model can also identify and/or generate format items that include formatting parameters, such as section parameters, text formatting parameters, form formatting parameters, formatting controls (e.g., filtering or sorting controls), and other formatting parameters. [0047] the authoring user may review the generated form as it is rendered in a user interface and desire to refine the form to change the number of questions, to change the tone (e.g., more formal/informal), to obtain a different set of question, to change the format of one or more questions (e.g., changing a question from multiple choice to short answer), etc. Accordingly, the authoring user can iterate back to the input phases and specify certain refinements to the form (see, e.g., the workflow illustrated in FIG. 2 ). Whether by selecting refinement options or by entering specific refinement instructions, the refinement instructions are used to annotate the user prompt 402. The annotated user prompt is then input to the generative artificial intelligence form builder 400 through the input interface 404, and the generative artificial intelligence form builder 400 generates a refined form using the annotated user prompt and a refined set of system-provided prompts. In this iterative manner, an authoring user can tune the resulting form to satisfy both substantive and formatting objectives.)
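For illustration only, a question-generation prompt of the kind claim 7 recites, bundling the identified intents, the evaluation categories, and a per-category question limit, might be assembled as follows (hypothetical Python; the prompt wording is an assumption, not the claimed or cited text):

# Hypothetical sketch of a question-generation prompt per claim 7;
# the template text is an assumption for illustration only.
def build_question_prompt(intents, categories, max_questions_per_category):
    return (
        f"For the interaction intents {', '.join(intents)}, "
        f"generate at most {max_questions_per_category} evaluation questions "
        f"for each of the following evaluation categories: {', '.join(categories)}."
    )

prompt = build_question_prompt(
    intents=["technical support"],
    categories=["account verification", "follow-up and next steps"],
    max_questions_per_category=3,
)
print(prompt)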
As per Claim 8, GHOSH teaches: A method according to claim 7,
wherein the question generation prompt comprises a customer domain. (in at least [0001] Form builder applications can be used to collect customer feedback, measure employee satisfaction, improve your product or service, or organize company events. [0039] The input validator 408 evaluates the input (e.g., user prompt, system-provided prompts) to eliminate or reduce harmful content from the prompts passed and aligns the intention of the customer with the forms creation scenario, rather than prompt injection (jail break) or any invalid user prompts. The input validator 408 may employ Azure's Language Detector, Azure Content Moderator, and GuardList, as well as a robust custom-made Forms Intent Classifier.)
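For illustration only, and not part of the record: a minimal sketch of a question generation prompt carrying the identified intents, the evaluation categories, a per-category question limit, and a customer domain, as recited in claims 7-8. All identifiers and placeholder values below are hypothetical assumptions.

```python
# Hypothetical sketch (illustrative only) of a question generation prompt
# that bundles intents, categories, a per-category limit, and a domain.
from typing import List

def build_question_prompt(intents: List[str],
                          categories: List[str],
                          max_questions_per_category: int,
                          customer_domain: str) -> str:
    """Assemble a prompt limiting generated questions per evaluation category."""
    return (
        f"Customer domain: {customer_domain}\n"
        f"Interaction intents: {', '.join(intents)}\n"
        f"Evaluation categories: {', '.join(categories)}\n"
        f"For each category, generate at most {max_questions_per_category} "
        "evaluation questions for grading the interaction."
    )

# Example usage with placeholder values:
prompt = build_question_prompt(
    intents=["billing dispute"],
    categories=["empathy", "resolution"],
    max_questions_per_category=3,
    customer_domain="telecom support",
)
```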
As per Claim 9, GHOSH teaches: (Currently Amended) A method according to claim 1,
wherein the machine learning comprises a large language model. (in at least [0041] An intent classifier 410 of the input processing system 406 receives the user prompt 402 and evaluates the user prompt 402 to classify the intent of the user prompt 402. In one implementation, a large language model (LLM) inputs the user prompt 402 and predicts the intent of the user prompt 402 for the purposes of identifying system-provided prompts to submit to a generative AI model 412 that generates form items for the renderable form (represented by a generated form schema 414). Given the user prompt 402, the task of predicting an intent (represented by a text-class label) to the user prompt 402 is transformed to generating a predefined textual response (e.g., positive, negative, etc.) conditioned on the user prompt 402 using the large language model. This example implementation may be termed prompt-based in-context learning. In such an implementation, the text-class label represents the intent discerned by the LLM for the user prompt 402.)
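For illustration only, and not part of the record: a minimal sketch of the prompt-based in-context learning described in GHOSH [0041], in which an LLM predicts an intent as a predefined textual label conditioned on the user prompt. The label set and the llm callable are assumptions, not taken from the reference.

```python
# Hypothetical sketch (illustrative only) of prompt-based in-context
# intent classification: the LLM emits a predefined text-class label.
from typing import Callable, List

INTENT_LABELS = ["form_creation", "form_refinement", "invalid"]  # assumed labels

def classify_intent(llm: Callable[[str], str], user_prompt: str,
                    labels: List[str] = INTENT_LABELS) -> str:
    """Condition the model on the user prompt and a fixed label set,
    then map the response onto one of the predefined labels."""
    prompt = (
        "Classify the intent of the following request. "
        f"Answer with exactly one of: {', '.join(labels)}.\n"
        f"Request: {user_prompt}"
    )
    response = llm(prompt).strip().lower()
    # Fall back to 'invalid' if the model strays from the label set.
    return response if response in labels else "invalid"
```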
As per Claim 10, GHOSH teaches: A method according to claim 1,
wherein the generated evaluation forms are periodically updated based on interaction intents which have been identified in subsequent interactions. (in at least [0029] FIG. 2 illustrates a form refinement workflow 200 using an example generative artificial intelligence form builder 202. Generally, the form refinement workflow 200 is intended to refine an already existing or already generated form. For example, in one implementation, a user wishing to generate a renderable form can run through the workflow of FIG. 1. After reviewing the renderable form generated by the workflow of FIG. 1, the user may decide that the language is too formal for the user's intended audience. Alternatively, the user may import a previously generated form or another form template into the generative artificial intelligence form builder 202 with the desire to refine the existing form. In either case, the user can provide a refinement instruction to refine the original renderable form into a refined renderable form.)
As per Claim 11, GHOSH teaches: A method according to claim 10,
wherein the updating of generated evaluation forms comprises generating a refinement prompt that comprises evaluation questions and submitting the refinement prompt to a machine learning model. (in at least [0029], reproduced above with respect to claim 10, describing the form refinement workflow 200 in which the user can import a previously generated form and provide a refinement instruction to refine the original renderable form into a refined renderable form.)
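For illustration only, and not part of the record: a minimal sketch combining the periodic updating of claim 10 with the refinement prompt of claim 11, in which the existing form's evaluation questions are annotated with newly identified intents and resubmitted to a model. All identifiers are hypothetical assumptions.

```python
# Hypothetical sketch (illustrative only): refine an existing evaluation
# form by prompting a model with its current questions plus new intents.
from typing import Callable, List

def build_refinement_prompt(current_questions: List[str],
                            new_intents: List[str]) -> str:
    """Annotate existing evaluation questions with newly identified intents."""
    questions = "\n".join(f"- {q}" for q in current_questions)
    intents = ", ".join(new_intents)
    return (
        "Refine the following evaluation questions so they also cover "
        f"these newly identified interaction intents: {intents}\n"
        f"Current questions:\n{questions}\n"
        "Return the updated questions, one per line."
    )

def refine_form(llm: Callable[[str], str],
                current_questions: List[str],
                new_intents: List[str]) -> List[str]:
    """Submit the refinement prompt and parse the updated question list."""
    response = llm(build_refinement_prompt(current_questions, new_intents))
    return [line.strip("- ").strip()
            for line in response.splitlines() if line.strip()]
```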
As per Claims 12 and 14-19, directed to a system (see at least GHOSH [0003]): these claims substantially recite the subject matter of Claims 1, 3-4, and 7-10 and are rejected based on the same reasoning and rationale.
As per Claim 20, directed to a method: this claim substantially recites the subject matter of Claim 1 and is rejected based on the same reasoning and rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PO HAN MAX LEE whose telephone number is (571)272-3821. The examiner can normally be reached on Mon-Thurs 8:00 am - 7:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu can be reached on (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PO HAN LEE/Primary Examiner, Art Unit 3623