DETAILED ACTION
Claims 1-20 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 8-10 and 18-20 are objected to because of the following informalities: the acronym “CRM” in claims 8, 10, 18, and 20 is not defined in the claims and may have multiple meanings. In light of the Specification, “CRM” appears to mean “customer relationship management.” To improve clarity, the intended meaning of “CRM” should be spelled out at least the first time it is recited in each claim set. Claims 9 and 19 inherit this objection. Appropriate correction and/or clarification is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Specifically, the claimed invention is directed to workflow management (Spec: p. 1) without significantly more.
Step 1: Statutory Category?
Yes – The claims fall within at least one of the four categories of patent-eligible subject matter: Process (claims 11-20); Article of Manufacture (claims 1-10).
Independent claims:
Step 2A – Prong 1: Judicial Exception Recited?
Yes – Aside from the additional elements identified in Step 2A – Prong 2 below, the claims recite:
[Claims 1, 11] a. receive as an input, unstructured data;
b. output a request to associate the unstructured data to a workflow selected among a set of workflows, wherein each workflow in the set of workflows is characterized by a series of steps;
c. receive a result of processing an identification of the selected workflow.
Aside from the additional elements, the aforementioned claim details exemplify the abstract idea(s) of a mental process (since the details include concepts performed in the human mind, including an observation, evaluation, judgment, and/or opinion). As explained in MPEP § 2106.04(a)(2)(III), “The courts consider a mental process (thinking) that ‘can be performed in the human mind, or by a human using a pen and paper’ to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, ‘methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas – the “basic tools of scientific and technological work” that are open to all.’ 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)).” The limitations reproduced above, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind and/or with pen and paper but for the recitation of generic computer components. That is, other than reciting the additional elements identified in Step 2A – Prong 2 below, nothing in the claim elements precludes the steps from practically being performed in the mind and/or by a human using a pen and paper.
Aside from the generic processing elements, including a GUI to receive input and present output, a human user could receive unstructured data (including text and audio-related data), output a request to associate the unstructured data to a workflow characterized by a series of steps, present information on a display, and extract workflow data from records. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind (and/or with pen and paper) but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Step 2A – Prong 2: Integrated into a Practical Application?
No – The judicial exception(s) is/are not integrated into a practical application.
Claim 1 recites a non-transitory computer-readable storage medium storing instructions that, when executed by one or more computers, configure the one or more computers to perform the recited steps.
Claim 1 receives input via an input of the one or more computers; outputs, at an interface of the one or more computers, a request to a Generative AI system; and receives from the interface, as a result of processing by the Generative AI system, an identification of the selected workflow.
Claim 11 recites a method executed by one or more computers comprising the recited steps.
Claim 11 receives input via an input of the one or more computers; outputs, at an interface of the one or more computers, a request to a Generative AI system; and receives from the interface, as a result of processing by the Generative AI system, an identification of the selected workflow.
The claims as a whole merely describe how to generally “apply” the abstract idea(s) in a computer environment. The claimed processing elements are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea(s). Simply implementing the abstract idea(s) on a general-purpose processor is not a practical application of the abstract idea(s); Applicant’s specification discloses that the invention may be implemented using general-purpose processing elements and other generic components (Spec: p. 12, including the following statement: “The hardware associated with the CRM software application 12 is not being discussed here in detail because it is an aspect known in the art.”).
The use of a processor/processing elements (e.g., as recited in all of the claims), and of a memory or machine-readable media storing executable instructions, facilitates generic processor operations.
The additional elements are recited at a high level of generality (i.e., as generic processing elements performing generic computer functions) such that the incorporation of the additional processing elements amounts to no more than mere instructions to apply the judicial exception(s) using generic computer components. There is no indication in the Specification that the steps/functions of the claims require any inventive programming or necessitate any specialized or other inventive computer components (i.e., the steps/functions of the claims may be implemented using capabilities of general-purpose computer components). Accordingly, the additional elements do not integrate the abstract ideas into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The processing components presented in the claims simply utilize the capabilities of a general-purpose computer and are, thus, merely tools to implement the abstract idea(s). As discussed in MPEP § 2106.05(a)(I) and § 2106.05(f)(2), the courts have found that accelerating a process, where the increased speed comes solely from the capabilities of a general-purpose computer, is not sufficient to show an improvement in computer functionality; it amounts to a mere invocation of computers or machinery as a tool to perform an existing process (see FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016)).
Considering that the implementation of the machine learning model and/or the training of the model is performed using generic processing elements, such an implementation is presented as a generic recitation of machine learning in the claims and as a general link to technology. The machine learning-based processing elements are simply tools to generally automate the underlying process that could be performed by a human. It is further noted that, as described in Applicant’s Specification, the machine learning operations are generic machine learning operations (Spec: p. 1: “The invention relates to computer implemented methodologies and systems for workflow management utilizing a Generative AI framework, notably one that employs a language model like a Large Language Model (LLM).”; p. 3: “A Generative AI system, also known as a Generative model, is a type of artificial intelligence that is designed to generate new data samples that resemble a given dataset. It is a class of AI models capable of learning the underlying patterns and structures of the training data and then using that knowledge to produce new, synthetic data that resembles the original data distribution.”). The Specification presents no assertion that there is any improvement in the automated machine learning process itself. Such a generic recitation of machine learning, as recited in the claims, is little more than automating an analogous process that can be performed by a human.
There is no transformation or reduction of a particular article to a different state or thing recited in the claims.
Additionally, even when considering the operations of the additional elements as an ordered combination, the ordered combination does not amount to significantly more than what is present in the claims when each operation is considered separately.
Step 2B: Claim(s) Provide(s) an Inventive Concept?
No – The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception(s). As discussed above with respect to integration of the abstract idea(s) into a practical application, the use of the additional elements to perform the steps identified in Step 2A – Prong 1 above amounts to no more than mere instructions to apply the exceptions using a generic computer component(s). Mere instructions to apply an exception using a generic computer component(s) cannot provide an inventive concept. The claims are not patent eligible.
Dependent claims:
Step 2A – Prong 1: Judicial Exception Recited?
Yes – Aside from the additional elements identified in Step 2A – Prong 2 below, the claims recite:
[Claims 4, 14] wherein the unstructured data includes text.
[Claims 5, 15] wherein the request conveys the unstructured data, which includes text.
[Claims 6, 16] wherein the request conveys the set of workflows from which to select the workflow in the set associated with the unstructured data.
[Claims 7, 17] wherein the unstructured data is derived from audio [e.g., audio-related data].
[Claims 8, 18] activating in a CRM record, which includes a plurality of segments associated with one or more of the workflows in the set of workflows, a segment corresponding to the selected workflow.
[Claims 9, 19] a. implementing a display displaying to a user the workflow, allowing the user to validate the selected workflow;
b. in response to user input, activating the segment of the CRM record corresponding to the selected workflow.
[Claims 10, 20] extracting the set of workflows from the CRM record.
The dependent claims further present details of the abstract ideas identified in regard to the independent claims.
Aside from the additional elements, the aforementioned claim details exemplify the abstract idea(s) of a mental process (since the details include concepts performed in the human mind, including an observation, evaluation, judgment, and/or opinion). As explained in MPEP § 2106.04(a)(2)(III), “The courts consider a mental process (thinking) that ‘can be performed in the human mind, or by a human using a pen and paper’ to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, ‘methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas – the “basic tools of scientific and technological work” that are open to all.’ 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)).” The limitations reproduced above, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind and/or with pen and paper but for the recitation of generic computer components. That is, other than reciting the additional elements identified in Step 2A – Prong 2 below, nothing in the claim elements precludes the steps from practically being performed in the mind and/or by a human using a pen and paper.
Aside from the generic processing elements, including a GUI to receive input and present output, a human user could receive unstructured data (including text and audio-related data), output a request to associate the unstructured data to a workflow characterized by a series of steps, present information on a display, and extract workflow data from records. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind (and/or with pen and paper) but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Step 2A – Prong 2: Integrated into a Practical Application?
No – The judicial exception(s) is/are not integrated into a practical application.
The dependent claims include the additional elements of their independent claims.
Claim 1 recites a non-transitory computer-readable storage medium storing instructions that, when executed by one or more computers, configure the one or more computers to perform the recited steps.
Claim 1 receives input via an input of the one or more computers; outputs, at an interface of the one or more computers, a request to a Generative AI system; and receives from the interface, as a result of processing by the Generative AI system, an identification of the selected workflow.
Claim 11 recites a method executed by one or more computers comprising the recited steps.
Claim 11 receives input via an input of the one or more computers; outputs, at an interface of the one or more computers, a request to a Generative AI system; and receives from the interface, as a result of processing by the Generative AI system, an identification of the selected workflow.
Claims 2 and 12 recite wherein the input is implemented by a GUI.
Claims 3 and 13 recite wherein the GUI includes a control element.
Claims 4 and 14 recite the control element being configured to capture a text input from a user.
Claims 6 and 16 recite wherein the request conveys the set of workflows from which the Generative AI system is to select the workflow in the set associated with the unstructured data.
Claims 7 and 17 recite wherein the unstructured data is derived from audio.
Claims 9 and 19 recite:
a. implementing at the one or more computers a Graphical User Interface (GUI) displaying to a user the workflow selected by the Generative AI system, wherein the GUI includes a validation control allowing the user to validate the selected workflow;
b. in response to user input at the validation control indicating that the selected workflow is valid, activating the segment of the CRM application corresponding to the selected workflow.
The claims as a whole merely describe how to generally “apply” the abstract idea(s) in a computer environment. The claimed processing elements are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea(s). Simply implementing the abstract idea(s) on a general-purpose processor is not a practical application of the abstract idea(s); Applicant’s specification discloses that the invention may be implemented using general-purpose processing elements and other generic components (Spec: p. 12, including the following statement: “The hardware associated with the CRM software application 12 is not being discussed here in detail because it is an aspect known in the art.”).
The use of a processor/processing elements (e.g., as recited in all of the claims), and of a memory or machine-readable media storing executable instructions, facilitates generic processor operations.
The additional elements are recited at a high level of generality (i.e., as generic processing elements performing generic computer functions) such that the incorporation of the additional processing elements amounts to no more than mere instructions to apply the judicial exception(s) using generic computer components. There is no indication in the Specification that the steps/functions of the claims require any inventive programming or necessitate any specialized or other inventive computer components (i.e., the steps/functions of the claims may be implemented using capabilities of general-purpose computer components). Accordingly, the additional elements do not integrate the abstract ideas into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The processing components presented in the claims simply utilize the capabilities of a general-purpose computer and are, thus, merely tools to implement the abstract idea(s). As discussed in MPEP § 2106.05(a)(I) and § 2106.05(f)(2), the courts have found that accelerating a process, where the increased speed comes solely from the capabilities of a general-purpose computer, is not sufficient to show an improvement in computer functionality; it amounts to a mere invocation of computers or machinery as a tool to perform an existing process (see FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016)).
Considering that the implementation of the machine learning model and/or the training of the model is performed using generic processing elements, such an implementation is presented as a generic recitation of machine learning in the claims and as a general link to technology. The machine learning-based processing elements are simply tools to generally automate the underlying process that could be performed by a human. It is further noted that, as described in Applicant’s Specification, the machine learning operations are generic machine learning operations (Spec: p. 1: “The invention relates to computer implemented methodologies and systems for workflow management utilizing a Generative AI framework, notably one that employs a language model like a Large Language Model (LLM).”; p. 3: “A Generative AI system, also known as a Generative model, is a type of artificial intelligence that is designed to generate new data samples that resemble a given dataset. It is a class of AI models capable of learning the underlying patterns and structures of the training data and then using that knowledge to produce new, synthetic data that resembles the original data distribution.”). The Specification presents no assertion that there is any improvement in the automated machine learning process itself. Such a generic recitation of machine learning, as recited in the claims, is little more than automating an analogous process that can be performed by a human.
There is no transformation or reduction of a particular article to a different state or thing recited in the claims.
Additionally, even when considering the operations of the additional elements as an ordered combination, the ordered combination does not amount to significantly more than what is present in the claims when each operation is considered separately.
Step 2B: Claim(s) Provide(s) an Inventive Concept?
No – The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception(s). As discussed above with respect to integration of the abstract idea(s) into a practical application, the use of the additional elements to perform the steps identified in Step 2A – Prong 1 above amounts to no more than mere instructions to apply the exceptions using a generic computer component(s). Mere instructions to apply an exception using a generic computer component(s) cannot provide an inventive concept. The claims are not patent eligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Pandey et al. (US 2025/0117629).
[Claim 1] Pandey discloses a non-transitory computer-readable storage medium storing instructions that, when executed by one or more computers, configure the one or more computers (¶¶ 169-173 – computing devices, memory, computer-executable instructions) to perform the steps of:
a. receive at an input of the one or more computers, unstructured data (¶ 57 – “…the user devices 110 and 130 may exchange speech, text, images, and the like, submitted through the software application 122.”; ¶ 58 – “…audio may be spoken, text may be entered into a text box, documents may be viewed, and the like.”; ¶ 59 – “The content may be recorded by the software application 122 and provided to the GenAI model 124.”; ¶ 60 – “…the GenAI model 124 may ingest the content recorded from the meeting and generate additional content that can be displayed during the meeting and additional content that can be used after the meeting, such as a call script, a call list, and the like.”; ¶ 86 – “In some embodiments, the software application 522 may include a teleconferencing feature, a chat feature, a speech recording feature, meeting software, voice over IP (VOIP) call system, and the like. The teleconferencing software may generate call transcripts with a description of the text/words discussed during the call in the sequence they are discussed, along with identifiers of the users that made each piece of speech. Here, the software application 522 may record conversation data such as audio that is spoken during the conversation, text that is typed into a chat window of the conversation, or the like, convert it into a transcript, and transfer the conversation data to a generative artificial intelligence (GenAI) model 524, which generates the call script 540.”);
b. output at an interface of the one or more computers, a request to a Generative AI system to associate the unstructured data to a workflow selected among a set of workflows, wherein each workflow in the set of workflows is characterized by a series of steps (¶ 62 – “Referring to FIG. 2, a software application 210 may request execution of the model 224 by submitting a request to the host platform 220. In response, an AI engine 222 may receive the request and trigger the model 224 to execute within a runtime environment of the host platform 220.”; fig. 3C, ¶¶ 76-81 – A user may interact with the GUI menu to help define features to be used by the GenAI model. This presents details of a request to a Generative AI system.; ¶ 205 – “The engine, when activated, initiates a real-time transcription of a video call between a user and another participant and captures any accompanying visual elements like charts, slides, or gestures. By applying the GenAI model as described herein on this transcribed content and content in the initially displayed report, the portions of the report discussed during the call are determined. The system enhances these identified sections on the user interface using modified visuals.” The user may also trigger the GenAI to be applied to captured content.; ¶ 79 – “…the GenAI model described herein may be trained based on custom-defined prompts designed to draw out specific attributes associated with a user's goal. These same prompts may be output during live execution of the GenAI model. For example, a user may input a goal description and other attributes.” Responses to prompts may also trigger the Generative AI system to run a model(s).; ¶ 179 – “As target customer segments are examined, the initial interactions (i.e., primary calls) are more closely examined. Leveraging a speech-to-text conversion model, spoken words from the primary calls are transformed into textual transcripts. 
These transcripts are further analyzed to ensure that the content resonates on a personal level. This is achieved through a semantic modification process. By aligning the text with a customer's past interactions and specific word choices, the resulting call script becomes personalized, enhancing engagement and rapport. Using an LLM model, continuous integration ensures that each call script remains not just current but evolves with the shifting needs and preferences of the customer base.”; In general, the user activity and interactions with the system triggers requests to the Generative AI system.; ¶ 167 – “…the report may include a sequence of windows of content to be displayed during the call, and the modifying comprises rearranging the sequence of windows of content based on the identified content.” Content may be selected based on a recommended script, which is a type of workflow. A sequence of windows of content is an example of a workflow with a series of steps.; ¶ 81 – “Prompt engineering is the process of structing sentences (prompts) so that the GenAI model understands them. A prompt may include a description of a goal, such as a goal of purchasing a particular type of asset. The prompt may also provide an amount to purchase, a price range, and the like. All of this information may be input to the GenAI model and used to create a custom content about the asset to enable the user to visualize the asset and understand how to add the asset to their portfolio, such as the steps to take to obtain the asset. Part of the prompting process may include delays/waiting times intentionally included within the script so the model has time to think/understand the input data.”);
c. receive from the interface as a result of processing by the Generative AI system an identification of the selected workflow (¶ 167 – “…the report may include a sequence of windows of content to be displayed during the call, and the modifying comprises rearranging the sequence of windows of content based on the identified content.”; ¶ 168 – “In some embodiments, the executing may further include converting audio content from the call into text via a converter and executing the GenAI model on the text of the call to determine the identified content. In some embodiments, the method may further include identifying call content to be discussed at a later point in time during the call based on execution of the GenAI model on the content that is heard during the call, and displaying the call content on a user interface of the different user device. In some embodiments, the modification may include modifying an appearance of the identified content within the report to distinguish the identified content from others.”; ¶¶ 131-144 – Examples of call scripts being selected for presentation; ¶ 139 – “In this example, the GenAI model 1024 may visually emphasize a different display module corresponding to the different topics being discussed during the conversation at this time (i.e., in real-time). Here, the GenAI model 1024 determines that the users are discussing an asset containing content within a display module 1052. In response, the GenAI model 1024 can emphasize the display module 1052 by submitting instructions to the software application 1020 to move the display module 1052, enlarge the display module 1052, change a color of the display module 1052, change a shading of the display module 1052, or the like. 
The reconfiguring may cause the currently displayed display module (e.g., the display module 1042) to be darkened, greyed out, moved to a different play, covered by another module, etc.”; ¶ 81 – “Prompt engineering is the process of structing sentences (prompts) so that the GenAI model understands them. A prompt may include a description of a goal, such as a goal of purchasing a particular type of asset. The prompt may also provide an amount to purchase, a price range, and the like. All of this information may be input to the GenAI model and used to create a custom content about the asset to enable the user to visualize the asset and understand how to add the asset to their portfolio, such as the steps to take to obtain the asset. Part of the prompting process may include delays/waiting times intentionally included within the script so the model has time to think/understand the input data.”).
[Claim 2] Pandey discloses wherein the input is implemented by a GUI (fig. 3C, ¶¶ 76-81 – A user may interact with the GUI menu to help define features to be used by the GenAI model.; ¶ 72 – “Here, the training process may use executional results that have already been generated/output by the GenAI model 322 in a live environment (including any customer feedback, etc.) to retrain the GenAI model 322. For example, predicted outputs/images custom generated by the GenAI model 322 and the user feedback of the images may be used to retrain the model to enhance the images generated for all users. The responses may indicate whether the generated content is correct and, if not, what aspects of the images and text are incorrect. This data may be captured and stored within a runtime log 325 or other data stored within the live environment and can be subsequently used to retrain the GenAI model 322.”; ¶ 83 – “…the software application 420 may output queries on a user interface 412 of the user device 410 with user information requests. The user may enter values into the fields on the user interface corresponding to the queries and submit/transfer the data to the software application 420, for example, by pressing a submit button, etc. In this example, the application may combine the query with the response from the user interface and generate a prompt submitted to the GenAI model 422. For example, each prompt may include a combination of a query on the UI plus the response from the user. For example, if the query is “Please describe the type of assets you prefer” and the response is “Investment vehicles with low risk and less return,” then the text from both the prompt and the response to the prompt may be submitted to the GenAI model 422.”; ¶ 139 – “In this example, the GenAI model 1024 may visually emphasize a different display module corresponding to the different topics being discussed during the conversation at this time (i.e., in real-time). 
Here, the GenAI model 1024 determines that the users are discussing an asset containing content within a display module 1052.”).
[Claim 3] Pandey discloses wherein the GUI includes a control element (fig. 3C, ¶¶ 76-81 – A user may interact with the GUI menu to help define features to be used by the GenAI model.; ¶ 72 – “Here, the training process may use executional results that have already been generated/output by the GenAI model 322 in a live environment (including any customer feedback, etc.) to retrain the GenAI model 322. For example, predicted outputs/images custom generated by the GenAI model 322 and the user feedback of the images may be used to retrain the model to enhance the images generated for all users. The responses may indicate whether the generated content is correct and, if not, what aspects of the images and text are incorrect. This data may be captured and stored within a runtime log 325 or other data stored within the live environment and can be subsequently used to retrain the GenAI model 322.”; ¶ 83 – “…the software application 420 may output queries on a user interface 412 of the user device 410 with user information requests. The user may enter values into the fields on the user interface corresponding to the queries and submit/transfer the data to the software application 420, for example, by pressing a submit button, etc. In this example, the application may combine the query with the response from the user interface and generate a prompt submitted to the GenAI model 422. For example, each prompt may include a combination of a query on the UI plus the response from the user. For example, if the query is “Please describe the type of assets you prefer” and the response is “Investment vehicles with low risk and less return,” then the text from both the prompt and the response to the prompt may be submitted to the GenAI model 422.”; ¶ 139 – “In this example, the GenAI model 1024 may visually emphasize a different display module corresponding to the different topics being discussed during the conversation at this time (i.e., in real-time). 
Here, the GenAI model 1024 determines that the users are discussing an asset containing content within a display module 1052. In response, the GenAI model 1024 can emphasize the display module 1052 by submitting instructions to the software application 1020 to move the display module 1052, enlarge the display module 1052, change a color of the display module 1052, change a shading of the display module 1052, or the like. The reconfiguring may cause the currently displayed display module (e.g., the display module 1042) to be darkened, greyed out, moved to a different play, covered by another module, etc.”).
[Claim 4] Pandey discloses wherein the unstructured data includes text, the control element being configured to capture a text input from a user (¶ 57 – “…the user devices 110 and 130 may exchange speech, text, images, and the like, submitted through the software application 122.”; ¶ 58 – “…audio may be spoken, text may be entered into a text box, documents may be viewed, and the like.”; ¶ 59 – “The content may be recorded by the software application 122 and provided to the GenAI model 124.”; ¶ 60 – “…the GenAI model 124 may ingest the content recorded from the meeting and generate additional content that can be displayed during the meeting and additional content that can be used after the meeting, such as a call script, a call list, and the like.”; ¶ 86 – “In some embodiments, the software application 522 may include a teleconferencing feature, a chat feature, a speech recording feature, meeting software, voice over IP (VOIP) call system, and the like. The teleconferencing software may generate call transcripts with a description of the text/words discussed during the call in the sequence they are discussed, along with identifiers of the users that made each piece of speech. Here, the software application 522 may record conversation data such as audio that is spoken during the conversation, text that is typed into a chat window of the conversation, or the like, convert it into a transcript, and transfer the conversation data to a generative artificial intelligence (GenAI) model 524, which generates the call script 540.”; ¶ 205 – “The engine, when activated, initiates a real-time transcription of a video call between a user and another participant and captures any accompanying visual elements like charts, slides, or gestures.”).
[Claim 5] Pandey discloses wherein the request conveys the unstructured data, which includes text (¶ 57 – “…the user devices 110 and 130 may exchange speech, text, images, and the like, submitted through the software application 122.”; ¶ 58 – “…audio may be spoken, text may be entered into a text box, documents may be viewed, and the like.”; ¶ 59 – “The content may be recorded by the software application 122 and provided to the GenAI model 124.”; ¶ 60 – “…the GenAI model 124 may ingest the content recorded from the meeting and generate additional content that can be displayed during the meeting and additional content that can be used after the meeting, such as a call script, a call list, and the like.”; ¶ 86 – “In some embodiments, the software application 522 may include a teleconferencing feature, a chat feature, a speech recording feature, meeting software, voice over IP (VOIP) call system, and the like. The teleconferencing software may generate call transcripts with a description of the text/words discussed during the call in the sequence they are discussed, along with identifiers of the users that made each piece of speech. Here, the software application 522 may record conversation data such as audio that is spoken during the conversation, text that is typed into a chat window of the conversation, or the like, convert it into a transcript, and transfer the conversation data to a generative artificial intelligence (GenAI) model 524, which generates the call script 540.”; fig. 3C, ¶¶ 76-81 – A user may interact with the GUI menu to help define features to be used by the GenAI model. This presents details of a request to a Generative AI system.; ¶ 205 – “The engine, when activated, initiates a real-time transcription of a video call between a user and another participant and captures any accompanying visual elements like charts, slides, or gestures. 
By applying the GenAI model as described herein on this transcribed content and content in the initially displayed report, the portions of the report discussed during the call are determined. The system enhances these identified sections on the user interface using modified visuals.” The user may also trigger the GenAI to be applied to captured content.; ¶ 79 – “…the GenAI model described herein may be trained based on custom-defined prompts designed to draw out specific attributes associated with a user's goal. These same prompts may be output during live execution of the GenAI model. For example, a user may input a goal description and other attributes.” Responses to prompts may also trigger the Generative AI system to run a model(s).; ¶ 179 – “As target customer segments are examined, the initial interactions (i.e., primary calls) are more closely examined. Leveraging a speech-to-text conversion model, spoken words from the primary calls are transformed into textual transcripts. These transcripts are further analyzed to ensure that the content resonates on a personal level. This is achieved through a semantic modification process. By aligning the text with a customer's past interactions and specific word choices, the resulting call script becomes personalized, enhancing engagement and rapport. Using an LLM model, continuous integration ensures that each call script remains not just current but evolves with the shifting needs and preferences of the customer base.”; In general, the user activity and interactions with the system triggers requests to the Generative AI system.).
[Claim 6] Pandey discloses wherein the request conveys the set of workflows from which the Generative AI system is to select the workflow in the set associated with the unstructured data (¶ 81 – “Prompt engineering is the process of structing sentences (prompts) so that the GenAI model understands them. A prompt may include a description of a goal, such as a goal of purchasing a particular type of asset. The prompt may also provide an amount to purchase, a price range, and the like. All of this information may be input to the GenAI model and used to create a custom content about the asset to enable the user to visualize the asset and understand how to add the asset to their portfolio, such as the steps to take to obtain the asset. Part of the prompting process may include delays/waiting times intentionally included within the script so the model has time to think/understand the input data.”; ¶ 179 – “As target customer segments are examined, the initial interactions (i.e., primary calls) are more closely examined. Leveraging a speech-to-text conversion model, spoken words from the primary calls are transformed into textual transcripts. These transcripts are further analyzed to ensure that the content resonates on a personal level. This is achieved through a semantic modification process. By aligning the text with a customer's past interactions and specific word choices, the resulting call script becomes personalized, enhancing engagement and rapport. Using an LLM model, continuous integration ensures that each call script remains not just current but evolves with the shifting needs and preferences of the customer base.”; ¶ 182 – Personalized financial strategies may be tailored to a particular client.).
[Claim 7] Pandey discloses wherein the unstructured data is derived from audio (¶ 162 – “In some embodiments, the receiving may include listening to an audio call between the user device and the second user device and identifying the topic of conversation while the audio call is taking place. In some embodiments, the identifying may further include converting audio content from the audio call into text via a converter and executing the GenAI model on the text of the audio call to identify the topic of the conversation.”; ¶ 179 – “As target customer segments are examined, the initial interactions (i.e., primary calls) are more closely examined. Leveraging a speech-to-text conversion model, spoken words from the primary calls are transformed into textual transcripts. These transcripts are further analyzed to ensure that the content resonates on a personal level. This is achieved through a semantic modification process. By aligning the text with a customer's past interactions and specific word choices, the resulting call script becomes personalized, enhancing engagement and rapport. Using an LLM model, continuous integration ensures that each call script remains not just current but evolves with the shifting needs and preferences of the customer base.”).
[Claim 8] Pandey discloses activating in a CRM record, which includes a plurality of segments associated with one or more of the workflows in the set of workflows, a segment corresponding to the selected workflow (¶ 71 – “The GenAI model 322 may be executed on training data via an AI engine 321 of the host platform 320 during training. The training data may include a large corpus of generic images and text that is related to those images. In the example embodiments, the training data may include asset data such as web pages of content on different assets, performance data of the assets, predicted performance data of the assets (in the future), portfolio data of users, account data history of users, and the like.”; ¶ 98 – “FIGS. 6A-6C illustrate a process of detecting a missing asset of interest using GenAI and generating a new portfolio according to example embodiments. For example, FIG. 6A illustrates a process 600 of dynamically generating a new portfolio 630 for a client based on contextual data 612 observed from the client device (e.g., user device 610) by a software application 622 hosted by a host platform 620. Here, the contextual data 612 may include a cookies file or the like extracted from a browser on the user device 610, which includes browsing history data of the user device 610, and which is passed to the host platform 620 by the user device 610.”; ¶ 111 – “Here, the GenAI model 724 may be trained to learn a correlation between text and life events based on historical text associated with the life events. In this example, the GenAI model 724 identifies an upcoming life event 712 based on the conversation with the client.”; ¶ 120 – “For example, FIG. 8B illustrates a process 850 of a next meeting between the client and the advisor. In this example, the advisor may query the software application 822 with an identifier of the user of the user device 810, such as a username, an email address, an account number, etc. 
In response, the software application 822 may trigger the GenAI model 824 to identify an asset of interest in the browsing history of the user device 810 and generate content about the asset of interest, such as a performance graph 852 that is integrated into a future/predicted portfolio of the user. The predicted portfolio may also include predictions about the performance of other assets already existing. The performance of the other assets may also be displayed in comparison to the performance graph 852 to enable the user to visualize how the assets are expected to perform.”; ¶ 126 – “The GenAI model 924 may be trained to identify investment strategies based on goals, including investment goals, financial planning goals, retirement goals, life event goals, etc., which may be learned by executing the GenAI model 924 on historical portfolios of other users. The GenAI model 924 may be used to make recommendations based thereon.” In other words, certain aspects of a client’s/user’s profile may be focused on to guide a script/interactions with the client/user.).
[Claim 9] Pandey discloses:
a. implementing at the one or more computers a Graphical User Interface (GUI) displaying to a user the workflow selected by the Generative AI system, wherein the GUI includes a validation control allowing the user to validate the selected workflow (¶ 72 – “As another example, the IDE 310 may be used to retrain the GenAI model 322 after the model has already been deployed. Here, the training process may use executional results that have already been generated/output by the GenAI model 322 in a live environment (including any customer feedback, etc.) to retrain the GenAI model 322. For example, predicted outputs/images custom generated by the GenAI model 322 and the user feedback of the images may be used to retrain the model to enhance the images generated for all users. The responses may indicate whether the generated content is correct and, if not, what aspects of the images and text are incorrect. This data may be captured and stored within a runtime log 325 or other data stored within the live environment and can be subsequently used to retrain the GenAI model 322.”; ¶ 66 – “In some embodiments, the software application 210 may display a user interface enabling a user to provide feedback from the output provided by the model 224. For example, a user may input a confirmation that the asset of interest generated by a GenAI model is correct or is liked. This information may be added to the results of execution and stored within a log 225.” Direct feedback from the user is one example of workflow validation.; ¶¶ 137-139 – The GenAI model may also detect a topic being discussed by the users and determine if the displayed information corresponds to the topic being discussed. If not, the display can be revised to better match the actual conversation. The observed discussion is another example of user input that conveys validation of a selected workflow.);
b. in response to user input at the validation control indicating that selected workflow is valid, activating the segment CRM application corresponding to the selected workflow (¶ 167 – “…the report may include a sequence of windows of content to be displayed during the call, and the modifying comprises rearranging the sequence of windows of content based on the identified content.” Content may be selected based on a recommended script, which is a type of workflow. A sequence of windows of content is an example of a workflow with a series of steps.; ¶ 72 – “As another example, the IDE 310 may be used to retrain the GenAI model 322 after the model has already been deployed. Here, the training process may use executional results that have already been generated/output by the GenAI model 322 in a live environment (including any customer feedback, etc.) to retrain the GenAI model 322. For example, predicted outputs/images custom generated by the GenAI model 322 and the user feedback of the images may be used to retrain the model to enhance the images generated for all users. The responses may indicate whether the generated content is correct and, if not, what aspects of the images and text are incorrect. This data may be captured and stored within a runtime log 325 or other data stored within the live environment and can be subsequently used to retrain the GenAI model 322.”; ¶ 66 – “In some embodiments, the software application 210 may display a user interface enabling a user to provide feedback from the output provided by the model 224. For example, a user may input a confirmation that the asset of interest generated by a GenAI model is correct or is liked. 
This information may be added to the results of execution and stored within a log 225.” Direct feedback from the user is one example of workflow validation.; ¶¶ 137-139 – The GenAI model may also detect a topic being discussed by the users and determine if the displayed information corresponds to the topic being discussed. If not, the display can be revised to better match the actual conversation. The observed discussion is another example of user input that conveys validation of a selected workflow. Deciding whether to continue with an existing workflow or switch to a different one based on customer relationship management information (e.g., the content selected for display, including a series of windows, and/or the script) is an example of activating the corresponding workflow and the relevant CRM application segment.).
[Claim 10] Pandey discloses extracting the set of workflows from the CRM record (¶ 71 – “The GenAI model 322 may be executed on training data via an AI engine 321 of the host platform 320 during training. The training data may include a large corpus of generic images and text that is related to those images. In the example embodiments, the training data may include asset data such as web pages of content on different assets, performance data of the assets, predicted performance data of the assets (in the future), portfolio data of users, account data history of users, and the like.”; ¶ 98 – “FIGS. 6A-6C illustrate a process of detecting a missing asset of interest using GenAI and generating a new portfolio according to example embodiments. For example, FIG. 6A illustrates a process 600 of dynamically generating a new portfolio 630 for a client based on contextual data 612 observed from the client device (e.g., user device 610) by a software application 622 hosted by a host platform 620. Here, the contextual data 612 may include a cookies file or the like extracted from a browser on the user device 610, which includes browsing history data of the user device 610, and which is passed to the host platform 620 by the user device 610.”; ¶ 111 – “Here, the GenAI model 724 may be trained to learn a correlation between text and life events based on historical text associated with the life events. In this example, the GenAI model 724 identifies an upcoming life event 712 based on the conversation with the client.”; ¶ 120 – “For example, FIG. 8B illustrates a process 850 of a next meeting between the client and the advisor. In this example, the advisor may query the software application 822 with an identifier of the user of the user device 810, such as a username, an email address, an account number, etc. 
In response, the software application 822 may trigger the GenAI model 824 to identify an asset of interest in the browsing history of the user device 810 and generate content about the asset of interest, such as a performance graph 852 that is integrated into a future/predicted portfolio of the user. The predicted portfolio may also include predictions about the performance of other assets already existing. The performance of the other assets may also be displayed in comparison to the performance graph 852 to enable the user to visualize how the assets are expected to perform.”; ¶ 126 – “The GenAI model 924 may be trained to identify investment strategies based on goals, including investment goals, financial planning goals, retirement goals, life event goals, etc., which may be learned by executing the GenAI model 924 on historical portfolios of other users. The GenAI model 924 may be used to make recommendations based thereon.” In other words, certain aspects of a client’s/user’s profile may be focused on to guide a script/interactions with the client/user.; ¶¶ 137-139 – The GenAI model may also detect a topic being discussed by the users and determine if the displayed information corresponds to the topic being discussed. If not, the display can be revised to better match the actual conversation. The observed discussion is another example of user input that conveys validation of a selected workflow. Details of user discussions are part of a CRM record, as seen in ¶¶ 86, 114.).
[Claims 11-20] Claims 11-20 recite limitations already addressed by the rejections of claims 1-10 above; therefore, the same rejections apply.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gaur et al. (US 2023/0061906) – Uses deep generative language models to generate questions adaptively (¶¶ 21, 68).
Miller et al. (US 2023/0316186) – Uses generative machine learning models to guide conversation.
Urdiales et al. (US 2021/0357378) – Uses generative machine learning models to guide conversation.
Taheri (US 2025/0117595) – Uses an LLM and GenAI to recommend responses.
Mishra (US 2023/0410801) – Generates scripts for a conversation using generative artificial intelligence.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUSANNA M DIAZ whose telephone number is (571)272-6733. The examiner can normally be reached M-F, 8 am-4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein, can be reached at (571) 270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SUSANNA M. DIAZ/
Primary Examiner
Art Unit 3625A