DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
This Office Action is in response to the Election filed on 01/13/2026. Applicant elected Group II (claims 8-22) without traverse. Claims 1-7 are withdrawn as being directed to the non-elected invention. Claims 8-22 are currently pending and are examined below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 8-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1:
Claims 8-22 are directed to a statutory category (i.e., a process, machine, manufacture, or composition of matter) (Step 1, Yes).
Step 2A Prong One:
Claim 8 recites (additional elements underlined):
A method for providing resolution guidance to a help center agent, comprising:
providing a current user utterance from a customer to an intent engine;
determining, by the intent engine, an intent of the current user utterance as a user intent;
prompting a generative artificial intelligence (GenAI) model to generate at least one action leading to a resolution of the current user utterance with a prompt including: the current user utterance, and domain-specific documentation; and
providing, to the customer, a resolution selected by the help center agent from among the at least one action.
Under the broadest reasonable interpretation, the limitations outlined above that describe or set forth the abstract idea cover performance of the limitations in the mind but for the recitation of generic computer(s) and/or generic computer component(s). That is, other than reciting the additional elements identified below, nothing in the claim precludes the limitations from practically being performed in the mind. These limitations are considered a mental process because they include an observation, evaluation, judgment, and/or opinion. These limitations are also similar to “collecting information, analyzing it, and displaying certain results of the collection and analysis” and/or “collecting and comparing known information,” which were determined to be mental processes in MPEP 2106.04(a)(2)(III)(A). The Examiner notes that “[c]laims can recite a mental process even if they are claimed as being performed on a computer” (see MPEP 2106.04(a)(2)(III)(C)). The mere nominal recitation of the additional elements identified above does not take the claims out of the mental process grouping. Therefore, the claim recites a mental process (Step 2A Prong One, Yes).
The limitations outlined above also describe or set forth a commercial interaction (e.g., advertising, marketing, or sales activities or behaviors; business relations). Commercial interactions fall within the certain methods of organizing human activity enumerated grouping of abstract ideas. The limitations outlined above also describe or set forth a fundamental economic principle or practice because commercial interactions are related to commerce and the economy. The limitations outlined above also describe or set forth managing personal behavior or relationships or interactions between people (e.g., between a help center agent and a customer). Therefore, the claim recites a certain method of organizing human activity (Step 2A Prong One, Yes).
Step 2A Prong Two:
In Step 2A Prong Two, these additional element(s) are recited at a high level of generality and, under the broadest reasonable interpretation, are generic computer(s) and/or generic computer component(s) that perform generic computer functions. The additional element(s) are merely used as tools, in their ordinary capacity, to perform the abstract idea. The additional element(s) amount to adding the words “apply it” with the judicial exception. Merely implementing an abstract idea on generic computer(s) and/or generic computer component(s) does not integrate the judicial exception into a practical application, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer. “[T]he use of generic computer elements like a microprocessor or user interface do not alone transform an otherwise abstract idea into patent eligible subject matter” (see pp. 10-11 of FairWarning IP, LLC v. Iatric Systems, Inc. (Fed. Cir. 2016)). The additional elements also amount to generally linking the use of the abstract idea to a particular technological environment or field of use. The type of information being manipulated does not impose meaningful limitations or render the idea less abstract. Further, the courts have found that simply limiting the use of the abstract idea to a particular environment does not integrate the judicial exception into a practical application. Viewing the limitations as an ordered combination does not add anything further than looking at the limitations individually. The additional elements amount to no more than mere instructions to apply the abstract idea using generic computer(s) and/or generic computer component(s). Their collective functions merely provide generic computer implementation.
There is no indication that the combination of elements improves the functioning of a computer; improves any other technology or technical field; applies or uses the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; applies the judicial exception with, or by use of, a particular machine; effects a transformation or reduction of a particular article to a different state or thing; or applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (Step 2A Prong Two, No).
Step 2B:
In Step 2B, the additional elements also do not amount to significantly more for the same reasons set forth with respect to Step 2A Prong Two. The Examiner notes that revised Step 2A Prong Two overlaps with Step 2B, and thus many of the considerations need not be reevaluated in Step 2B because the answer will be the same. Viewing the limitations as an ordered combination does not add anything further than looking at the limitations individually. The additional elements amount to no more than a mere instruction to apply the abstract idea using generic computer(s) and/or generic computer component(s) (Step 2B, No).
Claims 9-15 recite further limitations that also fall within the same abstract ideas identified above with respect to claim 8 (i.e., certain methods of organizing human activities and/or mental processes).
Claims 9-10 and 12-13 recite the additional elements of “embeddings”. Claim 14 recites the additional element “wherein the GenAI model is a pre-trained large language model fined-tuned”. Claim 15 recites the additional element “wherein the intent engine comprises a large language model (LLM) trained to”. However, these additional elements also do not integrate the judicial exception into a practical application or amount to significantly more because they amount to adding the words “apply it” with the judicial exception, mere instructions to implement the idea on a computer, merely using a computer as a tool to perform an abstract idea, and generally linking the use of the judicial exception to a particular technological environment or field of use.
Claim 11 does not recite any other additional elements. Therefore, for the same reasons explained above with respect to claim 8, claim 11 also does not integrate the judicial exception into a practical application or amount to significantly more.
Claim 16 recites (additional elements underlined):
A processing system for providing resolution guidance to a help center agent, comprising:
a historical interactions datastore having a corpus of historical interactions stored therein;
a knowledge datastore having a corpus of domain-specific documentation stored therein;
a memory comprising computer-executable instructions; and
one or more processors configured to execute the computer-executable instructions and cause the processing system to:
provide a current user utterance from a customer to an intent engine;
determine, by the intent engine, an intent of the current user utterance as a user intent;
compare the user intent to a set of stored intent embeddings to identify one or more stored intent embeddings that are similar to the user intent;
prompt a generative artificial intelligence (GenAI) model to generate at least one action leading to a resolution of the current user utterance with a prompt including: the current user utterance, historical interactions related to the one or more stored intent embeddings selected from the corpus of historical interactions, and documentation associated with the one or more stored intent embeddings selected from the corpus of domain-specific documentation; and
providing, to the customer, a resolution selected by the help center agent from among the at least one action.
For the same reasons explained above with respect to claim 8, claim 16 also recites an abstract idea in Step 2A Prong One (i.e., mental process and certain methods of organizing human activities). For the same reasons explained above with respect to claim 8, claim 16 also does not integrate the judicial exception into a practical application or amount to significantly more.
Claims 17-22 recite further limitations that also fall within the same abstract ideas identified above with respect to claim 16 (i.e., certain methods of organizing human activities and/or mental processes).
Claims 17 and 19-20 recite the additional elements of “wherein the one or more processors further cause the processing system to” and “embeddings”. Claim 21 recites the additional element “wherein the GenAI model is a pre-trained large language model fined-tuned”. Claim 22 recites the additional element “wherein the intent engine comprises a large language model (LLM) trained to”. However, these additional elements also do not integrate the judicial exception into a practical application or amount to significantly more because they amount to adding the words “apply it” with the judicial exception, mere instructions to implement the idea on a computer, merely using a computer as a tool to perform an abstract idea, and generally linking the use of the judicial exception to a particular technological environment or field of use.
Claim 18 does not recite any other additional elements. Therefore, for the same reasons explained above with respect to claim 16, claim 18 also does not integrate the judicial exception into a practical application or amount to significantly more.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 8 is rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Ghoche et al. (US 2024/0386213 A1, hereinafter “Ghoche”).
As per Claim 8, Ghoche discloses A method for providing resolution guidance to a help center agent, comprising (¶ 67 “The present disclosure describes systems and methods for aiding human agents to service customer support issues.” Also see Figures 1-2B.):
providing a current user utterance from a customer to an intent engine (¶ 91 “The classification performed by the trained ML model may be viewed as a form of intent detection in terms of predicting the intent of the user's question, and identifying which bucket the issue in the ticket falls under regarding a macro that can be applied.” ¶ 225 “In one implementation, the large language model is provided to support an AI chatbot implementing a workflow. For example, in one implementation, a customer may input free-form text describing their problem. Upon intent detection (e.g., detecting that the intent of the customer corresponds to checking order status), a workflow is triggered.” ¶ 233 “FIG. 49 is a flowchart of an example of using a workflow policy in accordance with an implementation. In block 4902, the intent of a customer ticket/customer is detected. This may, for example, be performed using any of the previously described intent detection techniques in this application, such as using the granular topic detection/taxonomy discussed earlier.”);
determining, by the intent engine, an intent of the current user utterance as a user intent (¶ 91 “The classification performed by the trained ML model may be viewed as a form of intent detection in terms of predicting the intent of the user's question, and identifying which bucket the issue in the ticket falls under regarding a macro that can be applied.” ¶ 225 “In one implementation, the large language model is provided to support an AI chatbot implementing a workflow. For example, in one implementation, a customer may input free-form text describing their problem. Upon intent detection (e.g., detecting that the intent of the customer corresponds to checking order status), a workflow is triggered.” ¶ 233 “FIG. 49 is a flowchart of an example of using a workflow policy in accordance with an implementation. In block 4902, the intent of a customer ticket/customer is detected. This may, for example, be performed using any of the previously described intent detection techniques in this application, such as using the granular topic detection/taxonomy discussed earlier.”);
prompting a generative artificial intelligence (GenAI) model to generate at least one action leading to a resolution of the current user utterance with a prompt including: the current user utterance, and domain-specific documentation (¶ 233 “In block 4912, the large language model is prompted, where the prompts may include information on the conversation [i.e., current user utterance], the workflow policy [i.e., domain-specific documentation], and observations regarding the results of previous actions/use of tools”.); and
providing, to the customer, a resolution selected by the help center agent from among the at least one action (¶ 233 “In block 4914, a determination is made of actions and response for the autonomous AI chatbot. In block 4916, a decision is made whether the workflow/conversation is complete. The process may loop and be performed continuously to attempt to solve a customer ticket.” The Examiner asserts that in Block 4912, the GenAI model generates at least one action leading to a resolution of the current user utterance from the prompt. In Blocks 4914-4916, the resolution is provided to the help center agent which is provided to the customer because the process may loop and be performed continuously until the customer ticket is solved. Also see at least ¶ 215 “notifying the client various actions have been completed (e.g., a refund has been issued to and you will receive an email verifying the refund) and confirming with the client that all of their issues have been resolved”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Ghoche in view of Haikin et al. (US 2025/0165720 A1, hereinafter “Haikin”).
As per Claim 15, Ghoche discloses wherein the intent engine … trained to determine the intent of the current user utterance (¶ 91 “The classification performed by the trained ML model may be viewed as a form of intent detection in terms of predicting the intent of the user's question, and identifying which bucket the issue in the ticket falls under regarding a macro that can be applied.” ¶ 225 “In one implementation, the large language model is provided to support an AI chatbot implementing a workflow. For example, in one implementation, a customer may input free-form text describing their problem. Upon intent detection (e.g., detecting that the intent of the customer corresponds to checking order status), a workflow is triggered.” ¶ 233 “FIG. 49 is a flowchart of an example of using a workflow policy in accordance with an implementation. In block 4902, the intent of a customer ticket/customer is detected. This may, for example, be performed using any of the previously described intent detection techniques in this application, such as using the granular topic detection/taxonomy discussed earlier.”).
While Ghoche discloses an intent engine that is trained to determine the intent of the current user utterance, Ghoche does not appear to explicitly disclose that the intent engine comprises a large language model (LLM).
However, Haikin discloses an intent engine that comprises a large language model (LLM) (¶ 53 “As discussed in detail below, systems and methods of the present invention utilize a large language model that is configured to automatically generate concise answers to questions—such as “What was the intent of the customer?”, “Why was the customer unhappy?”, “What was the customer upset about?”, or “What was the resolution of the issue?”—which are posed in association to conversation data derived from an interaction.” ¶ 60 “Continuing with the discussion as to how the present invention operates, the conversation data—for example, text derived from an interaction—may be fed into the LLM along with the question prompt/answer prefix asking the LLM to generate an insight. As will be discussed in more detail below, the nature of the question prompt and the answer prefix depends upon the insight being sought from the given interaction, with exemplary embodiments being fashioned around obtaining several types of insights from the conversation data, including insights relating to customer intent, sentiment-aspect, call or interaction resolution, as well as others. For example, when the insight is understanding the intent of the customer, a question prompt for determining this may be “What is the intent of the customer?” while the related answer prefix may be “The intent of the customer is . . . ”.” ¶ 61 “So, for example, the LLM is asked to generate a reason as to why the customer is upset, or what is the customer's intent, or how the conversation was resolved, which is represented in the figure by the different type of “insights” listed on the downstream side of the LLM.”).
Haikin suggests that due to recent advances in the domain of LLMs, it is advantageous to use LLMs for fashioning useful analytic tools for contact centers (Haikin, ¶ 52).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify the intent engine that comprises an ML model as disclosed by Ghoche with the intent engine that comprises a large language model as disclosed by Haikin, because doing so would harness these new capabilities toward fashioning useful analytic tools for contact centers (Haikin, ¶ 52). One of ordinary skill in the art would have been motivated to use an intent engine that comprises an LLM instead of an ML model because doing so increases the accuracy of determining user intent; LLMs are known in the art to provide superior natural language understanding. Additionally, since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself, that is, in the substitution of Haikin's intent engine comprising a large language model (LLM) for Ghoche's intent engine comprising an ML model. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious (KSR Rationale B).
Prior Art
The Examiner notes that, after an exhaustive search, claims 9-14 and 16-22 currently overcome the prior art. The Examiner was unable to find a reasonable number of references to reject these claims within a reasonable amount of time.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Patil et al. (US 2025/0278688 A1) discloses a system and method for automatically generating evaluation forms from interaction recordings comprising: identifying one or more interaction intents from an interaction transcript; generating one or more evaluation categories for the one or more interaction intents using machine learning; generating evaluation questions for the one or more evaluation categories using machine learning; and providing an evaluation form based on the evaluation questions.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAM REFAI whose telephone number is (313)446-4822. The examiner can normally be reached M-F 9:00am-6:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Waseem Ashraf can be reached at 571-270-3948. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAM REFAI/Primary Examiner, Art Unit 3621