Prosecution Insights
Last updated: April 19, 2026
Application No. 18/300,385

SYSTEMS AND METHODS FOR STANDARDIZING COMMUNICATION

Non-Final OA: §101, §102, §103
Filed
Apr 13, 2023
Examiner
EVANS, KIMBERLY L
Art Unit
3629
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Engineer AI Corp.
OA Round
1 (Non-Final)
Grant Probability: 12% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 7y 0m
With Interview: 26%

Examiner Intelligence

Career Allow Rate: 12% (44 granted / 362 resolved; -39.8% vs TC avg). Grants only 12% of cases.
Interview Lift: +13.4% (moderate lift, with vs. without, among resolved cases with interview)
Avg Prosecution: 7y 0m (typical timeline); 27 currently pending
Total Applications: 389 across all art units (career history)

Statute-Specific Performance

§101: 30.6% (-9.4% vs TC avg)
§103: 39.8% (-0.2% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 362 resolved cases
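As a quick sanity check on the dashboard figures, the headline allow rate follows directly from the career counts above (44 grants out of 362 resolved cases), and the reported -39.8% gap implies the Tech Center average. A minimal sketch:

```python
# Career allow rate from the examiner's counts shown above.
granted = 44
resolved = 362

allow_rate = granted / resolved * 100   # percentage of resolved cases granted
tc_delta = -39.8                        # reported gap vs. Tech Center average
tc_average = allow_rate - tc_delta      # implied TC average allow rate

print(f"allow rate: {allow_rate:.1f}%")         # ~12.2%, displayed rounded as 12%
print(f"implied TC average: {tc_average:.1f}%") # ~52.0%
```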

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of the Claims

This Non-Final action is in reply to the application filed 4/13/2023. Claims 1-20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1-20 are directed to a method (an act, or series of acts or steps), a system (a concrete thing, consisting of parts, or of certain devices and combination of devices), and a computer readable storage medium. However, the computer readable storage medium is directed to non-statutory subject matter. In an effort to analyze all of the claim limitations in accordance with the two-step framework described in Alice/Mayo and the guidance on application of 35 U.S.C. 101, the Examiner interprets the computer readable storage medium of independent claim 15 as a “non-transitory” computer readable storage medium. Thus, each of the claims falls within one of the four statutory categories.

Step 2A-Prong 1: Representative independent claim 1 recites in part, “receiving a notification about an intended communication between a user and customer; while the communication is in progress, identifying a conversation between the user and the customer; determining one or more topics under discussion from the identified conversation; and displaying one or more other topics as recommendations to the user for standardizing the communication.”
The quoted limitations above demonstrate that independent claim 1 is directed toward the abstract idea of receiving and identifying an intended communication between a user and customer; determining one or more topics from the identified conversation; and displaying one or more other topics as recommendations to the user in a computing environment. Applicant’s specification emphasizes a method/system for analyzing a conversation between a user and a customer and providing recommendations and suggestions. The specification also discusses determining an intent of the customer based on the determined customer input and displaying one or more recommendations on a user device communication console while the user is conversing with the customer, including determining one or more topics under discussion from the identified conversation and displaying one or more other topics as recommendations to the user for standardizing the communication (¶5-¶7).

Representative claim 1 is considered an abstract idea because the steps of “receiving a notification about an intended communication between a user and customer; while the communication is in progress, identifying a conversation between the user and the customer; determining one or more topics under discussion from the identified conversation; and displaying one or more other topics as recommendations to the user for standardizing the communication” pertain to the certain methods of organizing human activity grouping of abstract ideas ((i) managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)), since the steps are directed to receiving a notification about an intended communication between a user and customer; while the communication is in progress, identifying a conversation between the user and the customer; determining one or more topics under discussion from the identified conversation; and displaying one or more other topics
as recommendations to the user for standardizing the communication. Such data input and data gathering steps pertain to (i) managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). Hence, the claim recites an abstract idea--see MPEP 2106.04(II). Independent claims 8 and 15 recite essentially the same abstract idea as independent claim 1; hence they are also abstract based on the same rationale as independent claim 1.

Step 2A-Prong 2: This judicial exception is not integrated into a practical application because the additional elements “computer system”, “memory”, “processor” [claim 8]; “computer readable storage medium”, “software” [claim 15] merely provide an abstract-idea-based solution using data gathering and analysis, merely provide instructions for organizing human activity and implementing the abstract idea recited above utilizing the “computer system”, “memory”, “processor” [claim 8]; “computer readable storage medium”, “software” [claim 15] as tools to perform the abstract idea, and generally link the abstract idea to a particular technological environment. See MPEP 2106.05(f)-(h). Further, the additional elements do not impose any meaningful limits on practicing the abstract idea--see MPEP 2106.05(g). Independent claim 1 fails to operate the recited “computer system”, “memory”, “processor” [claim 8]; “computer readable storage medium”, “software” [claim 15] (which are merely standard computer technology and hardware/software components; see applicant’s disclosure ¶5: “The computer system includes a memory and a processor coupled to the memory. The processor is configured to receive a notification about an intended call between a user and a customer. The processor is also configured to identify a conversation between the user and the customer to determine customer inputs while the call is in progress.
The processor is further configured to determine an intent of the customer based on the determined customer input and display one or more recommendations on a user device communication console while the user is conversing with the customer”; ¶6: “An exemplary embodiment is a computer readable storage medium having data stored therein representing software executable by a computer. The software includes instructions that, when executed, cause the computer readable storage medium to perform receiving a notification about an intended call between a user and a customer and identifying a conversation between the user and the customer to determine customer inputs while the call is in progress. The instructions may further cause the computer readable storage medium to perform determining an intent of the customer based on the determined customer input and displaying one or more recommendations on a user device communication console while the user is conversing with the customer”) in any exceptional manner, and there is no evidence in the disclosure to suggest achieving an actual improvement in the computer functionality itself, or improvement in any specific computer technology other than utilizing ordinary computational tools to automate and perform the abstract idea for receiving and identifying an intended communication between a user and customer; determining one or more topics from the identified conversation; and displaying one or more other topics as recommendations to the user in a computing environment—see MPEP 2106.05(a). Accordingly, applicant has not shown an improvement or practical application under the guidance of MPEP section 2106.04(d) or 2106.05(a). Dependent claims 2-7, 9-14 and 16-20 fail to cure the deficiencies of the above noted independent claim from which they depend and are therefore rejected under the same grounds. The dependent claims further recite the abstract idea without imposing any meaningful limits on practicing the abstract idea. 
Dependent claims 2-7, 9-14 and 16-20 recite additional data gathering and processing steps. For example, dependent claims 2, 9 and 16 recite in part, “wherein determining the one or more topics under discussion from the identified conversation includes”; claims 3, 10 and 17 recite in part, “further comprises: updating one or more status flags corresponding to”; claims 4, 11 and 18 recite in part, “wherein displaying the one or more other topics as recommendations to the user comprises:”; claims 5, 12 and 19 recite in part, “a. continue listening to an ongoing conversation…”; claims 6, 13 and 20 recite in part, “wherein determining one or more sections of the conversation between the user and the customer comprises”; claims 7 and 14 recite in part, “wherein the one or more standard topics include one of”, which are still directed toward the abstract idea identified previously and are no more than mere instructions to apply the exception using a computer or with computing components. The additional elements in the dependent claims, “database” and “n-gram language model”, amount to no more than applying the judicial exception using generic computing components, and linking the use of the judicial exception to a computing environment. In this case, the “database” and “n-gram language model” are generically used to further process data and fail to integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see applicant’s disclosure, ¶96: “The conversational analysis and recommendation server 605 also includes an interface provided therein for interacting with the data repository (or database) 640, such as the knowledge graph database”; ¶99: “the analysis module 660 is configured to determine customer inputs by processing the conversation to an n-gram language model”; ¶105: “store the updated one or more status flags in the database”).
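Neither the disclosure excerpts nor the Office Action spells out how the n-gram processing in applicant's ¶99 would work; purely as an illustration of the kind of n-gram topic matching alluded to, a hypothetical sketch (the function names, topic list, and keyword bigrams are all invented for this example, not taken from the application):

```python
def ngrams(tokens, n=2):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Hypothetical standard topics, each keyed by representative bigrams.
STANDARD_TOPICS = {
    "cost discussion": {("project", "cost"), ("total", "price")},
    "timeline discussion": {("delivery", "date"), ("project", "timeline")},
}

def topics_under_discussion(utterance):
    """Match the conversation's bigrams against each topic's keyword bigrams."""
    grams = set(ngrams(utterance.lower().split()))
    return [topic for topic, keys in STANDARD_TOPICS.items() if grams & keys]

print(topics_under_discussion("what is the total price and project timeline"))
# ['cost discussion', 'timeline discussion']
```

A production system would score n-grams against a trained language model rather than a hand-written keyword table; this sketch only shows the section-to-standard-topic comparison recited in claims 2, 9 and 16.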
Hence the dependent claims are nonetheless directed toward fundamentally the same abstract idea as their respective independent claims since they fail to impose any meaningful limits on practicing the abstract idea. Therefore, the abstract idea fails to integrate into any practical application. Thus, under Step 2A-Prong Two the claims are directed to an abstract idea.

Step 2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements “computer system”, “memory”, “processor” [claim 8]; “computer readable storage medium”, “software” [claim 15] amount to no more than mere instructions to apply the exception using generic computer components, which do not integrate a judicial exception into a practical application nor provide an inventive concept (significantly more than the abstract idea). The use of applicant’s computing components is well-known, routine, and conventional activity. The court describes the use of a computer to create electronic records, track information/data and issue simultaneous instructions as purely conventional computer functions and notes that nearly every computer has a data processing system with a communications controller and a data storage unit. Their collective functions merely provide conventional computer implementation. Further, the additional elements, including applicant’s “database” and “n-gram language model”, also amount to no more than applying the judicial exception using generic computing components, and linking the use of the judicial exception to a computing environment. In this case, the “database” and “n-gram language model” are generically used to further process information via common computing components.
Applicant’s “database” and “n-gram language model” are merely used to process and communicate/transmit data/information, and fail to integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Accordingly, even when considered as a whole, the claims do not transform the abstract idea into a patent-eligible invention since the claim limitations do not amount to a practical application or significantly more than an abstract idea for receiving and identifying an intended communication between a user and customer; determining one or more topics from the identified conversation; and displaying one or more other topics as recommendations to the user in a computing environment. Hence, claims 1-20 are directed to non-statutory subject matter and are rejected as ineligible subject matter under 35 USC 101. See MPEP 2106.

Claims 15-20 recite in part, “A computer readable storage medium…”, and are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The aforementioned claims are directed toward providing information in relation to an electronic communication device via a data signal. The USPTO is obliged to give claims their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01.
When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 USC 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility under 35 USC 101, Aug. 24, 2009, p. 2. The USPTO recognizes that applicants may have claims directed to computer readable media that cover signals per se, which the USPTO must reject under 35 USC 101 as covering both non-statutory subject matter and statutory subject matter. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 USC 101 by adding the limitation “non-transitory” to the claim. Applicant's disclosure only generically recites at ¶9: “Another general aspect is a computer readable storage medium having data stored therein representing software executable by a computer. The software includes instructions that, when executed, cause the computer readable storage medium to perform receiving a notification about an intended communication between a user and customer and identifying a conversation between the user and the customer while the communication is in progress. The instructions may further cause the computer readable storage medium to perform determining one or more topics under discussion from the identified conversation and displaying one or more other topics as recommendations to the user for standardizing the communication”; therefore claims 15-20 as recited can be interpreted to be embodied on abstract mediums such as carrier waves and signals, and therefore are not eligible for patent protection. The term "computer-readable storage medium" may also include solid-state memories, optical and magnetic disks, and carrier wave signals.
Accordingly, claims 15-20 are not eligible for patent protection. Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1, 7, 8, 14 and 15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gandhi et al., US Patent Application Publication No US2022/0116485 A1. With respect to claims 1, 8 and 15, Gandhi discloses, receiving a notification about an intended communication between a user and customer; while the communication is in progress, identifying a conversation between the user and the customer; determining one or more topics under discussion from the identified conversation; and displaying one or more other topics as recommendations to the user for standardizing the communication (¶40: “the customer may call or text the system 10 seeking assistance with a billing question. Such is the typical case when a customer phones a call center seeking assistance (e.g., to pay a bill to a phone company) or to take some action (e.g., to file an insurance claim). 
In agent assistance, a customer care agent is in need of assistance in handling or dealing with a customer. Such may be the case when the agent is working at a call center and encounters a customer who is belligerent or otherwise uncooperative with the agent”; ¶52: “The system 200, through the orchestration manager 212, conversation engine 214, and/or live associate broker interface 216 of the orchestration system engine 210, supports use cases or applications such as Associate Assist and Customer Assist…Also enabled by the orchestration system engine 210 is a unified and single source for reporting and monitoring of user 11 interaction with the system 200 and all interactions with one or more cognitive services 30 (e.g., one or more IVAs of one or more cognitive services 30 applications) and/or one or more escalations with escalation element 40. Such recordings, reporting, and/or monitoring data may include all state conditions of the cognitive services and/or escalation (state meaning characteristics of the entity, e.g., for an escalation, the name of the escalation agent, training level or experience of the particular escalation agent, etc.) and/or state conditions of the user 11 (e.g., frequency of calls or interactions with the system 20, particulars of the issue(s) the user 11 is addressing through the system 200, etc.)”; ¶74: “CTI Service 313 enables the orchestration manager 312 to understand about call events, when a call has arrived at an agent, if the call is on hold, transferred, or if call has ended, for example. Such events help orchestration manager 312 maintain unique session and to know when to bring in a particular NL bot 331 or remove a particular NL bot 331 [from] a user 11 conversation”; ¶75: “Voice Gateway 312 provides real-time voice to text transcription. Combined with CTI service 313, the orchestration manager 312 is able to know of call events and what is being spoken in the conversation (between a user 11 and one or more NL bots 331).
The individual utterance from caller and agent data is used to process natural language understanding, which typically can result into a suggestion to the agent”; ¶97: “With attention to the associate assist application or use case, “cards” may be employed which serve to assist the user 11 agent. A card may be presented via a UI to an agent. A particular card may be generated by one or more NL bots 331 and/or the orchestration bot 312. The cards may be of any of several types, to include a suggested answer card, a next best action card, a real-time script card, and an interaction summary card”; ¶98: “A suggested answer card provides a recommended answer based on a customer's (user 11) intent determined by a NL bot 331 or by an FAQ (frequently asked questions) knowledge source”; ¶100: “A real-time script card may provide a checklist of items that an agent must ask the caller (user 11) for compliance/adherence requirements”; ¶101: “The following elements may be provided as a part of the interaction summary card: matched intents (all customers intents that were matched during the call), keywords (important keywords spoken by the customer), sentiment (overall sentiment of the conversation) and transcript summary (the transcript is summarized using machine learning to reduce overall transcript reading length down to 25-30%)”) a memory; and a processor coupled to the memory and configured to (¶58: “For each user conversation (by way of one or more channels 20, e.g., voice, messaging, etc.), the orchestration manager 312 creates a unique session identifier and maintains (in memory of processor 321, in system database 323, e.g.)
the state or status of important activities during the conversation”) A computer readable storage medium having data stored therein representing software executable by a computer, the software comprising instructions that, when executed, cause the computer readable storage medium to perform (¶24: “Various embodiments or portions of the system methods of use may also or alternatively be implemented partially in software and/or firmware, e.g., metrics and/or guidelines to alter the training scenarios or customer personas, etc. This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein …a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory, etc”). With respect to claims 7 and 14, Gandhi discloses all of the above limitations, Gandhi further discloses, wherein the one or more standard topics include one of a feature selection topic, a template selection topic, a complexity of project discussion topic, a timeline discussion topic, and a cost discussion topic (¶40: “the customer may call or text the system 10 seeking assistance with a billing question. Such is the typical case when a customer phones a call center seeking assistance (e.g., to pay a bill to a phone company) or to take some action (e.g., to file an insurance claim). In agent assistance, a customer care agent is in need of assistance in handling or dealing with a customer. 
Such may be the case when the agent is working at a call center and encounters a customer who is belligerent or otherwise uncooperative with the agent”; ¶57: “Each of the set of bots may be capable of interaction with a user 11 to create a set of conversation topics and to conduct a conversation with a user. For example, a first NL bot may conduct a first NL bot conversation comprising one or more topics, e.g., comprising a first NL bot first conversation topic, a first NL bot second conversation topic, and the like. Any given point during a first NL bot conversation may be termed a first NL bot conversation datum. Similarly, a second NL bot may conduct a second NL bot conversation with a second NL bot comprising one or more topics, e.g., comprising a second NL bot first conversation topic, a second NL bot second conversation topic, and the like. Any given point during a second NL bot conversation may be termed a second NL bot conversation datum”; ¶97: “With attention to the associate assist application or use case, “cards” may be employed which serve to assist the user 11 agent. A card may be presented via a UI to an agent. A particular card may be generated by one or more NL bots 331 and/or the orchestration bot 312. The cards may be of any of several types, to include a suggested answer card, a next best action card, a real-time script card, and an interaction summary card”)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2-6, 9-13 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gandhi et al., US Patent Application Publication No. US 2022/0116485 A1, in view of Steadman Henderson, US Patent Application Publication No. US 2021/0141799 A1, herein referred to as “Henderson”. With respect to claims 2, 9 and 16, Gandhi discloses all of the above limitations. Gandhi does not distinctly describe the following limitations; Henderson, however, as shown, discloses, wherein determining the one or more topics under discussion from the identified conversation includes: determining one or more sections of the conversation between the user and the customer; comparing the one or more sections with one or more standard topics; and determining the one or more topics under discussion based on the comparison (¶61: “The storage 107 is configured to communicate with the processor 105. The storage 107 may contain data that is used by the response selection model 109 when executed by the processor 105”; Fig 2, ¶65: “The model is used for the task of conversational response selection. In a response selection task, given an input sentence, the goal is to identify the relevant response from a large pool of stored candidate responses. The response selection model 109 receives one input (a sentence or several sentences provided in natural language through speech or text by a user), and it aims to select the most relevant responses out of R stored potential responses. In an embodiment, R may be a large number, for example >100M potential responses may be stored, or >1 billion responses can be stored. The output of the model is a numerical score that represents the fitness of each response to the provided input, and a ranked list may be created based on the numerical scores of all R (input, response) pairs.
The response is then selected based on the list”; ¶73: “The context vector h_X and a response vector h_Y are compared in the scoring stage 211, wherein the scoring is a measure of the similarity between the context vector and a response vector. The scoring is used to select the output response, for example the response with the closest response vector may be output. In an embodiment, the similarity between the context and response vectors is determined using a similarity measure such as the cosine similarity”)

Gandhi discloses a method/system for the integration and orchestration of intelligent systems whereby customer intent is identified and used to select/route and recommend conversation to a user/bot. Henderson teaches techniques for obtaining/providing a response to a query inputted by a user and a dialogue system. Gandhi and Henderson are directed to the same field of endeavor since they are related to techniques for providing a recommended conversation/dialogue to a customer in a computing environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of applicant’s invention to combine the method/system for the integration and orchestration of intelligent systems of Gandhi and the techniques for providing a response to a query as taught by Henderson, since it allows for identifying/selecting the most relevant response from a large pool of stored candidate responses via a response selection model, and/or using a similarity measure such as the cosine similarity (¶61, ¶65, ¶73).
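Henderson's ¶73 describes comparing a context vector h_X against candidate response vectors h_Y by cosine similarity and selecting the closest response. The mechanics can be sketched in a few lines; the tiny hand-made vectors and candidate strings below are toy stand-ins (a real system would use learned embeddings over a pool of millions of stored responses):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def select_response(context_vec, candidates):
    """Score each candidate response against the context; return the best one."""
    return max(candidates, key=lambda resp: cosine(context_vec, candidates[resp]))

# Toy encoded vectors standing in for the model's learned embeddings.
context = [0.9, 0.1, 0.0]
responses = {
    "Your bill is due on the 5th.": [0.8, 0.2, 0.1],
    "Our office opens at 9 am.":    [0.1, 0.1, 0.9],
}
print(select_response(context, responses))  # Your bill is due on the 5th.
```

The numerical score per (input, response) pair and the ranked list Henderson describes fall out of the same `cosine` call applied across all candidates.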
With respect to claims 3, 10 and 17, Gandhi and Henderson disclose all of the above limitations. Gandhi further discloses, updating one or more status flags corresponding to the determined one or more topics as completed based on the determined one or more topics, wherein each of the one or more status flags is associated with one of the one or more standard topics; and storing the updated one or more status flags in a database (¶97-¶101; ¶100: “A real-time script card may provide a checklist of items that an agent must ask the caller (user 11) for compliance/adherence requirements. As the agent speaks to the system 300, the system 300 listens to and identifies the agent's intent and checks off the list in real-time. When the check-list is completed, the card is moved to completed state”; ¶101: “An interaction summary card provides a summary of the entire conversation after the conversation has ended. The following elements may be provided as a part of the interaction summary card: matched intents (all customers intents that were matched during the call), keywords (important keywords spoken by the customer), sentiment (overall sentiment of the conversation) and transcript summary (the transcript is summarized using machine learning to reduce overall transcript reading length down to 25-30%)”)

With respect to claims 4, 11 and 18, Gandhi and Henderson disclose all of the above limitations. Gandhi further discloses, wherein displaying the one or more other topics as recommendations to the user comprises: retrieving the updated status flag associated with each of the one or more standard topics from the database; (¶97: “With attention to the associate assist application or use case, “cards” may be employed which serve to assist the user 11 agent. A card may be presented via a UI to an agent. A particular card may be generated by one or more NL bots 331 and/or the orchestration bot 312.
The cards may be of any of several types, to include a suggested answer card, a next best action card, a real-time script card, and an interaction summary card”; ¶101: “An interaction summary card provides a summary of the entire conversation after the conversation has ended. The following elements may be provided as a part of the interaction summary card: matched intents (all customer intents that were matched during the call), keywords (important keywords spoken by the customer), sentiment (overall sentiment of the conversation) and transcript summary (the transcript is summarized using machine learning to reduce overall transcript reading length down to 25-30%)”); recommending at least one of the one or more standard topics based on the retrieved updated status flag associated with each of the one or more standard topics (¶98: “A suggested answer card provides a recommended answer based on a customer's (user 11) intent determined by a NL bot 331 or by an FAQ (frequently asked questions) knowledge source. The suggested answers may also contain relevant knowledge article links”; ¶100: “A real-time script card may provide a checklist of items that an agent must ask the caller (user 11) for compliance/adherence requirements. As the agent speaks to the system 300, the system 300 listens to and identifies the agent's intent and checks off the list in real-time. When the check-list is completed, the card is moved to completed state”); and displaying the one or more other topics based on the recommended at least one of the one or more standard topics (¶75: “The individual utterance from caller and agent data is used to process natural language understanding, which typically can result into a suggestion to the agent”; ¶76: “the orchestration manager 312 engages a particular NL bot 331 and is able to offer various cards/recommendations to the agent.
The real-time CTI events and recommendations derived from transcribed utterances are then published by the orchestration manager 312 through message queue”; ¶78: “The orchestration bot 340 is able to understand when a NL bot 331 detects escalation intent (using CCL) and is able to escalate the conversation to a live agent. Since orchestration bot 340 is in middle of the conversation, one is easily able to shift conversation between customer and bot, to customer and live-agent. As the orchestration bot 340 is embedded into the conversation, a customer assist use case can then shift over to associate assist use case where we start making recommendations to the agent as the system 300 listens or monitors the chat conversation”; ¶85: “The system database 323 stores all conversation sessions that are processed through the system 300. The system database 323 is responsible for logging all requests, orchestrator canvas nodes that were invoked and their outcome as the bot responded back to user. These data are valuable for historical reporting, real-time dashboards, and NLU based data analysis that occurs in real-time. The data are also available to drive or enable one or more of data analytics 157 and performance monitoring 158”)

With respect to claims 5, 12 and 19, Ghandi and Henderson disclose all of the above limitations. Ghandi further discloses the method of claim 4, further comprising: a. continue listening to an ongoing conversation between the user and the customer (¶95: “Both AA UI and Supervisor Dashboard primarily run in “listening mode” in that the application subscribes and waits to receive events. These events can be form of call activities (Call arrived to agent, call ended, etc.), transcript (live utterances as they are transcribed), or form of suggested cards (cards presented to the agent based on processing of utterance, typically with NLU bot)”); b.
determining that the ongoing conversation is initiated from the recommendations (¶97-¶100; ¶97: “With attention to the associate assist application or use case, “cards” may be employed which serve to assist the user 11 agent… The cards may be of any of several types, to include a suggested answer card, a next best action card, a real-time script card, and an interaction summary card”; ¶99: “A next best action card provides a recommendation on actions such as the transfer of the interaction (e.g., to another NL bot or to a human agent via escalation) or may recommend an RDA bot accessed via a desktop action on a user interface”); c. updating at least one of the one or more status flags related to the ongoing conversation as completed (¶100: “A real-time script card may provide a checklist of items that an agent must ask the caller (user 11) for compliance/adherence requirements. As the agent speaks to the system 300, the system 300 listens to and identifies the agent's intent and checks off the list in real-time. When the check-list is completed, the card is moved to completed state”); d. recommending at least one of the one or more standard topics based on the updated status flag associated with each of the one or more standard topics (¶95: “Both AA UI and Supervisor Dashboard primarily run in “listening mode” in that the application subscribes and waits to receive events. These events can be form of call activities (Call arrived to agent, call ended, etc.), transcript (live utterances as they are transcribed), or form of suggested cards (cards presented to the agent based on processing of utterance, typically with NLU bot)”; ¶97: “With attention to the associate assist application or use case, “cards” may be employed which serve to assist the user 11 agent. A card may be presented via a UI to an agent. A particular card may be generated by one or more NL bots 331 and/or the orchestration bot 312.
The cards may be of any of several types, to include a suggested answer card, a next best action card, a real-time script card, and an interaction summary card”; ¶100: “A real-time script card may provide a checklist of items that an agent must ask the caller (user 11) for compliance/adherence requirements. As the agent speaks to the system 300, the system 300 listens to and identifies the agent's intent and checks off the list in real-time. When the check-list is completed, the card is moved to completed state”); e. displaying the one or more other topics based on the recommended at least one of the one or more standard topics (¶93: “The AA app 322 (developed using GraphQL technology) allows one to publish only relevant information down to the UI. On startup, AA app subscribes to message queue and waits for events to arrive as they are published from Orchestrator. Any of the Orchestrator events (call events, suggested card events, transcript event), are all picked up from the message queue and AA app delivers it downstream to UI using GraphQL”; ¶94: “The Associate Assist UI 323 comprises Associate Assist UI (the interface used by contact center agents to receive recommended answers and other cards)”); f. iterating steps (a) to (e) until each of the status flags is updated as completed (¶100: “A real-time script card may provide a checklist of items that an agent must ask the caller (user 11) for compliance/adherence requirements. As the agent speaks to the system 300, the system 300 listens to and identifies the agent's intent and checks off the list in real-time. When the check-list is completed, the card is moved to completed state”; ¶116: “the chatbot 431 have completed all tasks and the call must be terminated. In this case, the chatbot 413 instructs the voice assist 413 to hang-up the call (hang-up is the action or instruction). Such instructions are output or generated by the chatbot 431.
The orchestration engine 410 has responsibility to ensure all such action types are set and forwarded or directed from the chatbot 431 and back to the voice assist 413 via an HTTP response”)

With respect to claims 6, 13 and 20, Ghandi and Henderson disclose all of the above limitations. Ghandi further discloses wherein determining one or more sections of the conversation between the user and the customer comprises processing the conversation to an n-gram language model (¶154: “Prior to training the first model 205 and the second model 207, the vocabulary of units used by the tokenisation algorithm 501 of the first model 205 is first learned in a pre-training stage. FIG. 5(b) shows a flowchart illustrating the steps performed to learn the vocabulary of units, in the stage labelled “Before Training”. The vocabulary is sometimes referred to as a “subword vocabulary”, although, as has been explained previously, the vocabulary may also comprise complete words. This is done using a subset of the training data, which comprises inputs and responses. An alternative training data set may be used to learn the vocabulary 509 of the first model 205, however in this example a subset of the same training data used to train the rest of the model (i.e. inputs and responses) is used”; ¶155: “Step S501 comprises a subword tokenisation algorithm that splits arbitrary input into subword units. The subword units into which the arbitrary input is split into is what is learned in S501. A number of subword tokenization methods are available for learning a vocabulary of units including subwords, including: supervised subword tokenization using a pretrained segmenter/tokenizer such as… character n-grams”). Ghandi and Henderson are directed to the same field of endeavor since they are related to techniques for providing a recommended conversation/dialogue to a customer in a computing environment.
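For illustration, the character n-grams that ¶155 lists among the subword tokenization options can be sketched as below. The function name and the boundary-marker convention (fastText-style `#` padding) are illustrative assumptions, not the tokenisation algorithm 501 of the reference.

```python
def char_ngrams(text, n=3):
    """Split text into overlapping character n-grams (here trigrams by
    default), one common way to build a subword vocabulary."""
    padded = f"#{text}#"  # boundary markers so word edges form distinct grams
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

char_ngrams("chat")  # ['#ch', 'cha', 'hat', 'at#']
```

Collecting such grams over a corpus yields the vocabulary of units ("subword vocabulary") that ¶154 describes learning in the pre-training stage.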
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of applicant’s invention to combine the method/system for the integration and orchestration of intelligent systems of Ghandi with the techniques for providing a response to a query as taught by Henderson, since it allows for training a response retrieval system to provide a response to a query inputted by a user via subword tokenization methods (¶154, ¶155).

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Mallenahally et al., US Patent Application Publication No. US 2021/0134279 A1, “Machine Learning Based Product Solution Recommendation”, relating to transcribing in real-time a conversation between a user and an agent into a speech text; processing digital data of the speech text associated with a topic, including parsing the speech text into one or more words and determining collocation information among the one or more words in the speech text; generating a recommendation of one or more product solutions for a user based on recommendation parameters for the library of product solutions; and providing the recommendation. Langley, US Patent No. US 11715140 B1, “Systems and Methods for Providing Product and Service Quotes to Customers”, relating to support systems for customer service representatives, and more particularly, to providing call center representatives with easy access to customer related data and quotes related to products and services for the customer.

Conclusion

Any inquiry of a general nature or relating to the status of this application or concerning this communication or earlier communications from the Examiner should be directed to Kimberly L. Evans, whose telephone number is 571.270.3929. The Examiner can normally be reached Monday-Friday, 9:30am-5:00pm. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Lynda Jasmin, can be reached at 571.272.6782.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal/pair <http://pair-direct.uspto.gov>. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866.217.9197 (toll-free).

Any response to this action should be mailed to: Commissioner of Patents and Trademarks, P.O. Box 1450, Alexandria, VA 22313-1450, or faxed to 571-273-8300. Hand-delivered responses should be brought to the United States Patent and Trademark Office Customer Service Window: Randolph Building, 401 Dulany Street, Alexandria, VA 22314.

/KIMBERLY L EVANS/
Examiner, Art Unit 3629

/LYNDA JASMIN/
Supervisory Patent Examiner, Art Unit 3629

Prosecution Timeline

Apr 13, 2023
Application Filed
Oct 24, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602661
SYSTEM FOR SEARCHING AND CORRELATING ONLINE ACTIVITY WITH INDIVIDUAL CLASSIFICATION FACTORS
2y 5m to grant Granted Apr 14, 2026
Patent 12277615
DETECTING AND VALIDATING IMPROPER RESIDENCY STATUS THROUGH DATA MINING, NATURAL LANGUAGE PROCESSING, AND MACHINE LEARNING
2y 5m to grant Granted Apr 15, 2025
Patent 12118558
ESTIMATING QUANTILE VALUES FOR REDUCED MEMORY AND/OR STORAGE UTILIZATION AND FASTER PROCESSING TIME IN FRAUD DETECTION SYSTEMS
2y 5m to grant Granted Oct 15, 2024
Patent 12056745
Machine-Learning Driven Data Analysis and Reminders
2y 5m to grant Granted Aug 06, 2024
Patent 11990213
METHODS AND SYSTEMS FOR VISUALIZING PATIENT POPULATION DATA
2y 5m to grant Granted May 21, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
12%
Grant Probability
26%
With Interview (+13.4%)
7y 0m
Median Time to Grant
Low
PTA Risk
Based on 362 resolved cases by this examiner. Grant probability derived from career allow rate.
