DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections - 37 CFR 1.75(c)
1. 37 CFR 1.75(c) allows one or more claims to be presented in dependent form, referring back to and further limiting another claim or claims in the same patent application. A dependent claim may refer to more than one other claim, but only in the alternative. A multiple dependent claim cannot serve as a basis for another multiple dependent claim.
2. Claims 3 & 4 are objected to under 37 CFR 1.75(c) as being in improper dependent form.
Here, dependent claim 3 depends on dependent claim 4, and dependent claim 4 depends on dependent claim 3. In essence, Claims 3 & 4 are dependent claims that depend on each other.
Appropriate correction required.
Claim Rejections - 35 USC § 103
1. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. Claims 1, 2, 5, 6, 8, 9, 13, 17 & 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20180314689 A1, hereinafter Wang ‘689) in view of Deole (US 20200076947 A1, hereinafter Deole ‘947).
Regarding claim 1; Wang ‘689 discloses a method for an artificial intelligence (“AI”) agent (Fig. 1, Virtual Personal Assistant 150) answering a call within a voice AI container (Fig. 1, Interactive Voice Response System i.e. Fig. 1 illustrates an example of person 100 interacting with a device or system that includes a multi-modal virtual personal assistant 150. The virtual personal assistant can use speech recognition to interpret spoken words and may use artificial intelligence to determine the speaker's intent, that is, what the speaker wants from the device. Paragraphs 0061 & 0083),
the method comprising:
receiving, at the voice AI container in a first receiving step (Fig. 2, Step 210), the call from a human caller (Fig. 1, Person 100), the call comprising a query (i.e. First, at step 210, the person says, “I'd like to refill a prescription.” Paragraph 0091);
parsing, by the AI agent, the query to identify context associated with the query (i.e. For instance the natural language understanding system may apply a rule-based parser and/or a statistical parser to determine, based on the verbal context, the likely intended meaning of words or phrases that have multiple possible definitions (e.g., the word “pop” could mean that something has broken, may refer to a carbonated beverage, or may be the nickname of a person, depending on the context). Paragraph 0172);
identifying, by the AI agent in a first identifying step using the context, an intent of the query (i.e. The interpreter 1016 may apply syntactic, grammatical, and/or semantic rules to the natural dialog input, in order to parse and/or annotate the input to better understand the person's intended meaning. Paragraph 0169);
and determining, by the AI agent in a first determining step, whether a first valid mapping stored within a backend database (Fig. 4, Backend Systems 452) is associated with the intent of the query (i.e. The understanding 152 system attempts to understand the person's 100 intent and/or emotional state. The backend systems 452 can provide hardware and/or software resources that support the operations of the virtual personal assistant platform 410 and/or the domain-specific application resources 430. The backend systems 452 can include, for example, computing resources, such as processors, servers, storage disks, databases, and so on. The backend systems 452 may include domain-specific backend systems, such as domain-specific machinery, knowledge bases, services, and so on. Paragraphs 0086 & 0129),
the backend database being located within the voice AI container (i.e. The virtual personal assistant system 400 can also include various backend systems 452. Paragraph 0129),
wherein the context comprises a content of the query and at least one of: a time the call was placed, a location from which the call was placed, and a history of previous calls placed (i.e. A multi-modal virtual personal assistant can also include a preference model, which can be tailored for a particular population and/or for one or more individual people. The preference model may keep track of information related to the person's personal information and/or the person's use of a device, such as for example a person's identification information, passwords, account information and/or login information, address books and so on. The preference model can also store historical information about a person, such as frequently used applications, frequently accessed contacts, frequently visited locations, shopping habits, a fondness for traveling, or an interest in antique cars. Paragraph 0066)
and wherein the first valid mapping comprises a set of steps taken to satisfy the intent of the query (Fig. 2, System Audio Response 204 “Steps 212, 216, 220, 224, 228 & 232” i.e. At step 226, the person responds, “Yes, that's fine.” At step 228, the system again acknowledges that it understood, and provides some additional helpful information. The system also volunteers a suggestion, in case the person needs the prescription sooner: “You're all set. You should receive your prescription in 3 to 5 business days. Do you need to expedite it for faster delivery?” Paragraphs 0090-0096).
Wang ‘689 does not expressly disclose an artificial intelligence (“AI”) agent; Wang ‘689 instead discloses a Virtual Personal Assistant 150. Paragraph 0061 of Wang ‘689 teaches that the Virtual Personal Assistant 150 can use speech recognition to interpret spoken words and may use artificial intelligence to determine the speaker's intent, that is, what the speaker wants from the device. The virtual personal assistant may pass this information to the device to be acted upon. One of ordinary skill in the art would understand that the artificial intelligence (“AI”) agent of Applicant’s specification is synonymous with the virtual personal assistant of Wang ‘689. Nevertheless, Examiner cites Deole ‘947 as further evidence that the artificial intelligence (“AI”) agent is synonymous with the Virtual Personal Assistant 150 of Wang ‘689.
Deole ‘947 discloses an agent virtual assistant, a customer virtual assistant and a contact virtual assistant (i.e. Contact center virtual assistants can understand the conversation between a customer and the contact center agent to provide suggestions to the contact center agent or even do some background work automatically on behalf of the contact center agent. An agent virtual assistant is instantiated. The agent virtual assistant works on behalf of a contact center agent. A message is sent to the application in customer communication endpoint to instantiate a customer virtual assistant. A second communication session is established between the agent virtual assistant and the customer virtual assistant. Paragraphs 0003-0004).
Wang ‘689 and Deole ‘947 are combinable because they are from the same field of endeavor, speech systems (Deole ‘947 at “Field”).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the speech system taught by Wang ‘689 by adding an agent virtual assistant, a customer virtual assistant and a contact virtual assistant as taught by Deole ‘947. The motivation for doing so would have been that virtual assistants have the ability to provide information to a contact center agent in a simplified manner that provides the best information available. Therefore, it would have been obvious to combine Wang ‘689 with Deole ‘947 to obtain the invention as specified.
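For clarity of the record, the claim 1 sequence as mapped above (receiving a call comprising a query, parsing the query for context, identifying an intent, and determining whether a valid mapping for that intent is stored in a backend database) can be illustrated with a minimal sketch. All names, the keyword rule, and the example mapping below are hypothetical; the sketch is not drawn from Wang ‘689, Deole ‘947, or Applicant's specification.

```python
# Hypothetical sketch of the claim 1 flow; names and logic are illustrative
# stand-ins, not taken from Wang '689 or Deole '947.

def parse_context(query: str) -> dict:
    """Parsing step: derive context from the content of the query."""
    return {"content": query, "word_count": len(query.split())}

def identify_intent(context: dict) -> str:
    """First identifying step: a trivial keyword rule stands in for NLU."""
    return "refill_prescription" if "refill" in context["content"].lower() else "unknown"

# Backend database located within the voice AI container: each entry
# associates an intent with a set of steps taken to satisfy that intent.
BACKEND_DB = {
    "refill_prescription": ["confirm patient", "confirm prescription", "confirm delivery"],
}

def answer_call(query: str):
    """First receiving step through first determining step."""
    context = parse_context(query)      # parsing step
    intent = identify_intent(context)   # first identifying step
    mapping = BACKEND_DB.get(intent)    # first determining step: mapping or None
    return intent, mapping

print(answer_call("I'd like to refill a prescription."))
# -> ('refill_prescription', ['confirm patient', 'confirm prescription', 'confirm delivery'])
```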
Regarding claim 2; Wang ‘689 discloses the steps of: receiving, by the AI agent in a second receiving step (Fig. 2, Step 214), the first valid mapping from the backend database (Step 212 - Step 214 i.e. Fig. 2 shows wherein Step 214 receives a valid mapping from Step 212. Paragraphs 0091-0092);
and loading, by the AI agent in a first loading step, the first valid mapping to an agent screen located within the voice AI container (i.e. The output generation 422 component can create responses, which can be output using natural language and/or a visual display. For example, the output generation 422 can formulate a textual response, and indicate whether the textual response should be displayed on a screen or vocalized. As another example, the output generation 422 can assemble a combined textual and visual response. Paragraph 0116);
wherein: the first valid mapping comprises a first automated screen navigation script and the first loading step comprises loading the first automated screen navigation script on the agent screen and navigating (i.e. The left-hand column in this example illustrates the person's inputs to the system. In this example, the person is speaking to the system, thus the person's input is user audio input 202. The right-hand column illustrates the system audio response 204, that is, verbal responses by the system. First, at step 210, the person says, “I'd like to refill a prescription.” Paragraphs 0090-0091),
in a first navigating step, the agent screen using the first automated screen navigation script to gather information required to satisfy the intent of the query (i.e. The system, using perhaps a natural language recognition system, is able to understand what the person wants, and responds, at step 212, with: “Sure, happy to help you with that. Is it for you or for someone else?” In this step, the system is configured to respond with an indication that it understood what the person has asked, and also with a request for additional information. Paragraphs 0090-0091).
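The claim 2 limitations as mapped (receiving the valid mapping, loading its automated screen navigation script to an agent screen, and navigating that screen to gather information) can likewise be illustrated. The action tuples and replay loop below are hypothetical stand-ins for real UI automation, not Applicant's or the references' implementation.

```python
# Hypothetical sketch of claim 2: the first valid mapping carries an
# automated screen navigation script that is loaded onto an agent screen
# and replayed to gather the information the intent requires.

first_valid_mapping = {
    "intent": "refill_prescription",
    "script": [                          # first automated screen navigation script
        ("open_screen", "patient_lookup"),
        ("enter_field", "rx_number"),
        ("click", "submit"),
    ],
}

def load_and_navigate(mapping: dict) -> list:
    """First loading step, then first navigating step."""
    gathered = []
    for action, target in mapping["script"]:
        # A real agent screen would execute each action; here we only record it.
        gathered.append(f"{action}:{target}")
    return gathered

print(load_and_navigate(first_valid_mapping))
```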
Regarding claim 5; Wang ‘689 discloses wherein the context further comprises grammar and syntax of the query, a language which was used by the human caller (i.e. Syntax rules 3474, grammar rules 3484, and statistical models 3486 implemented in the processing language 3882 have been translated, using a machine translation 3416 engine, into the input language 3480. Paragraph 0509),
a pace that the human caller was speaking, an amount of words the human caller used for the query and a tone of the human caller (Fig. 3, Step 322 i.e. At step 322, the person responds, “Yes . . . yes, it's . . . it's for me. I need a refill.” The system detects that the person's speaking rate is faster, and that the person's tone of voice indicates mild frustration. Paragraphs 0098-0101).
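The additional context features of claim 5 (pace, word count, and tone, as detected at Fig. 3, step 322 of Wang ‘689) reduce to simple measurements. The following sketch is hypothetical; in particular, the pitch-variance threshold standing in for tone detection is an assumption, not a technique taken from the reference.

```python
# Hypothetical sketch of the claim 5 context features: speaking pace,
# amount of words used, and tone of the human caller.

def speech_context(transcript: str, duration_seconds: float, pitch_variance: float) -> dict:
    words = transcript.split()
    return {
        "word_count": len(words),
        "pace_wpm": len(words) / duration_seconds * 60.0,  # words per minute
        # Crude assumed proxy: high pitch variance read as mild frustration,
        # analogous to the detection described at Fig. 3, step 322.
        "tone": "frustrated" if pitch_variance > 0.5 else "neutral",
    }

print(speech_context("Yes, yes, it's for me. I need a refill.", 3.0, 0.7))
```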
Regarding claim 6; Wang ‘689 discloses wherein: there is a plurality of valid mappings stored in the backend database (i.e. The interpreter API 608 may further leverage databases and APIs to the databases, provided by the server framework 610. In this example, the server framework 610 includes a database 620, which can be used to store a domain ontology 622, a user dialog history 624, and various sources 626 or entities. Paragraph 0167),
and each of the plurality of valid mappings is associated with a respective one of a plurality of intents of the query (i.e. The interpreter API 608 may further leverage databases and APIs to the databases, provided by the server framework 610. Using the server framework 610 and/or the interpreter rules 604, the interpreter API 608 can produce a final intent 640. Paragraph 0167).
Regarding claim 8; Wang ‘689 discloses updating the backend database to associate the query with the context and the first valid mapping (i.e. The dictionaries 2522 may be implemented in a data structure such as a searchable database, table, or tree. In various implementations, the virtual personal assistant 2500 can be equipped with basic dictionaries 2522, which can be updated as the virtual personal assistant 2500 is used. Paragraph 0375).
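Claims 6 and 8 together describe a backend database that holds a plurality of valid mappings keyed by intent and that is updated to associate each handled query with its context and mapping. A minimal hypothetical sketch follows; the class and field names are illustrative only.

```python
# Hypothetical sketch of claims 6 and 8: plural valid mappings, one per
# intent (claim 6), plus an update that associates a query with its
# context and valid mapping (claim 8).

class BackendDatabase:
    def __init__(self):
        self.mappings = {}   # intent -> valid mapping (a set of steps)
        self.query_log = []  # (query, context, mapping) associations

    def store_mapping(self, intent: str, steps: list) -> None:
        self.mappings[intent] = steps

    def update(self, query: str, context: dict, intent: str) -> None:
        """Associate the query with the context and the first valid mapping."""
        self.query_log.append((query, context, self.mappings.get(intent)))

db = BackendDatabase()
db.store_mapping("refill_prescription", ["confirm patient", "ship refill"])
db.update("I'd like to refill a prescription.", {"word_count": 7}, "refill_prescription")
print(db.query_log)
```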
Regarding claim 9; Wang ‘689 discloses a method for an artificial intelligence (“AI”) agent answering a call within a voice AI container (Fig. 1, Interactive Voice Response System i.e. Fig. 1 illustrates an example of person 100 interacting with a device or system that includes a multi-modal virtual personal assistant 150. The virtual personal assistant can use speech recognition to interpret spoken words and may use artificial intelligence to determine the speaker's intent, that is, what the speaker wants from the device. Paragraphs 0061 & 0083), the method comprising:
receiving, at the voice AI container in a first receiving step (Fig. 2, Step 210), the call from a human caller (Fig. 1, Person 100),
the call comprising a query (i.e. First, at step 210, the person says, “I'd like to refill a prescription.” Paragraph 0091);
parsing, by the AI agent, the query to identify context associated with the query (i.e. For instance the natural language understanding system may apply a rule-based parser and/or a statistical parser to determine, based on the verbal context, the likely intended meaning of words or phrases that have multiple possible definitions (e.g., the word “pop” could mean that something has broken, may refer to a carbonated beverage, or may be the nickname of a person, depending on the context). Paragraph 0172);
identifying, by the AI agent in a first identifying step using the context, an intent of the query (i.e. The interpreter 1016 may apply syntactic, grammatical, and/or semantic rules to the natural dialog input, in order to parse and/or annotate the input to better understand the person's intended meaning. Paragraph 0169);
and determining, by the AI agent in a first determining step, absence of a first valid mapping associated with the intent of the query, the absence being limited to any location within a backend database (i.e. The matching processor 2140 may compensate by applying constraints to the alignment process. In all cases, the matching processor 2140 may register structural correspondences and can compare the aligned codes to determine whether a match exists. When a match is found, the matching processor returns matched iris data 2160. In various implementations, the iris data 2160 may be used by other systems in a virtual personal assistant. Paragraph 0290),
the backend database being located within the voice AI container (i.e. The virtual personal assistant system 400 can also include various backend systems 452. Paragraph 0129),
wherein the context comprises a content of the query and at least one of: a time the call was placed, a location from which the call was placed, and a history of previous calls placed (i.e. A multi-modal virtual personal assistant can also include a preference model, which can be tailored for a particular population and/or for one or more individual people. The preference model may keep track of information related to the person's personal information and/or the person's use of a device, such as for example a person's identification information, passwords, account information and/or login information, address books and so on. The preference model can also store historical information about a person, such as frequently used applications, frequently accessed contacts, frequently visited locations, shopping habits, a fondness for traveling, or an interest in antique cars. Paragraph 0066)
and wherein the first valid mapping comprises a set of steps taken to satisfy the intent of the query (Fig. 2, System Audio Response 204 “Steps 212, 216, 220, 224, 228 & 232” i.e. At step 226, the person responds, “Yes, that's fine.” At step 228, the system again acknowledges that it understood, and provides some additional helpful information. The system also volunteers a suggestion, in case the person needs the prescription sooner: “You're all set. You should receive your prescription in 3 to 5 business days. Do you need to expedite it for faster delivery?” Paragraphs 0090-0096).
Wang ‘689 does not expressly disclose an artificial intelligence (“AI”) agent; Wang ‘689 instead discloses a Virtual Personal Assistant 150. Paragraph 0061 of Wang ‘689 teaches that the Virtual Personal Assistant 150 can use speech recognition to interpret spoken words and may use artificial intelligence to determine the speaker's intent, that is, what the speaker wants from the device. The virtual personal assistant may pass this information to the device to be acted upon. One of ordinary skill in the art would understand that the artificial intelligence (“AI”) agent of Applicant’s specification is synonymous with the virtual personal assistant of Wang ‘689. Nevertheless, Examiner cites Deole ‘947 as further evidence that the artificial intelligence (“AI”) agent is synonymous with the Virtual Personal Assistant 150 of Wang ‘689.
Deole ‘947 discloses an agent virtual assistant, a customer virtual assistant and a contact virtual assistant (i.e. Contact center virtual assistants can understand the conversation between a customer and the contact center agent to provide suggestions to the contact center agent or even do some background work automatically on behalf of the contact center agent. An agent virtual assistant is instantiated. The agent virtual assistant works on behalf of a contact center agent. A message is sent to the application in customer communication endpoint to instantiate a customer virtual assistant. A second communication session is established between the agent virtual assistant and the customer virtual assistant. Paragraphs 0003-0004).
Wang ‘689 and Deole ‘947 are combinable because they are from the same field of endeavor, speech systems (Deole ‘947 at “Field”).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the speech system taught by Wang ‘689 by adding an agent virtual assistant, a customer virtual assistant and a contact virtual assistant as taught by Deole ‘947. The motivation for doing so would have been that virtual assistants have the ability to provide information to a contact center agent in a simplified manner that provides the best information available. Therefore, it would have been obvious to combine Wang ‘689 with Deole ‘947 to obtain the invention as specified.
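Independent claim 9 differs from claim 1 in that the first determining step establishes the absence of a valid mapping anywhere in the backend database, rather than its presence. A one-function hypothetical sketch makes the distinction concrete; the example intents are illustrative only.

```python
# Hypothetical sketch of claim 9's determining step: establishing the
# absence of a first valid mapping associated with the identified intent
# at any location within the backend database.

def mapping_is_absent(backend_db: dict, intent: str) -> bool:
    return intent not in backend_db

backend_db = {"refill_prescription": ["confirm patient", "ship refill"]}
print(mapping_is_absent(backend_db, "cancel_order"))         # True: no mapping exists
print(mapping_is_absent(backend_db, "refill_prescription"))  # False: a mapping exists
```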
Regarding claim 13; Wang ‘689 discloses wherein the context further comprises grammar and syntax of the query, a language used by the human caller (i.e. Syntax rules 3474, grammar rules 3484, and statistical models 3486 implemented in the processing language 3882 have been translated, using a machine translation 3416 engine, into the input language 3480. Paragraph 0509),
a pace that the human caller was speaking, an amount of words the human caller used and a tone of the human caller (Fig. 3, Step 322 i.e. At step 322, the person responds, “Yes . . . yes, it's . . . it's for me. I need a refill.” The system detects that the person's speaking rate is faster, and that the person's tone of voice indicates mild frustration. Paragraphs 0098-0101).
Regarding claim 17; Wang ‘689 discloses wherein: there is a plurality of valid mappings stored in the backend database (i.e. The interpreter API 608 may further leverage databases and APIs to the databases, provided by the server framework 610. In this example, the server framework 610 includes a database 620, which can be used to store a domain ontology 622, a user dialog history 624, and various sources 626 or entities. Paragraph 0167),
and each of the plurality of valid mappings is associated with a respective one of a plurality of intents of the query (i.e. The interpreter API 608 may further leverage databases and APIs to the databases, provided by the server framework 610. Using the server framework 610 and/or the interpreter rules 604, the interpreter API 608 can produce a final intent 640. Paragraph 0167).
Regarding claim 19; Wang ‘689 discloses a method for an artificial intelligence (“AI”) agent answering a call within a voice AI container (Fig. 1, Virtual Personal Assistant 150 i.e. The virtual personal assistant can use speech recognition to interpret spoken words and may use artificial intelligence to determine the speaker's intent, that is, what the speaker wants from the device. Paragraph 0061),
the method comprising:
receiving, at the voice AI container in a first receiving step (Fig. 2, Step 210), the call from a human caller (Fig. 1, Person 100), the call comprising a query (i.e. First, at step 210, the person says, “I'd like to refill a prescription.” Paragraph 0091);
parsing, by the AI agent, the query to identify context associated with the query (i.e. For instance the natural language understanding system may apply a rule-based parser and/or a statistical parser to determine, based on the verbal context, the likely intended meaning of words or phrases that have multiple possible definitions (e.g., the word “pop” could mean that something has broken, may refer to a carbonated beverage, or may be the nickname of a person, depending on the context). Paragraph 0172);
identifying, by the AI agent in a first identifying step using the context, an intent of the query (i.e. The interpreter 1016 may apply syntactic, grammatical, and/or semantic rules to the natural dialog input, in order to parse and/or annotate the input to better understand the person's intended meaning. Paragraph 0169);
and determining, by the AI agent in a first determining step, whether a first valid mapping stored within a backend database (Fig. 1, Understanding System 152) is associated with the intent of the query (i.e. The understanding 152 system attempts to understand the person's 100 intent and/or emotional state. Paragraph 0086),
the backend database being located within the voice AI container (i.e. Fig. 1 shows wherein the Understanding System 152 is located within the Virtual Personal Assistant 150.),
wherein the context comprises a content of the query and at least one of: a time the call was placed, a location from which the call was placed, a history of previous calls placed, grammar and syntax of the query, a language which was used by the human caller, a pace that the human caller was speaking, an amount of words the human caller used for the query and a tone of the human caller (i.e. The syntactic rules 3434 used by the syntactic parser 3430 and the grammar rules 3464 and statistical models 3466 used by the natural language parser 3460 can be ported from syntax rules 3474, grammar rules 3484, and statistical models 3486 implemented in a processing language 3882. As discussed above, the processing language 3882 can be a natural language used by a device such as the virtual personal assistant 3400 for internal processing of user input. In the illustrated example, syntax rules 3474, grammar rules 3484, and statistical models 3486 implemented in the processing language 3882 have been translated, using a machine translation 3416 engine, into the input language 3480. Paragraph 0509)
and wherein the first valid mapping comprises a set of steps taken to respond to the intent of the query (Fig. 2, System Audio Response 204 “Steps 212, 216, 220, 224, 228 & 232” i.e. At step 226, the person responds, “Yes, that's fine.” At step 228, the system again acknowledges that it understood, and provides some additional helpful information. The system also volunteers a suggestion, in case the person needs the prescription sooner: “You're all set. You should receive your prescription in 3 to 5 business days. Do you need to expedite it for faster delivery?” Paragraphs 0090-0096).
Wang ‘689 does not expressly disclose an artificial intelligence (“AI”) agent; Wang ‘689 instead discloses a Virtual Personal Assistant 150. Paragraph 0061 of Wang ‘689 teaches that the Virtual Personal Assistant 150 can use speech recognition to interpret spoken words and may use artificial intelligence to determine the speaker's intent, that is, what the speaker wants from the device. The virtual personal assistant may pass this information to the device to be acted upon. One of ordinary skill in the art would understand that the artificial intelligence (“AI”) agent of Applicant’s specification is synonymous with the virtual personal assistant of Wang ‘689. Nevertheless, Examiner cites Deole ‘947 as further evidence that the artificial intelligence (“AI”) agent is synonymous with the Virtual Personal Assistant 150 of Wang ‘689.
Deole ‘947 discloses an agent virtual assistant, a customer virtual assistant and a contact virtual assistant (i.e. Contact center virtual assistants can understand the conversation between a customer and the contact center agent to provide suggestions to the contact center agent or even do some background work automatically on behalf of the contact center agent. An agent virtual assistant is instantiated. The agent virtual assistant works on behalf of a contact center agent. A message is sent to the application in customer communication endpoint to instantiate a customer virtual assistant. A second communication session is established between the agent virtual assistant and the customer virtual assistant. Paragraphs 0003-0004).
Wang ‘689 and Deole ‘947 are combinable because they are from the same field of endeavor, speech systems (Deole ‘947 at “Field”).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the speech system taught by Wang ‘689 by adding an agent virtual assistant, a customer virtual assistant and a contact virtual assistant as taught by Deole ‘947. The motivation for doing so would have been that virtual assistants have the ability to provide information to a contact center agent in a simplified manner that provides the best information available. Therefore, it would have been obvious to combine Wang ‘689 with Deole ‘947 to obtain the invention as specified.
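Claim 19 recites the broadest context: the content of the query plus at least one of eight enumerated signals. A hypothetical container type illustrates how such a context could be represented; the field names are illustrative choices, not Applicant's.

```python
# Hypothetical sketch of the claim 19 context: the content of the query
# plus at least one of the enumerated optional signals.

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class CallContext:
    content: str                           # content of the query (always present)
    time_placed: Optional[str] = None      # time the call was placed
    location: Optional[str] = None         # location from which the call was placed
    call_history: List[str] = field(default_factory=list)  # previous calls placed
    grammar_syntax: Optional[str] = None   # grammar and syntax of the query
    language: Optional[str] = None         # language used by the human caller
    pace_wpm: Optional[float] = None       # pace the human caller was speaking
    word_count: Optional[int] = None       # amount of words used for the query
    tone: Optional[str] = None             # tone of the human caller

ctx = CallContext(content="I'd like to refill a prescription.",
                  language="en", pace_wpm=145.0, tone="neutral")
print(ctx)
```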
Allowable Subject Matter
1. Claims 7, 10-12, 14-16, 18 & 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
2. Claims 11, 12, 14 & 15 depend from objected-to claim 10. Therefore, by virtue of their dependency, Claims 11, 12, 14 & 15 are also indicated as containing allowable subject matter.
Examiner's Statement of Reasons for Allowance
The cited reference (Wang ‘689) teaches systems, computer-implemented methods, and computer-program products for a multi-lingual device capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language. Input in the computer-based language can be processed, and the multi-lingual device can take an action based on the result of the processing.
The cited reference (Deole ‘947) teaches a request to establish a first communication session with a contact center is received. For example, a customer may make a voice call to the contact center. A determination is made that the request to establish the communication session is from an application in a customer communication endpoint. An agent virtual assistant is instantiated. The agent virtual assistant works on behalf of a contact center agent. A message is sent to the application in a customer communication endpoint to instantiate a customer virtual assistant. A second communication session is established between the agent virtual assistant and the customer virtual assistant. This allows the contact center agent and the customer to have a better interaction using a simplified graphical user interface that provides the best available information.
The cited references fail to disclose wherein each of the plurality of valid mappings are created by the following steps: identifying, by a machine learning agent located within the voice AI container in a third identifying step, the respective one of the plurality of intents of the query; navigating, by the machine learning agent in a second navigating step, an agent screen, located within the voice AI container, to gather information to satisfy the respective one of the plurality of intents of the query; capturing agent screen navigation data during the second navigating step; associating the agent screen navigation data with the respective one of the plurality of intents of the query; creating, in a second creating step, a respective automated screen navigation script using the agent screen navigation data and an associated intent of the query; creating, in a third creating step, the respective one of the plurality of valid mappings using the respective automated screen navigation script and the associated intent of the query; and storing the respective one of the plurality of valid mappings in the backend database.

The cited references further fail to disclose: identifying, by the AI agent in a second identifying step, a machine learning agent that is actively assigned to the voice AI container; passing, by the AI agent, control of the call to the machine learning agent; and the machine learning agent responding, in a first responding step, to the human caller.

The cited references further fail to disclose wherein the agent voice and speech profile is created by the following steps: recording voice samples from the human agent; sending, in a second sending step, the voice samples to a voice cloning software; creating, in a first creating step, using the voice cloning software, the agent voice and speech profile; associating the agent voice and speech profile with the human agent; and storing the agent voice and speech profile in the agent speech profile database.

The cited references further fail to disclose wherein each of the plurality of valid mappings are created by the following steps: identifying, in a fourth identifying step by a machine learning agent located within the voice AI container, the respective one of the plurality of intents of the query; navigating, in a fourth navigating step by the machine learning agent, an agent screen, located within the voice AI container, to gather information to satisfy the respective one of the plurality of intents of the query; capturing agent screen navigation data during the second navigating step; associating the agent screen navigation data with the respective one of the plurality of intents of the query; creating, in a second creating step, a respective automated screen navigation script using the agent screen navigation data and an associated intent of the query; creating, in a third creating step, the respective one of the plurality of valid mappings using the respective automated screen navigation script and the associated intent of the query; and storing the respective one of the plurality of valid mappings in the backend database.

The cited references further fail to disclose: receiving, by the AI agent in a second receiving step, the first valid mapping from the backend database; loading, by the AI agent in a first loading step, the first valid mapping to an agent screen located within the voice AI container; wherein: the first valid mapping comprises a first automated screen navigation script; and the first loading step comprises loading the first automated screen navigation script on the agent screen and navigating, in a first navigating step, the agent screen using the first automated screen navigation script to gather information required to respond to the intent of the query; identifying, by the AI agent in a second identifying step, a human agent that is actively assigned to the voice AI container; loading, in a second loading step, an agent voice and speech profile associated with the human agent from an agent speech profile database located in the voice AI container; and relaying, by the AI agent, using the agent voice and speech profile associated with the human agent, the information to the human caller.

As a result, and for these reasons, Examiner indicates Claims 7, 10-12, 14-16, 18 & 20 as allowable subject matter.
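The mapping-creation steps indicated above as allowable (a machine learning agent navigating an agent screen, capture of the navigation data, creation of an automated screen navigation script, association with the intent, and storage of the resulting valid mapping) can be summarized in a short hypothetical sketch. Every name below is illustrative; nothing in it is asserted to be Applicant's implementation or the prior art's.

```python
# Hypothetical sketch of the allowable mapping-creation pipeline: capture
# navigation data, create a script, associate it with the intent, and
# store the resulting valid mapping in the backend database.

def create_valid_mapping(ml_agent_actions: list, intent: str, backend_db: dict) -> dict:
    # Capturing step: record agent screen navigation data during navigation.
    navigation_data = [{"action": a, "target": t} for a, t in ml_agent_actions]
    # Second creating step: build an automated screen navigation script
    # from the captured data.
    script = [(d["action"], d["target"]) for d in navigation_data]
    # Third creating step: the valid mapping combines the script with the
    # associated intent of the query.
    mapping = {"intent": intent, "script": script}
    # Storing step: place the valid mapping in the backend database.
    backend_db[intent] = mapping
    return mapping

backend_db = {}
actions = [("open_screen", "patient_lookup"), ("enter_field", "rx_number")]
print(create_valid_mapping(actions, "refill_prescription", backend_db))
```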
Relevant Prior Art References Not Relied Upon
1. London (US 20150142704 A1) - Embodiments of an adaptive virtual intelligent agent ("AVIA") service are disclosed. It may include the functions of a human administrative assistant for an enterprise including customer support, customer relationship management, and fielding incoming caller inquiries. It also has multi-modal applications for the home through interaction with AVIA implemented in the home. It may engage in free-form natural language dialogs. During a dialog, embodiments maintain the context and meaning of the ongoing dialog and provides information and services as needed by the domain of the application. Over time, the service automatically extends its knowledge of the domain (as represented in the Knowledge Tree Graphs) through interaction with external resources. Embodiments can intelligently understand and converse with users using free-form speech without pre-programmed deterministic sequences of questions and answers, can dynamically determine what it needs to know to converse meaningfully with users, and knows how to obtain information it needs.
2. Dunn et al. (US 20200274969 A1) - The present disclosure relates generally to providing an intent-driven contact center. The contact center according to some embodiments analyzes intents to determine to which device or agent to route a communication. The analyzed intent information can also be used to formulate reports and analyze the accuracy of the identified intents with respect to the received communication.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARCUS T. RILEY, ESQ., whose telephone number is (571) 270-1581. The examiner can normally be reached M-F, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hai Phan, can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MARCUS T. RILEY, ESQ.
Primary Examiner
Art Unit 2654
/MARCUS T RILEY/Primary Examiner, Art Unit 2654