DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This office action is sent in response to Applicant's communication received on 8/7/24 for application number 18/797,385. The Office hereby acknowledges receipt of the following items placed of record in the file: Specification, Abstract, Oath/Declaration, and Claims.
Claims 1-20 are presented for examination.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/6/24 was filed after the filing date of the application on 8/7/24. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification
The disclosure is objected to because of the following informalities:
Paragraph 41 contains the typo "/to".
Paragraph 43 recites "By activating a higher resource cost, AI temporarily in this type of situation to address"; the meaning of this sentence is unclear.
Appropriate correction is required.
Claim Objections
Claims 8 and 16 are objected to because of the following informalities:
Regarding claim 8, lines 3-4, the phrase "a capability of the first AI model" is recited; however, in claim 1, lines 5-6, the same phrase was used. It appears that the same "capability of the first AI model" is being referred to. Therefore, the phrase "a capability of the first AI model" in claim 8 should be "the capability of the first AI model".
Regarding claim 16, in line 5, the word "thee" appears to be a typo for "the".
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 8 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 8 recites the limitation "the decision received from the human customer service representative" in line 6. There is insufficient antecedent basis for this limitation in the claim.
Claim 17 recites the limitation "the predetermined criteria" in line 1. There is insufficient antecedent basis for this limitation in the claim. It appears that claim 17 should depend on claim 15.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 9 includes:
A method, comprising: (a) receiving a first request during a customer service session from a customer; (b) providing a first response to the first request generated by a first AI model;
(c) determining whether the first response to the first request falls below a performance threshold; and in response to a determination that the first response to the first request falls below the performance threshold, activating a second AI model with more capability than the first AI model;
(d) receiving a second request during the customer service session; and
(e) providing a second response to the second request generated by the second AI model.
Step (a) is a mental step, since a human can receive a request.
Step (b) can be performed by a human, as a human can provide a response based on a request/question.
Step (c) can be performed by a human, as a human can determine whether a response is valid or not and choose another template (model) to decide on an answer.
Steps (d) and (e) can be performed in the human mind, as a human can pick a better template to respond to a query based on past queries.
Additional elements: the first AI model and the second AI model.
Claim 9 is ineligible as it is directed to an abstract idea without significantly more.
Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claim is directed to a method, which falls within one of the statutory categories of invention. (Step 1: YES).
Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim "recites" a judicial exception when the judicial exception is "set forth" or "described" in the claim. As discussed above, under the broadest reasonable interpretation, steps (a)-(e) fall within the mental process grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. Step (a) is a mental step, since a human can receive a request. Step (b) can be performed by a human, as a human can provide a response based on a request/question. Step (c) can be performed by a human, as a human can decide which response is valid based on the set of responses provided in multiple sets of papers/documents. Steps (d) and (e) can be performed in the human mind, as a human can pick a document that was able to provide a response to the first question/request. Hence, these steps can be performed by a human using "observation, evaluation, judgment, [and] opinion," because they involve making determinations and identifications, which are mental tasks humans routinely do, and thus can practically be performed in the human mind. In re Killian, 45 F.4th 1373, 1379 (Fed. Cir. 2022). Therefore, these limitations are considered together as an abstract idea for further analysis. (Step 2A, Prong One: YES).
Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception.
The claim requires the additional elements of a first AI model and activating a second AI model. AI models are generic computer components and provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Additionally, activating a second AI model is an insignificant extra-solution activity. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. (Step 2A: YES).
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. At Step 2A, Prong Two, the additional elements of a first AI model and activating a second AI model were found to represent no more than mere instructions to apply the judicial exception on a computer using generic computer components. The analysis under Step 2A, Prong Two is carried through to Step 2B. Further, activating a model was found to be insignificant extra-solution activity. However, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the re-evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). Activation of a model is implicit to obtaining a response from said model. Therefore, this limitation remains insignificant extra-solution activity even upon reconsideration and does not amount to significantly more. Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, and therefore do not provide an inventive concept (Step 2B: NO). The claim is not eligible.
Regarding claim 1, arguments analogous to those for claim 9 are applicable. In addition, a human can determine the complexity of a request, and hence this is a mental step. The analysis under Step 2A, Prong Two and Step 2B remains the same as for claim 9.
Claim 2 further limits the selection step, but is still directed to a process that can be achieved in the mind.
Claim 3 adds the limitations of determining approval and initiating a conference call, both of which are directed to mental processes, as under the broadest reasonable interpretation of these claims, they cover performance of the limitations in the mind.
Claim 4 adds the limitations of a "text-based session" and "one or more chat windows". Under the broadest reasonable interpretation of these claims, they cover performance of the limitations in the mind with pen and paper.
Regarding Claims 5-8, the additional elements are directed to abstract ideas; hence the analysis is analogous to that of Claim 9.
Regarding Claims 10-11, the additional elements further limit the determination step. A human can still make a decision regarding a performance threshold using past responses written down and can detect a customer’s satisfaction level. Hence these additional elements are directed to abstract ideas.
Regarding Claim 12, under the broadest reasonable interpretation of voice-based session, humans can conduct a voice-based session and a transition between templates can be performed such that the customer cannot notice. Hence these additional elements are directed to abstract ideas.
Regarding Claims 13-16, selecting a human from a list is directed to an abstract idea and similarly, a human can provide a summary.
Regarding Claim 17, analysis analogous to that of Claim 9 is applicable.
Regarding Claim 18, analysis analogous to that of Claim 3 is applicable.
Regarding Claim 19, analysis analogous to that of Claim 4 is applicable.
Regarding Claim 20, analysis analogous to that of Claim 12 is applicable.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4 and 6 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by AHMADIDANESHASHTIANI (US 20230377567), hereinafter referred to as Pub0377567.
Regarding Claim 1, Pub0377567 teaches a method, comprising:
receiving a first request from a customer during a customer service session (the automated conversation orchestration system receives utterance string inputs, which may be provided by a user, Para 0016);
providing a first response (responses of a domain agent, Para 0269) to the first request generated by a first AI model (The client starts by being routed to Broker A, Para 0278);
receiving a second request during the customer service session (Para 0016);
determining whether a complexity (domain switch, Para 0272) of the second request exceeds a capability of the first AI model to respond; and in response to determining that the complexity of the second request exceeds the capability of the first AI model to respond (Broker A then begins to experience fallback, indicating that the client is speaking about something outside its domain, Para 0279), selecting a second AI model determined to be capable of responding to the second request (The routing broker analyzes the failing query and understands that the client should be routed to Broker B, Para 0280), activating the second AI model (uses an introductory utterance template for Broker B to initially populate its context from the values in Broker A, Para 0280), and
providing a second response to the second request generated by the second AI model (responses of a domain agent, Para 0269).
Regarding Claim 2, Pub0377567 teaches the second AI model is selected based on the second request and previous requests and interactions made during the current customer service session (The routing broker analyzes the failing query, Para 0280) and during previous customer service sessions with the customer (The routing (introductory) agent may be configured to use a wide range of capabilities including but not limited to 1) an NLU agent, 2) historic conversations, Para 0267).
Regarding Claim 3, Pub0377567 teaches receiving a third request during the customer service session (the automated conversation orchestration system receives utterance string inputs, which may be provided by a user, Para 0016); determining that the third request needs approval by or discussion with a human customer service representative (requires step-up authorization for an action, Para 0298); and initiating a conference call (multi-party conversations, Para 0297) between the human customer service representative, the second AI model and a customer making the third request (broker transitions User A onto a call with Employee C, Para 0298. Each message sent to Employee C is also sent to a broker and analyzed, providing real-time information related to User A's queries to Employee C. Para 0299).
Regarding Claim 4, Pub0377567 teaches the customer service session is a text-based customer service session occurring in one or more chat windows (Fig 8-18).
Regarding Claim 6, Pub0377567 teaches in response to determining that the complexity of the second request does not exceed the capability of the first AI model to respond, providing a second response to the second request generated by the first AI model (This solves the non-domain conversation spaces as the introductory agent can be equipped for small talk, Para 0264).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 5 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Pub0377567 (US 20230377567 A1) in view of Matula (US 20210360106 A1).
Regarding Claim 5, Pub0377567 teaches receiving a third request during the customer service session (As part of a conversation, a user may provide one or more user inputs 110 to digital assistant, Para 0033);
Pub0377567 does not teach determining that the third request needs approval by a human customer service representative; and requesting approval from the human customer service representative;
receiving a decision regarding the third request from the human customer service representative; and
providing a third response to the third request generated by the second AI model that is based on the decision received from the human customer service representative.
However, Matula teaches determining that the third request needs approval by a human customer service representative; and requesting approval from the human customer service representative (where the chatbot 152 generates a suggested reply to a message, but a human agent 172 is required to approve or edit the message, Para 59);
receiving a decision regarding the third request from the human customer service representative; and
providing a third response to the third request generated by the second AI model that is based on the decision received from the human customer service representative (Fig 10, 1020).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the concept of human approval from Matula into the method of Pub0377567 because it would improve learning and the overall performance of the chatbot (Matula 0097).
Regarding Claim 8, Pub0377567 teaches receiving a third request during the customer service session (the automated conversation orchestration system receives utterance string inputs, which may be provided by a user, Para 0016); determining that a complexity of the third request does not exceed a capability of the first AI model to respond (The routing broker handles this by scanning the responses of a domain agent for fallbacks, Para 0269);
Matula teaches providing a third response to the third request generated by the second AI model that is based on the decision received from the human customer service representative (where the chatbot 152 generates a suggested reply to a message, but a human agent 172 is required to approve or edit the message, Para 59; Fig 10, 1020).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pub0377567, to further include the concept of human approval from Matula to improve learning and the overall performance of the chatbot (Matula 0097).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Pub0377567 (US 20230377567 A1) in view of Radanovic (LivePerson).
Pub0377567 does not teach that the AI models are large language AI models.
However, Radanovic teaches bots using large language models (LivePerson's voice bots solve synchronous communication issues by providing a real-time, conversational experience to customers using natural language processing (NLP) technology, automatic speech recognition, and the power of large language models (LLMs)).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pub0377567, to further include the concept of large language models from Radanovic to receive immediate responses from the voice bot, transition to an asynchronous messaging conversation, and even seamlessly escalate to a human agent when that personal touch is needed (Radanovic).
Claims 9-10, 12, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pub0377567 (US 20230377567 A1) in view of Kale (US 20200242511 A1).
Regarding Claim 9, Pub0377567 teaches a method, comprising: receiving a first request from a customer during a customer service session (the automated conversation orchestration system receives utterance string inputs, which may be provided by a user, Para 0016); providing a first response (responses of a domain agent, Para 0269) to the first request generated by a first AI model (The client starts by being routed to Broker A, Para 0278); activating a second AI model with more capability than the first AI model (introductory agent can be equipped for small talk before delivering the user to a domain agent, Para 0264; uses an introductory utterance template for Broker B to initially populate its context from the values in Broker A, Para 0280); receiving a second request (the automated conversation orchestration system receives utterance string inputs, which may be provided by a user, Para 0016) during the customer service session; and providing a second response to the second request generated by the second AI model (responses of a domain agent, Para 0269).
Pub0377567 does not teach determining whether the first response to the first request falls below a performance threshold;
However, Kale teaches determining whether the first response to the first request falls below a performance threshold (it can be determined whether the calculated accuracy metric meets an accuracy criteria, Para 56).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pub0377567, to further include the concept of determining a threshold for a response from Kale to reduce error in responses (Para 0056, Kale).
Regarding Claim 10, Kale teaches the performance threshold is based on an accuracy of the first response to the first request (it can be determined whether the calculated accuracy metric meets an accuracy criteria, Para 56).
Regarding Claim 12, Pub0377567 teaches the customer service session is a voice-based session (chatbots can interact with users through a computerized chat session (where each message is a new utterance string), or through voice (e.g., using a voice-to-text mechanism to convert the voice instructions into utterance strings), Para 0126) and the transition between the first AI model and the second AI model is performed such that the transition from the first AI model to the second AI model is unnoticed by the customer (the user experience remains consistent as the user is not aware of the routing changes in the backend during the front-end conversation flow, Para 0012).
Regarding Claim 19, Pub0377567 teaches the customer service session is a text-based customer service session occurring in one or more chat windows (Fig 8-18).
Regarding Claim 20, Pub0377567 teaches the customer service session is a voice-based customer service session (chatbots can interact with users through a computerized chat session (where each message is a new utterance string), or through voice (e.g., using a voice-to-text mechanism to convert the voice instructions into utterance strings), Para 0126).
Claims 11, 13-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pub0377567 (US 20230377567 A1) in view of Kale (US 20200242511 A1), as applied to claims 9-10, 12, and 19-20 above, and further in view of Mazza (US 20190182382 A1).
Regarding Claim 11, Kale further teaches the performance threshold is also based on accuracy of responses made before the first request (calculating can be iterated using a variable number of data predictions, where the variable number of data predictions is adjusted based on an action taken during a previous iteration, Para 4).
Pub0377567 and Kale fail to teach that the performance threshold is also based on a detected customer service satisfaction level.
However, Mazza teaches the performance threshold is also based on a detected customer service satisfaction level (selection of which edge to remove is based on customer satisfaction information (e.g., a net promoter score) after the conclusion of the interaction, Para 0150).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pub0377567 and Kale, to further include the concept of using customer satisfaction information from Mazza because it would optimize conversation flow (Mazza 0147).
Regarding Claim 13, Pub0377567 and Mazza teach receiving a third request during the customer service session (the automated conversation orchestration system receives utterance string inputs, which may be provided by a user, Pub0377567 Para 0016); determining that the third request needs approval by or discussion with a human customer service representative (whether a domain switch makes sense. It may redirect the user to connect to a human… Pub0377567 Para 0272); and selecting a human customer service representative from a list of available customer service representatives (store one or more databases relating to agent data (e.g. agent profiles, schedules, etc.), Mazza 0069) based on a compatibility (selection of an appropriate agent for routing an inbound interaction may be based, for example, on a routing strategy employed by the routing server 124, and further based on information about agent availability, skills, Mazza 0063) between the customer and the human customer service representatives from the list of available customer service representatives.
Regarding Claim 14, Pub0377567 and Mazza teach the compatibility between the customer and the human customer service representatives in the list of available customer service representatives is determined based on the customer service session (selection of an appropriate agent for routing an inbound interaction may be based, for example, on a routing strategy employed by the routing server 124, and further based on information about agent availability, skills, Mazza 0063) and data from previous customer service sessions associated with the customer and with the available customer service representatives (bias selections of agents based on prior user behavior, Pub0377567 Para 0014).
Regarding Claim 15, Pub0377567 teaches the third request needs approval by or discussion with the human customer service representative when the third request meets a predetermined criteria for escalation to the human customer service representative (whether a domain switch makes sense. It may redirect the user to connect to a human… Pub0377567 Para 0272).
Regarding Claim 18, Pub0377567 and Mazza teach receiving a third request during the customer service session (the automated conversation orchestration system receives utterance string inputs, which may be provided by a user, Pub0377567 Para 0016); determining that the third request needs approval by or discussion with a human customer service representative (whether a domain switch makes sense. It may redirect the user to connect to a human… Pub0377567 Para 0272); selecting a human customer service representative from a list of human customer service representatives (store one or more databases relating to agent data (e.g. agent profiles, schedules, etc.), Mazza 0069) based on a compatibility between the customer and the human customer service representatives from the list of human customer service representatives (selection of an appropriate agent for routing an inbound interaction may be based, for example, on a routing strategy employed by the routing server 124, and further based on information about agent availability, skills, Mazza 0063); and scheduling a call with the selected customer service representative in response to the selected customer service representative being unavailable to speak immediately with the customer (Can we schedule a call, Mazza 0149).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Pub0377567 (US 20230377567 A1) in view of Kale (US 20200242511 A1), in view of Mazza (US 20190182382 A1), and further in view of Caldwell (US 20130006973 A1).
Pub0377567 teaches initiating a conference call between the customer, the selected human customer service representative and the second AI model (multi-party conversations, Para 0297); and also teaches showing a conversation summary (conversation summary is shown based on the recent requests, Para 0426).
The combination of Pub0377567, Kale and Mazza does not teach providing a summary of the progress in the customer service session generated by the second AI model to the customer and the selected human customer service representative during the conference call.
However, Caldwell (US 20130006973 A1) teaches providing a summary of the progress in the customer service session generated by the second AI model to the customer and the selected human customer service representative during the conference call (automatically summarizing electronic conversation threads and to providing a summary of one or more electronic conversation thread items in a user interface component for review by one or more users associated with the conversation thread, Para 13).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pub0377567, Kale and Mazza, to further include the concept of summarizing a conversation from Caldwell to allow the users a quick and easy understanding of the nature and relevance of the conversation thread they are reviewing (Para 0029, Caldwell).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Pub0377567 (US 20230377567 A1) in view of Kale (US 20200242511 A1), and further in view of Higgy (US 20130173317 A1).
Pub0377567 and Kale do not teach the predetermined criteria is a request to schedule an event for more than 50 guests.
However, Higgy (US 20130173317 A1) teaches the predetermined criteria is a request to schedule an event for more than 50 guests (a request to book a venue for an event, facilitating an agreement on a threshold relating to tentative ticket reservations that must be achieved, Para 10).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pub0377567 and Kale, to further include the concept of a criterion for booking an event with a guest threshold from Higgy to alleviate the risk of booking an undersold event (Para 0009, Higgy).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARJUN R SWAMY whose telephone number is (571)272-9763. The examiner can normally be reached Mon-Fri 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan can be reached at (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Arjun Swamy/Examiner, Art Unit 2654
/HAI PHAN/Supervisory Patent Examiner, Art Unit 2654