DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on 08/05/2025 has been entered.
3. Claims 1-24 have been canceled and new claims 25-42 have been added. Accordingly, claims 25-42 are pending in this application.
Claim Rejections - 35 USC § 101
4. Non-Statutory (Directed to a Judicial Exception without an Inventive Concept/Significantly More)
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
● Claims 25-42 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
(Step 1)
The current claims fall within one of the four statutory categories of invention (MPEP 2106.03).
(Step 2A) → Prong One:
Considering claims 25, 31 and 37 as representative claims, the current claims recite a judicial exception, namely an abstract idea, as shown below:
— regarding claim 25, the following claimed limitations recite an abstract idea: [collect] transactions and corresponding transcripts from calls; a lexicon object comprising a current lexicon of words or phrases; a score template that references the lexicon object and includes parameters for determining a score based on matches to the current lexicon; a question object that specifies an evaluation question; an answer object that specifies an answer space for the evaluation question, the answer space comprising a plurality of acceptable answers, each of the plurality of acceptable answers having an associated score range; receive a definition of transactions to which the score template is to apply; batch evaluate a plurality of transactions based on the definition of transactions, wherein batch evaluating the plurality of transactions comprises: for each of the plurality of transactions, parsing a respective transcript to identify matches to the current lexicon; assigning a respective score to each of the plurality of transactions based on the matches to the current lexicon determined for the transaction; and [record], for each of the plurality of transactions, the assigned respective score in association with the transaction and the score template; a request to evaluate a transaction from the plurality of transactions: access the evaluation object; based on the reference to the question object, select the question object; determine the answer space for the evaluation question from the answer object; determine a score range for each of the plurality of acceptable answers from the answer object; determine the respective score assigned to the transaction; select, as a pre-selected first answer to the evaluation question, a first acceptable answer from the plurality of acceptable answers according to the respective score assigned to the transaction and the score range associated with the first acceptable answer; [draft] a page portion for the evaluation question, the 
page portion comprising the evaluation question and an answer [option] for an evaluator to answer to the evaluation question using the answer space, wherein [drafting] the page portion comprises presetting the answer [option] to the first answer; and [presenting] to the evaluator the page portion; record a respective evaluation answer to the evaluation question and the first answer selected for the evaluation question for each of the plurality of transactions; and determine an accuracy of the current lexicon based on evaluating, for each of a set of selected transactions from the plurality of transactions, the first answer selected for that selected transaction against the respective evaluation answer recorded for that selected transaction; select a first subset of transactions from the plurality of transactions; convert the corresponding transcripts for a first subset of transactions from the plurality of transactions into a first set of term frequency-inverse document frequency vectors, the first subset of transactions having a first same evaluation answer recorded for the evaluation question; identify from the first set of term frequency-inverse document frequency vectors, a first set of candidate words or phrases having greater than a threshold term frequency-inverse document frequency; [draft] a second set of candidate words or phrases from the first set of candidate words or phrases, wherein [drafting] the second set of candidate words or phrases comprises eliminating from the set of candidate words or phrases, words or phrases in the current lexicon; [draft] an updated lexicon that includes the second set of candidate words or phrases; using the updated lexicon, answer the evaluation question for a set of test transactions from the plurality of transactions to [create] a revised answer for each test transaction of the set of test transactions; determine an accuracy of the updated lexicon based on evaluating, for each test transaction from the set of test 
transactions, the revised answer for that test transaction against the respective evaluation answer recorded for that test transaction; and based on a determination that the updated lexicon more accurately answers the evaluation question, retune the lexicon object to use the updated lexicon as the current lexicon to increase the accuracy of the answers for subsequent evaluation requests.
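For illustration only, the lexicon-retuning steps recited above (converting transcripts into term frequency-inverse document frequency vectors, identifying words above a threshold, and eliminating words already in the current lexicon) reduce to routine text processing. The function names, toy transcripts, and threshold in the following sketch are the examiner's hypothetical and are not claim language:

```python
from collections import Counter
import math

def tfidf_vectors(transcripts):
    # Compute a term frequency-inverse document frequency value for each
    # word of each transcript (tf = count/length, idf = log(N/df)).
    n = len(transcripts)
    docs = [t.lower().split() for t in transcripts]
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf})
    return vectors

def candidate_words(transcripts, current_lexicon, threshold):
    # Collect words whose TF-IDF exceeds the threshold in any transcript,
    # then eliminate words already present in the current lexicon.
    candidates = set()
    for vec in tfidf_vectors(transcripts):
        candidates.update(w for w, score in vec.items() if score > threshold)
    return candidates - set(current_lexicon)
```

Nothing in this sketch goes beyond conventional text-processing operations; it merely restates, under the stated assumptions, the recited sequence of steps.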
— regarding claim 31, the following claimed limitations recite an abstract idea: [collect] calls between callers and agents; [create] transcripts of the calls; [collect] a plurality of transactions, each of the plurality of transactions comprising a call and a corresponding transcript; receive a definition of transactions to which a score template applies; batch evaluate the plurality of transactions based on the definition of transactions, wherein batch evaluating the plurality of transactions comprises: [obtain] the score template specified by the definition of transactions; [obtain] a lexicon object associated with the score template, the lexicon object comprising a current lexicon of words or phrases, wherein the score template includes parameters for determining a score based on matches to the current lexicon; for each of the plurality of transactions, parsing the corresponding transcript to identify matches to the current lexicon; assign a respective score to each of the plurality of transactions based on the matches to the current lexicon determined for the transaction; and [record], for each of the plurality of transactions, the assigned respective score in association with the transaction and the template; a request to evaluate a transaction from the plurality of transactions: access an evaluation object; [obtain] a question object referenced by the evaluation object, the question object specifying an evaluation question; [obtain] an answer object associated with the question object, the answer object specifying an answer space for the evaluation question, the answer space comprising a plurality of acceptable answers to the evaluation question and a respective score range for each of the plurality of acceptable answers; determine the respective score assigned to the transaction; select, as a pre-selected first answer to the evaluation question, a first acceptable answer from the plurality of acceptable answers according to the respective score assigned to 
the transaction and the respective score range associated with the first acceptable answer; [draft] a page portion for the evaluation question, the page portion comprising the evaluation question and an answer [option] for an evaluator to answer to the evaluation question using the answer space, wherein [drafting] the page portion comprises presetting the answer [option] to the first answer; and [present] the evaluator [with] the page portion; and record a respective evaluation answer to the evaluation question and the first answer selected for the evaluation question for each of the plurality of transactions; determine an accuracy of the current lexicon based on evaluating, for each transaction in a set of selected transactions from the plurality of transactions, the first answer selected for that selected transaction against the respective evaluation answer recorded for that selected transaction; select a first subset of transactions from the plurality of transactions; convert the corresponding transcripts for a first subset of transactions into a first set of term frequency-inverse document frequency vectors, the first subset of transactions having a first same evaluation answer recorded for the evaluation question; identify from the first set of term frequency-inverse document frequency vectors, a first set of candidate words or phrases having greater than a threshold term frequency-inverse document frequency; [draft] a second set of candidate words or phrases from the first set of words or phrases, wherein [drafting] the second set of candidate words or phrases comprises eliminating from the set of candidate words or phrases, words or phrases in the current lexicon; [draft] an updated lexicon that includes the second set of candidate words or phrases; using the updated lexicon, answer the evaluation question for a set of test transactions from the plurality of transactions to [create] a revised answer for each test transaction of the set of test transactions; 
determine an accuracy of the updated lexicon based on evaluating, for each test transaction from the set of test transactions, the revised answer for that test transaction against the respective evaluation answer recorded for that test transaction; and based on a determination that the updated lexicon more accurately answers the evaluation question, retune the lexicon object to use the updated lexicon as the current lexicon to increase the accuracy of answers for subsequent evaluation requests.
— regarding claim 37, the following claimed limitations recite an abstract idea: receive a definition of transactions to which a score template applies; batch evaluate a plurality of transactions based on the definition of transactions, wherein each of the plurality of transactions comprises a call and a corresponding transcript of the call, wherein batch evaluating the plurality of transactions comprises: [obtain] the score template specified by the definition of transactions; [obtain] a lexicon object associated with the score template, the lexicon object comprising a current lexicon of words or phrases, wherein the score template includes parameters for determining a score based on matches to the current lexicon; for each of the plurality of transactions, parsing the corresponding transcript to identify matches to the current lexicon; assign a respective score to each of the plurality of transactions based on the matches to the current lexicon determined for the transaction; and [record], for each of the plurality of transactions, the assigned respective score in association with the transaction and the score template; a request to evaluate a transaction from the plurality of transactions comprises: access an evaluation object; [obtain] a question object referenced by the evaluation object, the question object specifying an evaluation question; [obtain] an answer object associated with the question object, the answer object specifying an answer space for the evaluation question, the answer space comprising a plurality of acceptable answers to the evaluation question and a respective score range for each of the plurality of acceptable answers; determine the respective score assigned to the transaction; select, as a pre-selected first answer to the evaluation question, a first acceptable answer from the plurality of acceptable answers according to the respective score assigned to the transaction and the respective score range associated with the first acceptable 
answer; [draft] a page portion for the evaluation question, the page portion comprising the evaluation question and an answer [option] for an evaluator to answer to the evaluation question using the answer space, wherein [drafting] the page portion comprises presetting the answer [option] to the first answer; and [present] the evaluator [with] the page portion; and record a respective evaluation answer to the evaluation question and the first answer selected for the evaluation question for each of the plurality of transactions; determine an accuracy of the current lexicon based on evaluating, for each transaction in a set of selected transactions from the plurality of transactions, the first answer selected for that selected transaction against the respective evaluation answer recorded for that selected transaction; convert the corresponding transcripts for a first subset of transactions from the plurality of transactions into a first set of term frequency-inverse document frequency vectors, the first subset of transactions having a first same evaluation answer recorded for the evaluation question; identify from the first set of term frequency-inverse document frequency vectors, a first set of candidate words or phrases having greater than a threshold term frequency-inverse document frequency; [draft] a second set of candidate words or phrases from the first set of words or phrases, wherein [drafting] the second set of candidate words or phrases comprises eliminating from the set of candidate words or phrases, words or phrases in the current lexicon; [draft] an updated lexicon that includes the second set of candidate words or phrases; using the updated lexicon, answer the evaluation question for a set of test transactions from the plurality of transactions to [create] a revised answer for each test transaction of the set of test transactions; determine an accuracy of the updated lexicon based on evaluating, for each test transaction from the set of test 
transactions, the revised answer for that test transaction against the respective evaluation answer recorded for that test transaction; and based on a determination that the updated lexicon more accurately answers the evaluation question, retune the lexicon object to use the updated lexicon as the current lexicon to increase the accuracy of answers for subsequent evaluation requests.
Thus, the limitations identified above recite an abstract idea since the limitations correspond to mental processes and/or certain methods of organizing human activity, which are part of the enumerated groupings of abstract ideas identified according to the current eligibility standard (see MPEP 2106.04(a)).
Note also that, regarding a mental process, a human (e.g., an assistant) can perform the core of the claimed process using at least pen and paper. For instance, assume that an evaluation form comprising one or more questions, which an evaluator/supervisor of a call center uses to evaluate the performance of one or more agents, is already drafted. Then, while evaluating transcripts that relate to interactions between an agent(s) and a customer(s), the assistant identifies the words and/or phrases that the agent used during the interaction; the assistant also compares the agent's words/phrases with the words/phrases in a lexicon in order to determine whether the agent is using words/phrases in the lexicon or new words/phrases, etc.
Accordingly, before presenting the evaluation form to the supervisor, the assistant may tentatively prepopulate the evaluation form with one or more potential answers (e.g., considering the scenario discussed in the specification, [0106], the assistant may prepopulate the answer "No" for the question "Did the agent upsell?", etc.). Upon receiving the evaluation form from the assistant, the supervisor may accept or modify, based on the supervisor's judgment of the agent's interaction, one or more of the prepopulated potential answers (e.g., the supervisor changes the prepopulated answer above to "Yes", etc.). Once the assistant recognizes that the supervisor has modified one or more of the prepopulated potential answers on the evaluation form, the assistant makes one or more updates, such as updating the lexicon by adding one or more new acceptable words/phrases from one or more of the transcripts, or updating one or more of the potential answers to one or more of the questions on the evaluation form, etc.
Besides the discussion above, the specification describes that the disclosed system/method presents one or more answers to a human evaluator who is performing an evaluation task; the evaluator is presented with an evaluation form prepopulated with one or more answers, so that the evaluator uses the form to perform the evaluation task (see [0106], emphasis added):
[0106] FIG. 13 illustrates one embodiment of an evaluation operator interface page 1275 including an evaluation. In this example, the selection of the answer "Yes" is preset for the question "Did the agent use the standard company greeting?" and the selection of the answer "no" is preset for the question "Did the agent upsell?" when the evaluation is sent to the evaluator. These answers are prepopulated based on the autoscores assigned to the transaction by autoscore templates associated with the questions. It can be noted, however, the evaluator may choose a different answer than the prepopulated autoscore auto answer. Thus, evaluation answer to the question submitted for an evaluation may be the auto answer pre-selected for the evaluation or another answer.
Thus, given the above context, the current claims further correspond to managing personal behavior. Particularly, the human evaluator is presented with information—namely an evaluation that comprises one or more questions, each of the one or more questions prepopulated with one or more potential answers; so that the evaluator uses the above prepopulated form to evaluate the performance of an agent(s), etc.
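For illustration only, the prepopulation discussed above reduces to counting lexicon matches in a transcript and mapping the resulting score onto the answer ranges of the answer space. The names, score ranges, and sample data in the following sketch are the examiner's hypothetical and are not claim language:

```python
def auto_answer(transcript, lexicon, answer_ranges):
    # Count how many words of the transcript match the current lexicon,
    # then return the acceptable answer whose score range covers the count.
    words = transcript.lower().split()
    score = sum(1 for w in words if w in lexicon)
    for answer, (low, high) in answer_ranges.items():
        if low <= score <= high:
            return answer
    return None
```

Under these assumptions, a transcript containing the hypothetical lexicon terms "upgrade" and "deal" would be prepopulated with "Yes" for the corresponding question, which the human evaluator may then accept or modify.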
(Step 2A) → Prong Two:
The current claims do recite additional elements, wherein: (i) per each of claims 31 to 36, a computer-implemented method that comprises a database; (ii) per each of claims 37 to 42, a computer program product in the form of a non-transitory computer readable medium; and (iii) per each of claims 25 to 30, a call routing system that comprises an automatic call distributor, voice instruments, a plurality of computers, and a server/database, etc., are utilized to facilitate the steps/functions regarding (e.g., see claim 25 as the representative claim): collecting and/or storing information/documents related to a call center (e.g., “store transactions, the transactions comprising call recordings stored by the recording server and corresponding transcripts generated from the call records by the speech-to-text converter . . . an evaluation object referencing the question object”); analyzing the collected/stored information using one or more algorithms (e.g., “receive a definition of transactions to which the auto-score template is to apply; batch evaluate a plurality of transactions . . . storing in the database, for each of the plurality of transactions, the assigned respective auto-score in association with the transaction and the auto-score template”); generating, in response to a service request received from a user/evaluator, one or more results in the form of one or more suggested answers to a question on a form (e.g., “service requests to evaluate transactions from the plurality of transactions . . . determining the answer space for the evaluation question from the answer object . . . 
selecting, as a pre-selected, first auto answer to the evaluation question, a first acceptable answer from the plurality of acceptable answers according to the respective auto-score assigned to the transaction and the auto-score range associated with the first acceptable answer; and generating a page portion for the evaluation question, the page portion comprising the evaluation question and an answer control for an evaluator to answer to the evaluation question using the answer space, wherein generating the page portion comprises presetting the answer control to the first auto answer; and serving an evaluation operator interface to the evaluator, the evaluator operator interface comprising the page portion”); processing, using an algorithm(s), one or more documents in the database, based on an input/response received from the user; and further generating one or more updates (e.g., “an AI engine executable to: determine an accuracy of the current lexicon based on evaluating, for each of a set of selected transactions from the plurality of transactions, the first auto answer selected for that selected transaction against the respective evaluation answer recorded for that selected transaction . . . generating a second set of candidate words or phrases from the first set of candidate words or phrases, wherein generating the second set of candidate words or phrases comprises eliminating from the set of candidate words or phrases, words or phrases in the current lexicon; generating an updated lexicon that includes the second set of candidate words or phrases; using the updated lexicon, auto answering the evaluation question for a set of test transactions . . . determining an accuracy of the updated lexicon based on evaluating . . . retuning the lexicon object to use the updated lexicon as the current lexicon to increase the accuracy of auto-answering for servicing subsequent evaluation requests”), etc.
However, the claimed additional elements fail to integrate the abstract idea into a practical application since the additional elements are utilized merely as a tool to facilitate the abstract idea (see above).
Thus, when each claim is considered as a whole, the additional elements fail to integrate the abstract idea into a practical application since they fail to impose meaningful limits on practicing the abstract idea. In particular, none of the claims, considered as a whole, provides an improvement over the relevant existing technology.
Although the claims recite the use of an AI engine to analyze documents, this is not sufficient to demonstrate an improvement over the relevant existing technology since the existing computer/network technology already implements one or more artificial intelligence algorithms to analyze collected information. Moreover, neither the current claims nor the original disclosure as a whole is directed to an AI engine that is considered to be an advance over the existing one. Particularly, similar to the court’s observation in Electric Power Group (Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016)), the current claims—despite being lengthy and numerous—do not implement an element, or a combination of elements, that is arguably an advance over the existing computer technology.
The observations above confirm that the claims are indeed directed to an abstract idea.
(Step 2B)
Accordingly, when the claim(s) is considered as a whole (i.e., considering all claim elements both individually and in combination), the claimed additional elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amounts to “significantly more” than the abstract idea itself (also see MPEP 2106). The claimed additional elements are directed to conventional computer elements, which serve merely to perform conventional computer functions. Accordingly, none of the current claims recites an element—or a combination of elements—directed to an “inventive concept”.
It is also worth noting that the implementation of a conventional system, which implements one or more conventional algorithms (e.g., artificial intelligence, machine learning, etc.), to facilitate the evaluation of collected information (e.g., transcripts at a call center, etc.) is already directed to a well-understood, routine or conventional activity in the art (e.g., see US 2010/0161315; US 2008/0189171; US 2002/0111811). Particularly, such conventional algorithms are utilized to generate, based on the analysis of collected information, one or more results, such as automatically populating one or more fields in an electronic form, etc. (see US 2008/0120257; US 2005/0257134; US 2002/0062342).
The observations above confirm that the current claims fail to amount to “significantly more” than an abstract idea. The above analysis already encompasses each of the dependent claims (i.e., claims 26-30, 32-36 and 38-42). Particularly, when each of the dependent claims is considered as a whole, none of the claims amounts to “significantly more” than an abstract idea since each claim is directed to a further abstract idea and/or conventional computer elements that facilitate the abstract idea.
► Applicant’s arguments directed to § 101 have been fully considered (the arguments filed on 08/05/2025).
Applicant asserts, “[t]he claimed invention includes specifically claimed objects and a specific way of using them to auto answer questions and provide a specific way to retune auto-answering in the system including mechanisms to automatically determine updated parameters and further include retuning specific objects with the updated parameters to increase the accuracy of the automated answer. The claimed invention provides a clear improvement to the computer system itself by providing an automated way for the system to automatically retune its parameters and objects to increase accuracy. The claims are not directed to managing the human evaluator nor can the claimed inventions be practicably carried out by a human. Applicant therefore submits that the claimed invention represents patentable subject matter” (emphasis added).
However, except for summarizing the objective of the claimed system/method, and/or the information that the claimed system/method is processing, Applicant does not identify an element (if any)—or a combination of elements (if any)—that provides a technological improvement over the relevant existing technology. Particularly, neither the current claims nor the original disclosure as a whole provides any technological improvement to any of the computers utilized to facilitate the claimed abstract idea. In this regard, Applicant appears to be mistaking the allegedly accurate answer, which the claimed/disclosed computer system is supposedly providing for a given question (e.g., a potentially correct answer for a question on an evaluation form), for a technological improvement. However, neither the claimed nor the disclosed process of providing a potentially correct answer for a given question, and/or updating the potentially correct answer for a given question based on feedback/modification received from a human evaluator, etc., constitutes a technological improvement. In fact, it is part of the existing computer/network technology to utilize one or more machine-learning algorithms, including an artificial intelligence algorithm, to more accurately predict the correct answer for a given question, including updating the correct answer based on newly collected information/feedback, etc.
Note also that the analysis presented under Prong One of Step 2A above already demonstrates why the current claims recite an abstract idea, namely a mental process and/or certain methods of organizing human activity. Aside from the conclusory assertion above, Applicant does not challenge any of these findings.
Thus, Applicant’s assertion regarding the alleged technological improvement is not persuasive. Moreover, given the generic and conventional arrangement of the claimed (and the disclosed) additional elements, neither the claimed nor the disclosed system/method implements an inventive concept that amounts to “significantly more” than an abstract idea.
Accordingly, at least for the reasons above, the Office concludes that none of the current claims—when considered as a whole—is patent-eligible under § 101.
Prior art
5. Considering each of claims 25, 31 and 37 as a whole (including the respective dependent claims), the prior art does not teach or suggest the current claims (regarding the state of the prior art, see the Office action dated 06/09/2022).
Conclusion
6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRUK A GEBREMICHAEL whose telephone number is (571) 270-3079. The examiner can normally be reached 7:00 AM-3:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DAVID LEWIS can be reached on (571) 272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRUK A GEBREMICHAEL/Primary Examiner, Art Unit 3715