Prosecution Insights
Last updated: April 19, 2026
Application No. 17/612,371

SYSTEMS, METHODS, AND COMPUTER READABLE MEDIUMS FOR CONTROLLING A FEDERATION OF AUTOMATED AGENTS

Status: Non-Final OA (§101, §102, §103)
Filed: Nov 18, 2021
Examiner: MORONEY, MICHAEL CORBETT
Art Unit: 3628
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Nokia Solutions and Networks Oy
OA Round: 3 (Non-Final)

Forecast: 26% grant probability (At Risk) • 3-4 OA rounds expected • 2y 9m to grant • 51% grant probability with examiner interview

Examiner Intelligence

Career Allow Rate: 26% (32 granted / 123 resolved; -26.0% vs Tech Center average)
Interview Lift: +25.1% on resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 23 applications currently pending
Career History: 146 total applications across all art units

Statute-Specific Performance

§101: 37.8% (-2.2% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§103: 36.1% (-3.9% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 123 resolved cases.

Office Action

Grounds of rejection: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the amendment filed on 12/12/2025 and the Request for Continued Examination filed on 02/11/2026. Claims 21, 27, 31-32, and 38 have been amended and are hereby entered. Claims 21-40 are currently pending and have been examined.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/12/2025 has been entered.

Response to Arguments

Applicant's arguments, see page 8, filed 12/12/2025, with respect to the 35 U.S.C. 112(b) rejection of claim 38 have been fully considered and are persuasive. The relative term "large number" has been removed by Applicant's amendments. The 35 U.S.C. 112(b) rejection of claim 38 has been withdrawn.

Applicant's arguments, see pages 8-13, filed 12/12/2025, with respect to the 35 U.S.C. 101 rejections of claims 21-40 have been fully considered but are not persuasive. The 35 U.S.C. 101 rejections of claims 21-40 have been maintained.

Applicant first argues on pages 8-10 that the claims allegedly cannot be performed in the human mind because the human mind is not equipped to perform each and every limitation of the claim as discussed in the August 3rd Memo. Applicant then lists the limitations of claim 21 and argues that the human mind allegedly "cannot practically keep track of" the allegedly computationally complex task of measuring a degree of similarity between multiple suggestions nor practically subset the recommendations based on a rating of the number of in-service automated agents because there "are too many such ratings" for the human mind to perform. Examiner respectfully disagrees.

Specifically, regarding the number of ratings being "too many" to practically perform comparisons, Examiner notes that claim 21 recites "a plurality" of in-service automated agents of which "a number" of the in-service automated agents receive the ticket. Accordingly, there are "a number" of ratings of in-service automated agents that are to be considered. Examiner also notes that the embodiment presented in Applicant's specification has three such agents: 101, 102, and 103. Regardless of the exact number in a given embodiment, the broadest reasonable interpretation of "a number" of in-service automated agents encompasses three agents, and Examiner argues that a human mind would be capable of comparing three rating values.

Regarding the "computationally complex" measurement of a degree of similarity among the suggestions provided by the in-service automated agents, Examiner notes that claim 21 merely recites "measuring a similarity of suggestions generated by the number of in-service automated agents among the plurality of in-service automated agents". The broadest reasonable interpretation of "measuring a similarity of suggestions" covers similarity measurements/criteria that can be performed in the human mind, like basic word matching (i.e., two suggestions are similar because they suggest rebooting a device), and is not limited to "complex calculations" as Applicant alleges.

Finally, MPEP 2106.04(a)(2) III.B. states "If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea" and "The use of a physical aid (e.g., pencil and paper or a slide rule) to help perform a mental step (e.g., a mathematical calculation) does not negate the mental nature of the limitation..." Therefore, even if a person reviewing the suggestions and determining a subset of suggestions needed to write down agent ratings and notes about the similarity/dissimilarity of suggestions, the claim limitations at issue still amount to a mental process. Examiner also notes that while other claim limitations are not specifically argued, the receipt of a ticket, dispatching of the ticket, and providing a subset of suggestions also amount to a mental process. Accordingly, Applicant's arguments that the claims do not fall within the Mental Process grouping of abstract ideas are not persuasive.

Applicant next argues on pages 10-11 that the claims allegedly do not fall into the Certain Methods of Organizing Human Activity grouping of abstract ideas. Applicant argues that the control of the federation of automated agents does not fall into the sub-grouping of commercial interactions as stated by the Office. Examiner respectfully disagrees. Paragraph [0002] of Applicant's specification recites "When a user desires help for a technical problem, such as requesting technical support in an installation and/or maintenance of components of an infrastructure for cellular phones, the user may desire help from a group of human experts. However, the user may be requesting help at time when human expertise is not available immediately". Applicant's invention accordingly covers customer support/help desk services provided to users. The provision of customer support/help desk services falls at least under the enumerated example of "business relations", as customer support/help desk services are services provided by one party (i.e., device manufacturer, merchant, repair shop, etc.) to the user/customer of the device the user needs help with. While an exchange of currency may not take place at the time of the provided customer support, customer support is a form of a business relationship between a business and its customers. The automated agents argued by Applicant are additional elements in the claimed invention, but they do not preclude the claims from reciting an abstract idea and are evaluated at Step 2A Prong Two. Accordingly, Applicant's argument that the claimed invention does not recite a Certain Method of Organizing Human Activity is not persuasive.
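For concreteness, the kind of basic word matching that the response above reads the similarity limitation as covering can be sketched in a few lines of Python. This is an illustrative simplification only; the function name and the token-overlap measure are this note's own, not language from the claims or the cited references:

    # Illustrative only: token-overlap (Jaccard) similarity between two
    # suggestions -- the sort of basic word matching a person could do by hand.
    def suggestion_similarity(a: str, b: str) -> float:
        tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
        if not tokens_a or not tokens_b:
            return 0.0
        return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

    # Two suggestions are "similar" because both suggest rebooting a device:
    print(suggestion_similarity("reboot the device", "please reboot the device"))  # 0.75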
Applicant has argued on pages 11-12 that the claims allegedly recite the improvements of "(1) that more complicated tickets having more useful suggestions may be provided, (2) new, automated software agents may be added without extensive training, (3) the federation of agents may be simplified, and (4) request for help from a human expert, can be streamlined for especially complicated cases" and points to paragraph [0076] of the specification as allegedly integrating the abstract idea into a practical application. Examiner respectfully disagrees.

The cited portions of paragraph [0076] recite adding and removing agents (what Applicant appears to be referring to as "simplifying the federation") from a federation based on a similarity/dissimilarity of the suggestions from the agents, and forwarding a ticket to a human agent based upon the suggestion similarity from the agents. MPEP 2106.05(a) recites: "If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement". The decision to add agents that form a consensus with the federation and remove agents that no longer conform with the consensus falls into the abstract idea. As discussed above, the similarity measurement between suggestions falls under the abstract ideas of mental processes and certain methods of organizing human activity. Accordingly, the decision to add/remove an agent based on such a similarity measurement would also fall into the abstract idea itself. While [0076] recites that the exclusion/inclusion of agents based on similarity of outputs avoids extensive agent training, one of ordinary skill in the art would not recognize the use of an abstract evaluation of agent outputs in place of agent training as a technological improvement to the agents themselves. At most, the decision to include/exclude agents based on a similarity of answers would be an improvement to the abstract idea by soliciting suggestions from sources that are more likely to provide helpful information. However, MPEP 2106.05(a) II. states "it is important to keep in mind that an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology".

Regarding more complicated tickets getting more useful suggestions, Examiner notes that [0054]-[0055] and [0082]-[0085] in view of [0076] of the specification recite that these "more useful" suggestions to complex tickets are obtained by soliciting suggestions from more agents when a ticket is complex. Complexity can be determined by a count of the number of words in the ticket per specification [0083]. Counting the number of words in a ticket and soliciting feedback from more sources when a word count exceeds a certain amount falls under at least the mental process abstract idea. While soliciting suggestions from more sources for complex tickets may amount to an improvement to the abstract idea, as discussed above the MPEP states that an improvement to the abstract idea does not amount to a technical improvement.

Similarly, forwarding a ticket to a human agent for an answer also falls into the abstract idea. Specification [0076] recites that the decision to forward the ticket to a human expert can be based on the similarity measurement, which as discussed above falls under at least a mental process. Accordingly, a decision to send a ticket to a human expert based on this measurement also falls under the abstract idea and does not amount to a technical improvement.

Furthermore, even assuming arguendo that the argued features provided an improvement, Examiner notes that these features are not recited in the independent claims. Therefore, even if the limitations Applicant argues from [0076] provided an improvement, the independent claims do not reflect these features. MPEP 2106.05(a) further recites: "the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology". Applicant's argument that the "dispatching" of the ticket somehow reflects the argued improvements is also not persuasive. The broadest reasonable interpretation of "dispatch" covers sending the tickets to the agents. It would be improper to read the various features argued by Applicant in the specification into the term "dispatch", as Applicant's specification does not define "dispatch" as necessarily requiring the argued features. Therefore, even if the features argued by Applicant did amount to an improvement, the features are not reflected in the independent claims. The claims do not integrate their judicial exceptions into a practical application.

Finally, Applicant argues on pages 12-13 that the claims do not include "mere instructions to apply" a judicial exception. Specifically, Applicant argues that the "dispatching" of the ticket does not fall into the "apply it" consideration because the dispatching is to "other automated agents", which allegedly results in an improvement in the computing capabilities of the federation of automated agents. Applicant argues that this alleged improvement is a practical application provided by the claims. Examiner respectfully disagrees. Particularly, the broadest reasonable interpretation of "dispatching" the ticket to other automated agents covers the sending of the ticket/ticket data from one agent to another. Per MPEP 2106.05(f)(2), "Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more" (emphasis added). Therefore, dispatching of tickets among automated agents does not amount to an improvement to the computing capabilities of the federation of agents. The individual agent computing capabilities are not improved by the dispatching, and the computing capabilities of the federation as a whole are not improved by the transmission of tickets to agents because the abstract idea is what determines which agents to use/not use. The claims as a whole apply the judicial exception of soliciting suggestions to provide customer service to a user to a federation of automated agents in place of a federation of human agents. Applicant's arguments are not persuasive, and claims 21-40 still stand rejected under 35 U.S.C. 101.

Applicant's arguments, see pages 13-15, filed 12/12/2025, with respect to the 35 U.S.C. 103 rejections of claims 21-22, 25, 28, 30, 32-35, and 39 have been fully considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Specifically, Thomson (U.S. Pre-Grant Publication No. 2014/0164476, hereafter known as Thomson) is used below to teach the entirety of independent claims 21 and 32, including the portions particularly argued by Applicant. Claims 21-22, 25, 28, 30, 32-35, and 39 now stand rejected under 35 U.S.C. 102(a)(1) as being anticipated by Thomson.
Examiner notes that arguments against dependent claims 23-24, 26-27, 29, 36-38, and 40 across pages 16-19 are moot because the Thomson reference is used to teach the limitations from independent claims 21 and 32 which Applicant argues are not taught by the references used to teach these dependent claims.

Applicant's arguments, see page 17, filed 12/12/2025, with respect to the 35 U.S.C. 103 rejection of claim 31 have been fully considered but are not persuasive. The 35 U.S.C. 103 rejection of claim 31 has been maintained.

Applicant argues on page 17 that the combination of Chu and Zhu fails to teach independent claim 31. Specifically, Applicant argues that Chu teaches "replacing" or "swapping" an in-service model with another model, and thus Chu allegedly does not explicitly teach "adding" a provisional agent to in-service agents. Applicant argues that the Office errs in its interpretation of replacing or swapping as necessarily reading on adding the provisional automated agent to the plurality of in-service agents. Examiner respectfully disagrees.

Proceeding with the "swap" terminology and Applicant's example of swapping objects between hands, Examiner notes that by swapping objects between hands, one is removing the object originally in one hand and adding the object originally in the other hand. Similarly, Chu is removing the original champion model from being in-service and adding the provisional model that now outperforms the original champion. As a more specific example from Chu, Chu [0026] recites that the evaluation criteria used to evaluate models have been standardized across multiple different model types, which include "decision tree models, neural network models, logistic models, and any other supervised model" per Chu [0041]. Chu therefore teaches that a champion model and its replacement may be different model types. If a champion model is a decision tree model and a replacement model is a neural network model, upon determining that the replacement model should become the new in-service champion model, the number of in-service neural network models would go up by one (an addition of one neural network model to the in-service models).

As argued previously, claim 31 does not preclude other models from being removed from the in-service models. Specification paragraphs [0017], [0024], [0076], and [0114] recite the addition of a new agent to the federation of agents but do not preclude the removal of agents. Indeed, Applicant's specification paragraphs [0010], [0110], and [0133] and claims 27 and 37 explicitly recite the removal/deregistering of agents from being in-service, with the specification paragraphs pointing to benefits of doing so. Therefore, the broadest reasonable interpretation of "adding the provisional automated agent to the plurality of in-service automated agents in response to the evaluating", in light of Applicant's specification, covers the addition of a replacement model and removal of an older model as taught in Chu. Applicant's arguments are therefore not persuasive, and claim 31 still stands rejected under 35 U.S.C. 103.
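To illustrate the "swap = remove + add" reading of Chu discussed above, a champion/challenger promotion can be sketched as follows. The names, scores, and structure here are hypothetical; Chu's actual evaluation criteria appear at [0026] and [0029]. The point is that a single promotion both removes the old champion from the in-service set and adds the challenger:

    # Hypothetical sketch: promoting a challenger model both ADDS it to the
    # in-service set and REMOVES the prior champion.
    def promote_if_better(in_service, champion, challenger, score):
        if score[challenger] > score[champion]:
            in_service.discard(champion)   # removal of the old champion ...
            in_service.add(challenger)     # ... and addition of the challenger
            return in_service, challenger
        return in_service, champion

    scores = {"decision_tree_v1": 0.71, "neural_net_v1": 0.78}
    agents, champ = promote_if_better({"decision_tree_v1"}, "decision_tree_v1",
                                      "neural_net_v1", scores)
    print(agents, champ)   # {'neural_net_v1'} neural_net_v1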
Claim Objections

Claim 24 is objected to because of the following informalities: Claim 24 states "The method claim 23" when it appears it should recite "The method of claim 23" to correct an apparent typographical error. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21-40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite providing suggestions to a user to resolve an issue, and determining whether a provisional agent should be added to a federation of agents.

As an initial matter, claims 21-30 and 37-40 fall into at least the process category of statutory subject matter. Claim 31 falls into at least the process category of statutory subject matter. Finally, claims 32-36 fall into at least the machine category of statutory subject matter. Therefore, all claims fall into at least one of the statutory categories. Eligibility analysis proceeds to Step 2A.

In claim 21, the limitation of "receiving a ticket from a terminal", under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting "a terminal," nothing in the claim element precludes the step from practically being performed in the mind. Similarly, the limitation of "dispatching the ticket to a number of in-service automated agents among the plurality of in-service automated agents; measuring a similarity of suggestions generated by the number of in-service automated agents among the plurality of in-service automated agents, the suggestions being suggestions for responding to the ticket; determining a subset of the suggestions based on a rating of the number of in-service automated agents and the similarity of suggestions generated by the number of in-service automated agents, the rating being based on a previous score indicative of a quality of a previous subset of suggestions; and providing the subset of suggestions to the terminal", as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Additionally, claim 21 recites the concept of providing suggestions to user tickets regarding a user issue, which is a certain method of organizing human activity including commercial interactions.
The claim recitations of "a method of controlling a federation of agents, the federation of agents including a plurality of in-service agents, at least some of the plurality of in-service agents including instructions that, when executed, generate suggestions for responding to tickets received, the method comprising: receiving a ticket; dispatching the ticket to a number of in-service agents among the plurality of in-service agents; measuring a similarity of suggestions generated by the number of in-service agents among the plurality of in-service agents, the suggestions being suggestions for responding to the ticket; determining a subset of the suggestions based on a rating of the number of in-service agents and the similarity of suggestions generated by the number of in-service agents, the rating being based on a previous score indicative of a quality of a previous subset of suggestions; and providing the subset of suggestions" all, as a whole, fall under the category of commercial interactions. The claim falls into the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Mere recitation of generic computer components does not remove the claim from this grouping. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of automated agents including computer-readable instructions that, when executed by a computer, cause the computer to generate suggestions; a computer; one or more terminals; and a terminal. The recited additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of automated agents including computer-readable instructions that, when executed by a computer, cause the computer to generate suggestions; a computer; one or more terminals; and a terminal amount to no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.

Claims 22-30 further limit the abstract idea of claim 21 without adding any new additional elements. Therefore, by the analysis of claim 21 above, these claims, individually and as an ordered combination, do not integrate the abstract idea into a practical application nor amount to significantly more than the abstract idea. The claims are not patent eligible.
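Read as a data flow, the claim 21 steps analyzed above (receive, dispatch, measure similarity, subset by rating, provide) could look roughly like the sketch below. All names are hypothetical and the ranking rule is a placeholder, not the claimed algorithm; the similarity function is assumed to be supplied, for example the word-matching sketch earlier:

    # Hypothetical sketch of the claim 21 data flow; assumes at least two agents.
    from statistics import mean

    def handle_ticket(ticket, agents, ratings, similarity, top_n=2):
        # dispatch the ticket to a number of in-service automated agents
        suggestions = {name: agent(ticket) for name, agent in agents.items()}
        # measure a similarity of the suggestions (here: mean pairwise score)
        consensus = {name: mean(similarity(text, other)
                                for o, other in suggestions.items() if o != name)
                     for name, text in suggestions.items()}
        # determine a subset based on agent rating and suggestion similarity
        ranked = sorted(suggestions,
                        key=lambda n: ratings[n] + consensus[n], reverse=True)
        # provide the subset of suggestions to the terminal
        return [suggestions[n] for n in ranked[:top_n]]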
In claim 31, the limitation of "A method of adding a provisional automated agent to a federation of automated agents, the federation of automated agents including a plurality of in-service automated agents, at least some of the plurality of in-service automated agents including a list of computer-readable instructions that, when executed by a computer, cause the computer to generate suggestions for responding to tickets received from one or more terminals, the method comprising: receiving a ticket from a terminal", under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting "automated agents, at least some of the plurality of in-service automated agents including a list of computer-readable instructions that, when executed by a computer, cause the computer to generate suggestions" and "a terminal," nothing in the claim element precludes the step from practically being performed in the mind. Similarly, the limitation of "dispatching the ticket to a training manager; receiving, from the training manager, a provisional suggestion generated by the provisional automated agent; evaluating the provisional automated agent by comparing the provisional suggestion to suggestions generated by a number of in-service automated agents in the federation of automated agents; and adding the provisional automated agent to the plurality of in-service automated agents in response to the evaluating", as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Additionally, claim 31 recites the concept of determining whether an agent is qualified to provide suggestions to user tickets and allowing a qualified agent to begin making suggestions to users, which is a certain method of organizing human activity including commercial interactions. The claim recitations of "a method of adding a provisional agent to a federation of agents, the federation of agents including a plurality of in-service agents, at least some of the plurality of in-service agents including a list of instructions that, when executed, generate suggestions for responding to tickets received, the method comprising: receiving a ticket; dispatching the ticket; receiving a provisional suggestion generated by the provisional agent; evaluating the provisional agent by comparing the provisional suggestion to suggestions generated by a number of in-service agents in the federation of agents; and adding the provisional agent to the plurality of in-service agents in response to the evaluating" all, as a whole, fall under the category of commercial interactions. The claim falls into the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Mere recitation of generic computer components does not remove the claim from this grouping. Accordingly, the claim recites an abstract idea.
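The evaluating/adding steps of claim 31 can likewise be sketched. The agreement threshold below is an assumed value for illustration only (neither the claim nor the cited references supply one), and suggestion_similarity refers to the earlier word-matching sketch:

    # Hypothetical sketch of claim 31's evaluating/adding step: admit the
    # provisional agent when its suggestion agrees closely enough with the
    # in-service agents' suggestions. The 0.5 threshold is assumed, not claimed.
    def evaluate_provisional(provisional_suggestion, in_service_suggestions,
                             similarity, threshold=0.5):
        scores = [similarity(provisional_suggestion, s)
                  for s in in_service_suggestions]
        return bool(scores) and sum(scores) / len(scores) >= threshold

    in_service = ["reboot the device", "please reboot the device"]
    if evaluate_provisional("reboot the device again", in_service,
                            suggestion_similarity):
        print("add the provisional automated agent to the in-service plurality")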
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a provisional automated agent; automated agents including a list of computer-readable instructions that, when executed by a computer, cause the computer to generate suggestions; a computer; one or more terminals; a terminal; and a training manager. The recited additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a provisional automated agent; automated agents including a list of computer-readable instructions that, when executed by a computer, cause the computer to generate suggestions; a computer; one or more terminals; a terminal; and a training manager amount to no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.

In claim 32, the limitation of "receive a ticket from a terminal", under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting "An apparatus to control a federation of automated agents, the federation of automated agents including a plurality of in-service automated agents, the apparatus comprising: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to" and "a terminal," nothing in the claim element precludes the step from practically being performed in the mind.
Similarly, the limitation of "dispatch the ticket to a number of in-service automated agents among the plurality of in-service automated agents, measure a similarity of suggestions generated by the number of in-service automated agents among the plurality of in-service automated agents, the suggestions being suggestions for responding to the ticket, determine a subset of the suggestions based on a rating of the number of in-service automated agents and the similarity of suggestions generated by the number of in-service automated agents, the rating being based on a previous score indicative of a quality of a previous subset of suggestions, and provide the subset of suggestions to the terminal", as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Additionally, claim 32 recites the concept of providing suggestions to user tickets regarding a user issue, which is a certain method of organizing human activity including commercial interactions. The claim recitations of "control a federation of agents, the federation of agents including a plurality of in-service agents comprising: receive a ticket, dispatch the ticket to a number of in-service agents among the plurality of in-service agents, measure a similarity of suggestions generated by the number of in-service agents among the plurality of in-service agents, the suggestions being suggestions for responding to the ticket, determine a subset of the suggestions based on a rating of the number of in-service agents and the similarity of suggestions generated by the number of in-service agents, the rating being based on a previous score indicative of a quality of a previous subset of suggestions, and provide the subset of suggestions" all, as a whole, fall under the category of commercial interactions. The claim falls into the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Mere recitation of generic computer components does not remove the claim from this grouping. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of an apparatus, automated agents, at least one processor, at least one memory including computer program code, and a terminal. The recited additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of an apparatus, automated agents, at least one processor, at least one memory including computer program code, and a terminal amount to no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.

Claims 33-36 further limit the abstract idea of claim 32 without adding any new additional elements. Therefore, by the analysis of claim 32 above, these claims, individually and as an ordered combination, do not integrate the abstract idea into a practical application nor amount to significantly more than the abstract idea. The claims are not patent eligible.

Claims 37-40 further limit the abstract idea of claim 21 without adding any new additional elements. Therefore, by the analysis of claim 21 above, these claims, individually and as an ordered combination, do not integrate the abstract idea into a practical application nor amount to significantly more than the abstract idea. The claims are not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 21-22, 25, 28, 30, 32-35, and 39 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Thomson (U.S. Pre-Grant Publication No. 2014/0164476, hereafter known as Thomson).

Regarding claim 21, Thomson teaches:

A method of controlling a federation of automated agents, the federation of automated agents including a plurality of in-service automated agents, at least some of the plurality of in-service automated agents including computer-readable instructions that, when executed by a computer, cause the computer to generate suggestions for responding to tickets received from one or more terminals (see Figs. 3A-3B and [0027]-[0029] for the overall method. See [0011] "The subject disclosure describes, among other things, illustrative embodiments of a system that provides virtual assistant services based on directing user requests to a farm of software experts (e.g., content service modules), which can be written by various developers that are related or unrelated. In one or more embodiments, multiple experts embodied in software and/or hardware may respond to each user request, and the system can include a process for selecting the best or desired response, such as based in whole or in part on user feedback" and [0017] and [0019] for controlling a federation of software experts 175 that are in service to the federator 125.
See [0017] for the federator and an expert being combined in one device and "the experts 175 can be content service modules (e.g., software and/or hardware that can respond to requests by obtaining responses which can include media content) that provide responses to various information requests". See [0132] and [0135] for the experts including computer-readable instructions to perform the methods);

the method comprising: receiving a ticket from a terminal (see [0027] "In a first turn, a first user makes a first request at 301 such as via client 110" and [0028] "the client 110 can record the user's voice sample (or receive other user input indicative of the second request) and can send it to the dialog federator 125" and [0018] "A user can issue a request, such as by voice, text and/or physical gesture (e.g., captured via image), and the client 110 can send the request to the dialog federator 125, such as via wireless and/or wired communications". Examiner is interpreting this request as falling under the broadest reasonable interpretation of "ticket" because it is a request for help/an answer to a question);

dispatching the ticket to a number of in-service automated agents among the plurality of in-service automated agents (see [0027] "At 302, a screener 235 can send invites to one or more experts 175" and [0019] "the dialog federator 125 can send the user request (e.g., directly in text format and/or in a format converted by the ASR 130 or by other means) via the API 150 to a number of experts 175. For instance, invites can be sent from the dialog federator 125 to the experts 175 (or some of the experts) via the API 150" and [0023] "a screener 235 can select one or more experts 175 using session metadata, including the speech recognizer output, metadata history, and/or ratings of experts. The selected experts 175 can each receive an invite or an indication that the expert may respond to the user's request");

measuring a similarity of suggestions generated by the number of in-service automated agents among the plurality of in-service automated agents, the suggestions being suggestions for responding to the ticket (see [0062] "the bid selector 240 may compare outputs from multiple experts 175 to select a bid. The bid selector 240 may: select experts that give similar answers where agreement across experts may be used to indicate a higher confidence that the answer is correct" and [0064] "the bid selector 240 may take a weighted sum of multiple indicators to determine a discriminator for each bidding expert and may compare the discriminator to a threshold. The expert or experts with the highest value of the discriminator may be selected. Indicators may comprise one or more of...a measure of how similar the expert's answer is to answers from other experts (may have positive or negative weight)");

determining a subset of the suggestions based on a rating of the number of in-service automated agents and the similarity of suggestions generated by the number of in-service automated agents (see [0029] "At 312, the bid selector can select the best answer or answers based on prior feedback, the current answer, and/or other factors. A determination can be made at 313 regarding a more appropriate or better suited bid(s). If the bid selector 240 determines that the first expert's bid is best or more desired, then at 314 it can send the first expert's bid to the user via the client 110. Otherwise, it can provide the second expert's bid to the client 110 at 315. Any number of bids can be evaluated and selected" and [0053] "Rating information can be used to guide expert selection by the screener 235 and the bid selector 240. Ratings provide motivation for vendors to build experts 175 that deliver quality results", [0057] "The bid selector 240 may use ratings as a factor in selecting which bids to present to the client 110" and [0064] "the bid selector 240 may take a weighted sum of multiple indicators to determine a discriminator for each bidding expert and may compare the discriminator to a threshold. The expert or experts with the highest value of the discriminator may be selected. Indicators may comprise one or more of: the expert's rating (has a positive weight)...and a measure of how similar the expert's answer is to answers from other experts (may have positive or negative weight)");

the rating being based on a previous score indicative of a quality of a previous subset of suggestions (see [0057] "The bid selector 240 may use ratings as a factor in selecting which bids to present to the client 110. For example, if an expert 175 consistently gets poor ratings or if users rarely choose to interact with the expert's bids, then the expert's answers may be less likely to be selected in the future" and [0045]-[0047] for ratings being determined based on user feedback on expert answers from previous sessions);

and providing the subset of suggestions to the terminal (see [0029] "Any number of bids can be evaluated and selected, while any number of bids can be requested and eventually presented at the client 110. The application of the end user device (e.g., a mobile device that is also executing the client 110) can display a message based on the winning bid or bids" and [0025] "experts 175 selected by the bid selector 240 can provide one or more answers that are consolidated into a prompt and presented to the user via the client 110").

Regarding claim 22, Thomson teaches all of the limitations of claim 21 above. Thomson further teaches:

receiving, from the terminal, a score indicative of a quality of the subset of suggestions (see [0027] "At 305, the user can provide feedback (e.g., via client 110) on the quality of the answer derived from the bid. For example, the user may click a "thumbs up" icon or click on the answer for more information" and [0030] and [0042] for various methods of a user providing feedback including thumbs up/down and a rating on a five-star scale);

and updating the rating of the number of in-service automated agents based on the score (see [0045] "user feedback can be recorded and analyzed by a scorekeeper 530, which compiles feedback records and generates feedback statistics or ratings. For example, the scorekeeper 530 may track the number of times an expert's bid is presented and the number of times the thumbs up or "like" button is pressed, and then calculate overall percentages. These percentages can be examples of a rating that indicates how valuable the users found the expert's bid. The scorekeeper 530 may incorporate a forgetting factor that weights more recent feedback more heavily than old feedback" for updating ratings of the experts based on new user feedback scores).
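The "forgetting factor" in Thomson [0045], which weights more recent feedback more heavily than old feedback, behaves like an exponential moving average. A minimal sketch, with an assumed factor of 0.9 (Thomson gives no value):

    # Illustrative rating update with a forgetting factor in the sense of
    # Thomson [0045]; the 0.9 value is assumed for illustration only.
    def update_rating(old_rating, feedback, forgetting=0.9):
        # feedback: 1.0 for a "thumbs up", 0.0 for a "thumbs down"
        return forgetting * old_rating + (1.0 - forgetting) * feedback

    rating = 0.60
    for thumbs in (1.0, 1.0, 0.0):   # two likes, then a dislike
        rating = update_rating(rating, thumbs)
    print(round(rating, 3))          # 0.608: recent feedback moves the rating most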
Regarding claim 25, Thomson teaches all of the limitations of claim 22 above. Thomson further teaches:

further comprising: soliciting the score indicative of the quality of the subset of suggestions provided to the terminal (see [0042] "a client 110 may provide a means for the user to rate the quality of answers or bids from experts 175...A delete symbol such as a red "X" button may appear with an answer, so that if the user clicks the symbol, the expert receives negative feedback. Feedback may be collected from a survey taken, for example, after a turn or series of turns or from a follow-up call" for the solicitation of a user feedback score regarding the quality of the answer provided to the terminal).

Regarding claim 28, Thomson teaches all of the limitations of claim 21 above. Thomson further teaches:

further comprising: generating, by the number of in-service automated agents in the federation of automated agents, at least one suggestion for responding to the ticket (see [0025] "Experts 175 that have received an invite may respond with a bid. In one embodiment, a bid can be an offer to provide information to the user and may comprise text, audio, graphics, video and/or other media or content. A bid may comprise an answer to be presented to the user... more experts 175 may bid than is needed or than is practical to present to the client 110. In such cases, a bid selector 240 may choose one or more bids from among the pool of bidding experts. In one or more embodiments, experts 175 selected by the bid selector 240 can provide one or more answers that are consolidated into a prompt and presented to the user via the client 110" for the number of in-service automated agents in the federation generating at least one suggestion that is then selected to respond to the ticket).

Regarding claim 30, Thomson teaches all of the limitations of claim 21 above. Thomson further teaches:

wherein the determining a subset of suggestions based on a rating of the number of in-service automated agents and the similarity of suggestions generated by the number of in-service automated agents includes, consolidating the suggestions from the number of in-service automated agents (see [0062] "the bid selector 240 may compare outputs from multiple experts 175 to select a bid. The bid selector 240 may: select experts that give similar answers where agreement across experts may be used to indicate a higher confidence that the answer is correct" for consolidating and comparing the suggestions from the number of experts invited to bid on an answer);

reviewing a rating associated with the number of in-service automated agents (see [0058] "Various strategies may be employed in the screening and bid selection processes such as the bid selector may select the responding expert with the highest rating. Another strategy can include bids being selected in proportion to the expert's voting record. For example, if a user "likes" or gives a "thumbs up" to expert-1 10% of the time and expert-2 20% of the time and only the two experts respond, then the bid selector may choose expert-1 1/3 of the time and expert-2 2/3 of the time, corresponding to the proportion of their relative ratings" for reviewing a rating associated with each of the consolidated expert answers);

and adding the suggestion to the subset of suggestions based on the review of the rating (see [0064] "the bid selector 240 may take a weighted sum of multiple indicators to determine a discriminator for each bidding expert and may compare the discriminator to a threshold. The expert or experts with the highest value of the discriminator may be selected. Indicators may comprise one or more of: the expert's rating (has a positive weight)...and a measure of how similar the expert's answer is to answers from other experts (may have positive or negative weight)" for adding the suggestion to the subset based on the rating and similarity of suggestions to other suggestions).
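Thomson [0064]'s bid selection, quoted above for claims 21 and 30, amounts to a weighted sum of indicators compared against a threshold. A sketch with assumed weights and threshold follows; Thomson states only that the rating weight is positive and the similarity weight may be positive or negative:

    # Sketch of the weighted-sum "discriminator" of Thomson [0064].
    # Weights and threshold are assumptions for illustration only.
    def discriminator(indicators, weights):
        return sum(weights[k] * v for k, v in indicators.items())

    weights = {"rating": 0.7, "similarity_to_other_answers": 0.3}
    bids = {"expert_1": {"rating": 0.9, "similarity_to_other_answers": 0.4},
            "expert_2": {"rating": 0.5, "similarity_to_other_answers": 0.9}}
    threshold = 0.7
    selected = [e for e, ind in bids.items()
                if discriminator(ind, weights) >= threshold]
    print(selected)   # ['expert_1'] -- only this bid clears the threshold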
Regarding claim 32, Thomson teaches:

An apparatus to control a federation of automated agents, the federation of automated agents including a plurality of in-service automated agents, the apparatus comprising: at least one processor (see [0132] "FIG. 10 depicts an exemplary diagrammatic representation of a machine in the form of a computer system 1000 within which a set of instructions, when executed, may cause the machine to perform any one or more of the methods described above, including generating requests, generating expert invites, selecting experts from among a group of experts based on various information including user feedback, generating bids or answers based on the requests, performing a bid selection from multiple expert devices based on feedback including user feedback and/or presenting one or more bids responsive to the request at an end user device" for a machine that controls a federation of in-service automated experts by sending invites and receiving bids to answer invites to respond to a customer request. See [0134] for the machine comprising a processor);

and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to (see [0135] "The disk drive unit 1016 may include a tangible computer-readable storage medium 1022 on which is stored one or more sets of instructions (e.g., software 1024) embodying any one or more of the methods or functions described herein, including those methods illustrated above. The instructions 1024 may also reside, completely or at least partially, within the main memory 1004, the static memory 1006, and/or within the processor 1002 during execution thereof by the computer system 1000" for the machine comprising a memory with instructions to perform the methods of the disclosure that are executed and performed by the processor).

Regarding the remaining limitations of claim 32, see the rejection of claim 21 above.

Regarding claim 33, Thomson teaches all of the limitations of claim 32 above. Regarding the limitations introduced in claim 33, see the rejection of claim 22 above.

Regarding claim 34, Thomson teaches all of the limitations of claim 32 above. Regarding the limitations introduced in claim 34, see the rejection of claim 25 above.

Regarding claim 35, Thomson teaches all of the limitations of claim 32 above. Regarding the limitations introduced in claim 35, see the rejection of claim 28 above.

Regarding claim 39, Thomson teaches all of the limitations of claim 21 above. Thomson further teaches:

wherein the plurality of in-service automated agents are from among a pool of automated agents (see [0017] and [0019] "the dialog federator 125 can send the user request (e.g., directly in text format and/or in a format converted by the ASR 130 or by other means) via the API 150 to a number of experts 175. For instance, invites can be sent from the dialog federator 125 to the experts 175 (or some of the experts) via the API 150. Each expert 175 (or a subset of the experts) may respond to the request with an answer" for the plurality of in-service agents. [0063] "if the bid selector 240, other modules, search engines, human judges, and/or other entities determine that one or more answers may be in violation of established terms and conditions, nominally stated as part of a process of registering experts 175 to connect to the API 150, the dialog federator 125 may report a violation. The violation may be further reviewed to determine appropriate action. Depending on the offense, an expert or vendor may be barred from participation" for some of the pool of experts that violate terms being barred/removed from service of the federator. Accordingly, the in-service experts are from among a pool of automated experts).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 23 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Thomson in view of Zhu et al. (U.S. Pre-Grant Publication No. 2019/0236132, hereafter known as Zhu) and Barak et al. (U.S. Pre-Grant Publication No. 2016/0360466, hereafter known as Barak).

Regarding claim 23, Thomson teaches all of the limitations of claim 21 above. While Thomson teaches the bids/answers received from the experts comprising a number of fields in [0076] and the user request being text-based, Thomson does not explicitly teach the ticket including a number of fields. Thomson also does not explicitly teach measuring a complexity of the ticket based on the fields and determining the number of agents based on the complexity. Zhu teaches:

wherein the ticket includes a number of fields (see [0026] "The natural language inputs can include inputs that may be provided to a domain-specific application, such as a report generator or search interface. For example, the natural language inputs can include textual inputs to a variety of fields displayed in an interface of the domain-specific application. A user may enter words, numbers, sentences, or even whole documents as the natural language inputs").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include text fields in the request as taught by Zhu in the system of Thomson, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

The combination of Thomson and Zhu still does not explicitly teach measuring a complexity of the ticket based on the fields and determining the number of agents based on the complexity. Barak teaches:
Regarding claim 24, the combination of Thomson, Zhu, and Barak teaches all of the limitations of claim 23 above. As discussed above regarding claim 23, the combination of Thomson and Zhu does not explicitly teach measuring a complexity of the ticket based on the fields. Accordingly, the combination of Thomson and Zhu does not explicitly teach measuring a complexity of the ticket based on a number of words in a description field.

However, Barak further teaches: wherein the measuring a complexity of the ticket includes, measuring the complexity of the ticket based on a number of words in a description field of the ticket (see [0172] "At block 815, a complexity level associated with the message or account data associated with the network device is determined, such as by an interaction management engine (e.g., interaction management engine 625 of FIG. 6). For example, a complexity level may be determined based on...a length of the message" for the length of the message, i.e. the number of words, being used to determine the complexity of the user request/input)

One of ordinary skill in the art would have recognized that applying the known technique of determining the complexity of a request and routing the request to an agent of a sufficient skill level of Barak to the combination of Thomson and Zhu would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Barak to the teaching of the combination of Thomson and Zhu would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such determining the complexity of a request and routing the request to an agent of a sufficient skill level. Further, applying determining the complexity of a request and routing the request to an agent of a sufficient skill level to the combination of Thomson and Zhu would have been recognized by one of ordinary skill in the art as resulting in an improved system that would allow more efficient usage of automated agents. Specifically, by recognizing requests that are complex and sending them only to agents of a threshold level, the overall system makes more efficient use of its resources by preventing less skilled/capable experts in the group from being tasked with responding to overly complex inputs and likely outputting low quality results.

Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Thomson in view of Skiba et al. (U.S. Pre-Grant Publication No. 2015/0006460, hereafter known as Skiba).

Regarding claim 26, Thomson teaches all of the limitations of claim 21 above. Thomson further teaches in [0070] “A dialog memory 575 serves as a storage location for information used or created by elements of the dialog federator 125. It may contain information for the current session and historical information, such as from previous sessions… Part or all of the session metadata and information stored in the dialog memory 575 can be saved into a log for future reference and for off-line research and development. Part or all of this log data may be stored in log 585 of FIG. 5. Some of this log data may be made available to vendors for use in developing and improving experts and to experts for providing better answers”. As such, Thomson teaches that inputs and answers can be added to a data set that can be used to improve and generate automated experts. However, Thomson does not explicitly teach adding the ticket and a suggestion generated by at least one of the number of in-service automated agents to a test set based on the similarity of suggestions.

Skiba teaches: adding the ticket and a suggestion generated by at least one of the number of in-service automated agents to a test set based on the similarity of suggestions (see [0042] "Analysis engine 216 may also "crowd source" the question and ask a plurality of individuals to answer the question whereby commonality or ranked answer(s) are considered "the" answer to the question" and [0043] "analysis engine 216 has a question and an answer and updates 214 knowledge base 122" for the updating of a knowledge base to include a question and answer pair based on the similarity of crowdsourced answers to the question. See [0025] and [0028], where an automated agent uses the knowledge base to answer user questions. In combination with Thomson, the knowledge base of Skiba that contains previously received questions and answers can be used as the part of the dialog memory that is used to generate and improve automated experts in Thomson [0070]. The knowledge base used to train the models reads on “test set” because, per paragraph [0113] of Applicant’s disclosure, the test set is a set of data used to train the agents)

One of ordinary skill in the art would have recognized that applying the known technique of adding a question/answer pair to the historical knowledge base of a system based on the commonality of the answers of Skiba to Thomson would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Skiba to the teaching of Thomson would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such adding a question/answer pair to the historical knowledge base of a system based on the commonality of the answers. Further, applying adding a question/answer pair to the historical knowledge base of a system based on the commonality of the answers to Thomson would have been recognized by one of ordinary skill in the art as resulting in an improved system that would allow more robust training of new models. As stated in [0070], the models of Thomson are generated and improved based on stored historical inputs. By incorporating Skiba’s updating of the knowledge base with question/answers based on answers’ commonality, the amount of and confidence in the stored data used to develop and improve new experts of Thomson would be increased, resulting in the combination training and using more robust models for future queries. Thomson also explicitly considers similar answers to be more appropriate for display and more likely to be accurate as discussed above regarding claim 21, so one of ordinary skill in the art would have recognized that these similar answers would be the best to base experts off of.
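The claim 26 combination, adding a ticket and its suggestion to a test set when the agents' suggestions are sufficiently similar, might look like the following sketch. The quorum check and all names are illustrative assumptions, not taken from Thomson or Skiba.

```python
# Minimal sketch of the claim 26 mapping: when the in-service agents'
# suggestions agree, archive the ticket and the consensus suggestion into the
# test/training set. Function names and the agreement rule are hypothetical.
from collections import Counter

test_set: list[tuple[str, str]] = []  # stand-in for Thomson's dialog memory / log

def maybe_archive(ticket: str, suggestions: list[str], quorum: float = 0.5) -> None:
    """Add (ticket, suggestion) when one answer reaches a quorum of agents."""
    answer, count = Counter(suggestions).most_common(1)[0]
    if count / len(suggestions) >= quorum:   # commonality of answers, cf. Skiba [0042]
        test_set.append((ticket, answer))    # update the knowledge base, cf. Skiba [0043]

maybe_archive("Reset eNodeB?", ["power-cycle", "power-cycle", "reinstall"])
print(test_set)  # [('Reset eNodeB?', 'power-cycle')]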
Claims 27 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Thomson in view of Chu (U.S. Pre-Grant Publication No. 2009/0106178, hereafter known as Chu).

Regarding claim 27, Thomson teaches all of the limitations of claim 21 above. Thomson further teaches rating the experts based on their answers to user requests and further teaches barring experts from participating in the system if they violate terms of registering in the system in [0063]. However, Thomson does not explicitly teach removing at least one in-service automated agent from the federation of automated agents based on the rating of the at least one in-service automated agent.

Chu teaches: removing at least one in-service automated agent from the federation of automated agents based on the rating of the at least one in-service automated agent (see [0029] "The test results 226 and 236 are evaluated at 250 based upon a comparison of the test results 226 and 236 with respect to the performance criteria 240. The performance evaluation process 250 includes a determination at 260 as to whether performance of the champion model 222 has degraded to a particular degree so as to require a corrective action 270. Corrective actions can include building a first replacement predictive model to replace the original champion model 222 in order to improve performance of predictions within the production environment 202" for retiring an in-service agent in favor of a replacement agent based on the performance of the agent degrading. In combination with Thomson, experts with low ratings would be removed from the pool in favor of new experts with better ratings)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the retiring of an in-service model of Chu into the system of Thomson. As Chu states in [0023] “A champion predictive model is deployed for a period of time in production environments. Its predictive performance often degrades over time. A champion model may then need to be retired when its performance degradation reaches a certain threshold. The predictive model management system 34 (e.g., a model manager) can perform model monitoring tasks to identify underperforming models 50 in time to prevent problems caused by obsolete models. A champion model's degradation can be caused for many reasons, such as…the factors that contribute to predictability have changed.” Therefore, by removing lower performing models from service and replacing them with higher performing models being trained as competitors, the resulting combination would ensure that the in-service models of Thomson are the best models available to the system and would prevent Thomson from being bogged down by out-of-date models that are no longer helpful to the user even if they do not violate the terms of service of the Thomson federation.

Regarding claim 37, Thomson teaches all of the limitations of claim 21 above. Thomson further teaches rating the experts based on their answers to user requests and further teaches barring experts from participating in the system if they violate terms of registering in the system in [0063]. However, Thomson does not explicitly teach deregistering at least one in-service automated agent from the federation of automated agents based on the rating of the at least one in-service automated agent.

Chu teaches: deregistering an in-service automated agent from among the plurality of in-service automated agents, based on a rating of the in-service automated agent (see [0029] "The test results 226 and 236 are evaluated at 250 based upon a comparison of the test results 226 and 236 with respect to the performance criteria 240. The performance evaluation process 250 includes a determination at 260 as to whether performance of the champion model 222 has degraded to a particular degree so as to require a corrective action 270. Corrective actions can include building a first replacement predictive model to replace the original champion model 222 in order to improve performance of predictions within the production environment 202" for retiring an in-service agent in favor of a replacement agent based on the performance of the agent degrading)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the retiring of an in-service model of Chu into the system of Thomson. As Chu states in [0023] “A champion predictive model is deployed for a period of time in production environments. Its predictive performance often degrades over time. A champion model may then need to be retired when its performance degradation reaches a certain threshold. The predictive model management system 34 (e.g., a model manager) can perform model monitoring tasks to identify underperforming models 50 in time to prevent problems caused by obsolete models. A champion model's degradation can be caused for many reasons, such as…the factors that contribute to predictability have changed.” Therefore, by removing lower performing models from service and replacing them with higher performing models being trained as competitors, the resulting combination would ensure that the in-service models of Thomson are the best models available to the system and would prevent Thomson from being bogged down by out-of-date models that are no longer helpful to the user even if they do not violate the terms of service of the Thomson federation.
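A minimal sketch of the retirement rationale the Examiner maps from Chu for claims 27 and 37: an agent whose rating degrades below a floor is deregistered and replaced by the best-rated challenger. The data model, the floor value, and the challenger pool are assumptions for illustration only, not Chu's or Thomson's actual design.

```python
# Illustrative-only sketch of claims 27/37: deregister in-service agents rated
# below a threshold and promote challengers (cf. Chu's champion/competing models).
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    rating: float  # running score of the agent's answers, cf. Thomson's expert ratings

def retire_underperformers(in_service: list[Agent], challengers: list[Agent],
                           floor: float = 0.4) -> list[Agent]:
    """Remove agents rated below `floor`; promote top challengers to fill the slots."""
    keep = [a for a in in_service if a.rating >= floor]
    bench = sorted(challengers, key=lambda a: a.rating, reverse=True)
    return keep + bench[: len(in_service) - len(keep)]

fed = retire_underperformers([Agent("expert-101", 0.9), Agent("expert-102", 0.2)],
                             [Agent("candidate-201", 0.7)])
print([a.name for a in fed])  # ['expert-101', 'candidate-201']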
Claim 29 is rejected under 35 U.S.C. 103 as being unpatentable over Thomson in view of Zhu and Skiba et al. (U.S. Pre-Grant Publication No. 2015/0006460, hereafter known as Skiba).

Regarding claim 29, Thomson teaches all of the limitations of claim 28 above. While Thomson teaches the bids/answers received from the experts comprising a number of fields in [0076] and the user request being text-based, Thomson does not explicitly teach experts reviewing a description field to generate a suggestion for the user request. Thomson also does not explicitly teach generating the suggested responses by comparing the description field to a training set.

Zhu further teaches: wherein the generating, by the number of agents in the federation of automated agents, at least one suggestion for responding to the ticket includes, reviewing a description field (see [0026] "the natural language inputs include prediction data 125 that is transmitted to a prediction server 115 as inputs to the generated model that was trained in the machine learning process using the training data 120. The natural language inputs can include inputs that may be provided to a domain-specific application, such as a report generator or search interface. For example, the natural language inputs can include textual inputs to a variety of fields displayed in an interface of the domain-specific application...A user may enter “gammar” to a field of an application. The improved recommendation system would be trained to identify this input as related in a contextual and domain-specific manner to “gamma” which may be a data processing component called a “gamma board” that is used within the particular industry or domain for which the predictive model has been trained. As a result, the improved recommendation system may perform a spelling correction to “gamma” and/or provide the user with a suggestion of “gamma board" for the automated agents reviewing a field describing the natural language that the user needs recommendations for)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include text fields in the request as taught by Zhu in the system of Thomson, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Thomson teaches dialog memory 575 in [0070] that can be used to develop and improve automated experts. Zhu further teaches storing previously received inputs in [0047]: “model training system 255 can also be configured with a machine learning process to train and output one or more training models 275 that are capable of generating natural language outputs based on historical natural language inputs which may have been provided by a user in the past and can be stored in memory 220 or memory 250” that are used as training data. However, the combination of Thomson and Zhu does not explicitly teach generating the suggested responses by comparing the description field to the training set.

Skiba further teaches: comparing a description field to a training set; and generating the at least one suggestion based on the comparison of the description field to the training set (see [0042] "analysis engine 216 attempts to resolve the generated question...analysis engine 216 accesses question/answer history 222 to extract related history 218 and finds an identical, or reasonably similar, answer to the question and automatically associates the answer to the question" for comparing the question in the user input to a set of previously stored questions and generating the answer based on the question's similarity to the historical questions. In combination with Thomson and Zhu, the automated agents can reference the Thomson dialog memory/Zhu training data that comprises historical inputs, as discussed in Thomson [0070]/Zhu [0047], to look for similar/identical questions in the training data)

One of ordinary skill in the art would have recognized that applying the known technique of generating a suggestion to a question based on finding an existing answer to the question in stored previously submitted questions of Skiba to the combination of Thomson and Zhu would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Skiba to the teaching of the combination of Thomson and Zhu would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such generating a suggestion to a question based on finding an existing answer to the question in stored previously submitted questions. Further, applying generating a suggestion to a question based on finding an existing answer to the question in stored previously submitted questions to the combination of Thomson and Zhu would have been recognized by one of ordinary skill in the art as resulting in an improved system that would allow more efficient output of suggestions. Particularly, by checking the training data of historical inputs for an already determined answer to a user’s input, the in-service models of the resulting combination could save time and processing power by not needing to generate the answer to every user’s input from scratch.
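The claim 29 combination, generating a suggestion by comparing a ticket's description field against stored question/answer pairs, can be sketched as below. The word-overlap similarity and the fallback behavior are illustrative assumptions, not Skiba's actual algorithm.

```python
# Sketch of the claim 29 mapping: reuse the stored answer of the closest
# matching historical question (cf. Skiba [0042] "identical, or reasonably
# similar"). All names and thresholds are hypothetical.

def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap on word sets, a stand-in for 'reasonably similar'."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def suggest(description: str, training_set: list[tuple[str, str]],
            min_sim: float = 0.5) -> str | None:
    """Return the stored answer whose question best matches the description."""
    sim, answer = max((word_overlap(description, q), ans) for q, ans in training_set)
    return answer if sim >= min_sim else None  # None -> generate from scratch instead

history = [("how do i reset the gamma board", "power-cycle the gamma board"),
           ("antenna alignment procedure", "run the alignment wizard")]
print(suggest("how do i reset the gamma board?", history))  # reuses the stored answer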
Claim 31 is rejected under 35 U.S.C. 103 as being unpatentable over Chu in view of Zhu.

Regarding claim 31, Chu teaches: A method of adding a provisional automated agent to a federation of automated agents, the federation of automated agents including a plurality of in-service automated agents, at least some of the plurality of in-service automated agents including a list of computer-readable instructions that, when executed by a computer, cause the computer to generate suggestions for responding to (see Fig. 4 and [0033] "FIG. 4 depicts at 300 a production environment 202 and a test environment 310. The production environment 202 contains champion prediction models to generate predictions for production purposes. The test environment 310 contains competing models 320 that are under development and which can act as replacement models for the champion models that are operating in the production environment 202" for evaluating performance of competing models (provisional agents) that can take the place of champion models in the production environment (federation of in-service automated agents). Examiner notes that the predictive models read on the "automated agents" of the instant invention, as the predictive models are code (see [0107]-[0114]) implemented on computers (see at least [0158])) receiving (see [0033] "the competing models 320 use data from the data source 230 collected at time point 1B" for receiving input data, see [0025] “the inputs into process 110 can include certain business user inputs, such as…one or more monitoring data sources” for the user providing the data sources) dispatching (see [0033] "the competing models 320 use data from the data source 230 collected at time point 1B" for passing the input data to the competing models) receiving, (see [0033] "the competing models 320 use data from the data source 230 collected at time point 1B to generate test results 330" for receiving an output from the competing models) evaluating the provisional automated agent by comparing the provisional suggestion to suggestions generated by a number of in-service automated agents in the federation of automated agents (see [0033] "A model manager compares the competing models' performance test results 330 (with respect to predicting data source 1B) with champion model 1's prediction performance 236 without using the decay indexes") and adding the provisional automated agent to the plurality of in-service automated agents in response to the evaluating (see [0033] "Based upon the comparison, if a competing model outperforms the champion model, then a corrective action can include replacing at 270 the champion model 222 with the competing model" for adding a provisional competing model into the production environment)

Chu does not explicitly teach the automated agents being configured to take tickets received from terminals as inputs and generating suggestions to those inputs from terminals. Further, while Chu teaches training data being used to train the competing models (see at least [0113]), Chu does not explicitly teach forwarding the received ticket to a training manager that receives a suggestion regarding the ticket from a provisional agent and evaluating the provisional agent’s response.

However, Zhu teaches: at least some of the plurality of in-service automated agents including a list of computer-readable instructions that, when executed by a computer, cause the computer to generate suggestions for responding to tickets received from one or more terminals (see Fig. 4 step 415 and [0065] "In operation 415, the server 115 determines an output including a plurality of natural language outputs. When the server 115 receives prediction data 125, the server 115 can apply the trained prediction model generated as a result of the training phase of the machine learning process to the transmitted inputs and can generate one or more natural language outputs that include domain-specific language and can be contextually correlated to inputs provided in a natural language format" and [0043] "the trained prediction models 275 that were generated as a result of performing the machine learning process, can receive natural language inputs and process the inputs to output predicted natural language outputs as domain-specific recommendations that can be optimized based on the natural language inputs and/or the domain-specific applications to which the inputs were provided" for the trained models 275, which are the in-service automated agents, generating suggestions to user tickets) receiving a ticket from a terminal (see Fig. 4 steps 405 and 410 and [0063] "For example, in operation 405, a client 105 receives an input including a plurality of natural language inputs. For example, a client 105 can receive a search query input to a domain-specific search interface. The user can enter one or more natural language words describing the search terms" and [0036]-[0037] "The communications module 230 transmits the computer-readable instructions and/or the natural language inputs stored on or received by the client 105 via network 235. The network 235 connects the client 105 to the server 115...the server 115 operates to receive, store and process the computer-readable instructions and/or the natural language inputs generated and received by client 105. In some embodiments, the server 115 can receive natural language inputs directly from one or more clients 105" for receiving a natural language search query from a terminal. Examiner is interpreting this query as falling under the broadest reasonable interpretation of "ticket" because it is a request for help. In the case of Zhu [0019] and [0026], the request is to provide recommended lexicon for a specific industry/domain) dispatching the ticket to a training manager; receiving, from the training manager, a provisional suggestion (see [0039] "The model training system 255 is configured to implement a machine learning process that receives natural language inputs as training input and generates a training model that can be subsequently used to predict natural language outputs as domain-specific recommendations based on natural language inputs that may be received by one or more domain-specific applications that may be configured on a client 105" and [0046] for the model training system overseeing the training of alternate models to the ones in use making predictions. In combination with Chu, the model training system would operate the testing environment, receive the data and apply it to the competing models to generate the test results that are compared to the results of the in-service models)

Regarding the inputs being tickets received from users at terminals, since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself; that is, in the substitution of the automated agents generating a natural language response to a user natural language input of Zhu for the models being predictive models generating predictions from received operating conditions of Chu. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.

Regarding the addition of the training manager, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a training manager to train new models of Zhu in the system of Chu, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Particularly, one of ordinary skill in the art would have recognized that Chu’s “Test Environment” implies the existence of a manager conducting the testing and training of the competing models. Therefore, one of ordinary skill in the art would have recognized that the explicit inclusion of a training manager of Zhu to conduct the operations of the Chu “Test Environment” would have had predictable results.
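The claim 31 champion/challenger evaluation maps naturally to a small sketch: a training manager replays the ticket to a provisional agent and promotes it if its suggestion scores at least as well as the in-service agents' average (cf. Chu [0033]'s comparison of competing and champion models). The agent callables and the scoring callback are placeholder assumptions, not the references' actual interfaces.

```python
# Hedged sketch of claim 31: evaluate a provisional (challenger) agent against
# the in-service (champion) agents on the same ticket; promote on a win or tie.
from statistics import mean

def evaluate_provisional(ticket: str, in_service: list, provisional, score) -> bool:
    """Promote when the provisional suggestion scores >= the champions' mean score."""
    champ_scores = [score(ticket, agent(ticket)) for agent in in_service]
    return score(ticket, provisional(ticket)) >= mean(champ_scores)

# toy agents: callables mapping a ticket to a suggestion
champs = [lambda t: "restart service", lambda t: "restart the service"]
challenger = lambda t: "restart the affected service"
naive_score = lambda t, s: len(set(t.split()) & set(s.split()))  # word overlap with the ticket
print(evaluate_provisional("service down, restart?", champs, challenger, naive_score))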
Claim 36 is rejected under 35 U.S.C. 103 as being unpatentable over Thomson in view of Zhang (U.S. Patent No. 10,692,006; hereafter known as Zhang) and Zitouni et al. (U.S. Patent No. 11,880,761; hereafter known as Zitouni).

Regarding claim 36, Thomson teaches all of the limitations of claim 32 above. Thomson further teaches in [0070] “A dialog memory 575 serves as a storage location for information used or created by elements of the dialog federator 125. It may contain information for the current session and historical information, such as from previous sessions… Part or all of the session metadata and information stored in the dialog memory 575 can be saved into a log for future reference and for off-line research and development. Part or all of this log data may be stored in log 585 of FIG. 5. Some of this log data may be made available to vendors for use in developing and improving experts and to experts for providing better answers”. As such, Thomson teaches that inputs and answers can be added to a data set that can be used to improve and generate automated experts. However, Thomson does not explicitly teach adding the ticket and a suggestion generated by at least one of the number of in-service automated agents to a test set in response to the score indicative of the quality of the subset of suggestions being greater than a quality rating threshold.

Zhang teaches: add the ticket and at least one of the suggestions to a test set in response to the score indicative of the quality of the subset of suggestions being greater than (see Col. 14 lines 25-37 “At block 504, process 500 can obtain a set of training items, each training item including an indication of a question and an indication of a user who provided an answer to the question (e.g. an “expert”). In various implementations, each training item can include one or more of: an indication of a person who asked the question or a measure of how good the answer, provided by the expert, was. In various implementations, the measure can be based on scores provided by users for the answer, a number of people who liked the answer, or a number of shares of the answer. In some implementations, the set of training items can include items corresponding to answers that received a comparatively high score” for question and answer pairs being selected as training items for a model based on the score for a question and answer pair being high. The training items used to train the models read on “test set” because, per paragraph [0113] of Applicant’s disclosure, the test set is a set of data used to train the agents)

One of ordinary skill in the art would have recognized that applying the known technique of incorporating highly scoring answers into a set of training items for a model of Zhang to the system of Thomson would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Zhang to the teaching of the system of Thomson would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such incorporating highly scoring answers into a set of training items for a model. Further, applying incorporating highly scoring answers into a set of training items for a model to the system of Thomson would have been recognized by one of ordinary skill in the art as resulting in an improved system that would allow more robust training of the automated agents. Specifically, by incorporating highly scored answers in the part of the dialog memory used to improve and generate automated experts of Thomson [0070], the resulting experts will be more robust because the training data comprises highly rated answers. The resulting trained models that the combination will use to determine answers to future user inputs will have been trained off of higher quality training data and therefore likely be higher-performing and more useful.

While the combination of Thomson and Zhang teaches adding a ticket/answer pair to a test set in response to the score of the quality of the answer being “comparatively high”, the combination still does not explicitly teach the inclusion into the training items based on the score being greater than a quality rating threshold. However, Zitouni teaches answers used to train a model based on whether a score of the answer exceeds a threshold (see Col. 8 lines 6-10 “the new domain expert 111C is trained utilizing only a portion of the answers. The answers selected for training may be the most relevant and/or have a weight that meets a predetermined weight threshold”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a threshold value as taught by Zitouni in the combination of Thomson and Zhang, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Particularly, one of ordinary skill in the art would have recognized that a threshold could have been added to the scoring of Zhang to determine which answers had “a comparatively high score” and which did not. One of ordinary skill in the art would have recognized that the use of a threshold value to make this determination of high score could be added with predictable results.

Claim 38 is rejected under 35 U.S.C. 103 as being unpatentable over Thomson in view of Richardson (U.S. Pre-Grant Publication No. 2010/0070554, hereafter known as Richardson).

Regarding claim 38, Thomson teaches all of the limitations of claim 21 above. Thomson further teaches that a subset of the experts may be sent invites to respond to a user request. However, Thomson does not explicitly teach dispatching the ticket to a first number of in-service automated agents in response to a complexity of the ticket being greater than a threshold; and dispatching the ticket to a second number of in-service automated agents, the second number being less than the first number of in-service automated agents, in response to the complexity of the ticket being less than the threshold.

Richardson teaches: dispatching the ticket to a first number of in-service automated agents in response to a complexity of the ticket being greater than a threshold; and dispatching the ticket to a second number of in-service automated agents, the second number being less than the first number of in-service automated agents, in response to the complexity of the ticket being less than the threshold (see [0073] “Whether or not a question is "advanced" can be assessed in various ways…The expertise analysis module 404 can also deem the question to be difficult if there have already been one or more unsuccessful attempts to find an appropriate expert” and [0082]-[0083] “The decision as to how many experts should be selected can itself be based on various criteria… Another factor that has a bearing on the selection is the history of any prior attempts to route the question to appropriate experts (if such history exists). For example, assume that the selection management 402 first sends the question to a first group of experts. If no expert answers the question, the selection management 402 may decide to expand the number of experts to which it sends the question” for routing difficult questions (ones that have not been answered satisfactorily by a small number of agents) to a larger number of experts. In the combination, the threshold of complexity is whether the question has been attempted by automated agents of Thomson previously or not, specifically in the case contemplated in Thomson [0079] in which no automated experts respond to an invite to answer)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sending of questions to different sized groups of agents of Richardson into the system of Thomson. As Richardson states in [0082] “the selection management module 402 may wish to select a sufficiently large pool of experts to ensure that at least one of the experts will agree to answer the question. But the selection management module 402 may otherwise wish to prevent the question from being sent to an unnecessarily large pool of experts; this is because this approach will unnecessarily disturb a large number of experts”. While Richardson itself is referring to human experts, one of ordinary skill in the art would have recognized the desire to not unnecessarily spend resources and commit automated agents of the combined system to answer lower difficulty questions. However, if an original number of agents could not determine an answer to a question previously, one of ordinary skill in the art would have recognized that broadening the agents consulted to answer a difficult question would increase the chances that the end user would be able to get an answer to their question.

Claim 40 is rejected under 35 U.S.C. 103 as being unpatentable over Thomson in view of Heck (U.S. Patent No. 7,809,664; hereafter known as Heck).

Regarding claim 40, Thomson teaches all of the limitations of claim 21 above. Thomson further teaches evaluating answers from automated experts based on similarity as discussed regarding claim 21, and also teaches consulting human agents under certain circumstances in which all bids/answers are rejected in [0079]. However, Thomson does not explicitly teach soliciting human input from a human expert based on the similarity of suggestions.

Heck teaches: soliciting input from a human expert based on the similarity of suggestions (see Col. 9 lines 17-22 “feature router 112 may filter questions that QA robot system 100 is unable to answer. For example, suppose system 100 receives a question for which there is no expert. In such a case, feature router 112 might send the question to a set of human experts to be answered” and Col. 11 lines 57-63 “when a candidate answer 115 does not exceed the confidence level threshold determined by decision maker 140, question 101 and the suggested answer can be sent to human experts 150 to be answered. Human experts can generally refer to a human expert on a particular topic, a panel of experts, a group of computer users, or any other type of human input that can answer the question”. In combination with Thomson, if the system receives different responses from each expert or if the similarity of the expert answers is not above a threshold, the combination would look to a human for assistance)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sending of questions that cannot be answered confidently by the automated agents out to human experts to be answered. As Heck states in Col. 2 lines 50-63, the observation of human interaction allows the automated system to expand and refine its ability to answer user questions going forward. This refining would increase the functionality of the combined system, gradually allowing the combined system to rely less on human intervention while at the same time still having human experts on hand to provide users with answers that the system currently cannot answer.
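Finally, the claim 40 combination, escalating to a human expert when the agents' suggestions diverge, can be illustrated with basic word matching (the same kind of similarity the §101 discussion treats as performable in the mind). The Jaccard metric and the floor value are illustrative assumptions, not Heck's actual confidence mechanism.

```python
# Sketch of the claim 40 mapping: measure pairwise similarity among the agents'
# suggestions and solicit a human expert when agreement is too low (cf. Heck's
# routing of low-confidence questions to human experts 150).
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def needs_human(suggestions: list[str], floor: float = 0.5) -> bool:
    """True when mean pairwise similarity falls below the floor."""
    pairs = list(combinations(suggestions, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs) < floor

print(needs_human(["replace the antenna", "reboot the modem", "update firmware"]))  # True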
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Zhang et al. (U.S. Pre-Grant Publication No. 2018/0211260) teaches routing customer support tickets to customer service agents using a trained model.

Ghatage et al. (U.S. Pre-Grant Publication No. 2017/0372231) teaches analyzing text of a service request using natural language processing.

Bigus et al. (U.S. Patent No. 7,386,522) teaches selecting program modules to perform a task based on the domain knowledge of the program modules.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C MORONEY whose telephone number is (571) 272-4403. The examiner can normally be reached Mon-Fri 8:30-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux, can be reached at (571) 270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.C.M./
Examiner, Art Unit 3628

/EMMETT K. WALSH/
Primary Examiner, Art Unit 3628

Prosecution Timeline

Nov 18, 2021
Application Filed
Mar 01, 2025
Non-Final Rejection — §101, §102, §103
Jun 13, 2025
Response Filed
Jun 17, 2025
Interview Requested
Jul 08, 2025
Applicant Interview (Telephonic)
Jul 08, 2025
Examiner Interview Summary
Aug 06, 2025
Final Rejection — §101, §102, §103
Dec 12, 2025
Response after Final Action
Feb 11, 2026
Request for Continued Examination
Feb 23, 2026
Response after Non-Final Action
Feb 28, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602626
SYSTEMS AND METHODS FOR GENERATING TIME SLOT PREDICTIONS AND REPURCHASE PREDICTIONS USING MACHINE LEARNING ARCHITECTURES
2y 5m to grant Granted Apr 14, 2026
Patent 12567018
System and Method For Enabling Unattended Package Delivery to Multi-Dwelling Properties
2y 5m to grant Granted Mar 03, 2026
Patent 12548098
CONTINUOUS MONITORING SYSTEM FOR DETECTING, LOCATING, AND QUANTIFYING FUGITIVE EMISSIONS
2y 5m to grant Granted Feb 10, 2026
Patent 12511660
METHOD AND APPARATUS FOR CALCULATING CARBON EMISSION RESPONSE BASED ON CARBON EMISSION FLOWS
2y 5m to grant Granted Dec 30, 2025
Patent 12498728
CONTROL SYSTEM AND CONTROL METHOD
2y 5m to grant Granted Dec 16, 2025
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
26%
Grant Probability
51%
With Interview (+25.1%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 123 resolved cases by this examiner. Grant probability derived from career allow rate.
