Prosecution Insights
Last updated: April 19, 2026
Application No. 18/399,307

SYSTEMS AND METHODS FOR GATHERING AGENT PERFORMANCE METRICS AND TRANSLATING METRICS TO NORMALIZED GOAL PERCENTAGES TO EVALUATE AGENT PERFORMANCE

Final Rejection: §101, §103
Filed: Dec 28, 2023
Examiner: KNIGHT, LETORIA G
Art Unit: 3623
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Genesys Cloud Services Inc.
OA Round: 2 (Final)
Grant Probability: 27% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 9m
Grant Probability with Interview: 73%

Examiner Intelligence

Career Allow Rate: 27% (46 granted / 173 resolved; -25.4% vs TC avg)
Interview Lift: +46.5% among resolved cases with interview
Avg Prosecution: 2y 9m
Currently Pending: 39
Total Applications: 212 (across all art units)

Statute-Specific Performance

§101: 43.9% (+3.9% vs TC avg)
§103: 38.6% (-1.4% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Based on career data from 173 resolved cases; comparisons are against Tech Center average estimates.

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This is a final office action in response to the amendment filed 23 December 2025. Claims 1, 10, 12, and 19 have been amended. Claim 9 has been canceled. Claim 21 is newly added.

Response to Amendment

Applicant's amendments to claims 1, 10, 12, and 19 and newly added claim 21 have been entered. Applicant's amendment is insufficient to overcome the pending 35 U.S.C. 101 rejection. The rejection remains pending and is updated below, as necessitated by amendment. Applicant's amendment is insufficient to overcome the pending 35 U.S.C. 103 rejection. The rejection remains pending and is updated below, as necessitated by amendment.

Response to Arguments

Applicant's arguments regarding the 35 U.S.C. 103 rejection have been fully considered, but are moot in view of the new grounds of rejection necessitated by Applicant's amendment to the claims because the arguments do not apply to the combination of references used in the current rejection detailed below. Applicant asserts that under Step 2A Prong One the claims are not directed to an abstract idea, and do not fall within the certain methods of organizing human activity grouping because the claims are not directed to managing personal behavior or relationships or interactions between people. Applicant further asserts that under Step 2A Prong Two, the claim limitations integrate any alleged abstract idea into a practical application because the claims include specific recitations that place meaningful limits on the alleged exception by detailing how the system compares agent performance, including comparing agent performance over the plurality of agent performance metrics. Applicant lastly asserts that each of the amended claims recites significantly more than any alleged abstract idea under Step 2B because under 35 U.S.C.
103 the limitations are not taught by the prior art of record. Examiner respectfully disagrees. Per MPEP 2106.04(a)(2)(II), certain methods of organizing human activity is defined as activity that falls within the enumerated sub-groupings of fundamental economic principles or practices, commercial or legal interactions, managing personal behavior, and relationships or interactions between people. Certain activity between a person and a computer (for example, a method of anonymous loan shopping that a person conducts using a mobile phone) may fall within the "certain methods of organizing human activity" grouping. The number of people involved in the activity is not dispositive as to whether a claim limitation falls within this grouping. Instead, the determination should be based on whether the activity itself falls within one of the sub-groupings. Managing or evaluating agent performance according to a common standard, calculating a goal percentage for each agent performance metric, and comparing agent performance over the plurality of agent performance metrics using a processor is a form of managing the personal behavior of an agent with respect to performance of a job function. Therefore, the claims are directed to an abstract idea that falls within certain methods of organizing human activity under Step 2A Prong One. The claim limitation for "obtaining metric data" is a data gathering step that is construed as extra-solution activity because it merely provides input for the recited data processing steps. Per MPEP 2106.05(g), extra-solution activity includes both pre-solution and post-solution activity. The steps to "convert, based on the metric data, each of the plurality of raw performance indicators into a points score" and "calculate, based on the points score, a goal percentage" are mathematical concepts. MPEP § 2106.04(a)(2)(I) defines "mathematical concepts" as mathematical relationships, mathematical formulas or equations, and mathematical calculations.
The MPEP expressly recognizes mathematical relationships and calculations as constituting patent-ineligible abstract ideas. MPEP § 2106.04(a). The step to "compare, based on the plurality of goal percentages, agent performance over the plurality of agent performance metrics" is a judgment, evaluation, or observation that can practically be performed in the mind as a mental process. The MPEP expressly recognizes such mental processes as constituting patent-ineligible abstract ideas. MPEP § 2106.04(a)(2)(III). Displaying the results of the collection and analysis "to facilitate visualized comparison" is insignificant post-solution activity that does not add meaningful limitations beyond generally linking the abstract idea to the particular technological environment. The claimed processor and memory are broadly and generically claimed and are merely used as tools to implement the abstract concepts recited in the claims. Examiner notes that a "display" element is not positively recited as part of the system of claim 1 or claim 19. The limitations are void of technological improvements to the computing system used to implement the abstract idea. Therefore, the 35 U.S.C. 101 rejection is proper, maintained, and updated below, as necessitated by amendment.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 and 10-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claim 1 recites a device, independent claim 12 recites a process, and independent claim 19 recites a product for evaluating agent performance. Independent claims 1 and 19 recite substantially similar limitations.
Taking claim 1 as representative for claim 19, claim 1 recites the following limitations: obtain metric data for one or more agents including a plurality of raw performance indicators each corresponding to a particular agent performance metric of a plurality of agent performance metrics; convert, based on the metric data, each of the plurality of raw performance indicators into a points score for the particular agent performance metric to provide a plurality of points scores for the plurality of agent performance metrics; calculate, based on the points score, a goal percentage for each particular agent performance metric to provide a plurality of goal percentages for the plurality of agent performance metrics; and compare, based on the plurality of goal percentages, agent performance over the plurality of agent performance metrics including to display, for each agent in an agent profile selected by a user, the plurality of goal percentages for the plurality of agent performance metrics assigned to the particular agent to facilitate visualized comparison of agent performance across the selected agent profile, in which the selected agent profile is indicative of a characteristic of a work schedule, wherein the characteristic is shared by agents associated with the selected agent profile, wherein to convert each of the plurality of raw performance indicators into a points score comprises to normalize the points score for the particular agent performance metric across a plurality of variables to evaluate agent performance according to a common standard.
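The converting and calculating limitations recited above describe a simple numeric pipeline: a raw performance indicator is normalized into a points score, and the points score is then expressed as a goal percentage. A minimal sketch of that reading, with purely hypothetical zone thresholds, function names, and a maximum score (the claims recite no specific values), could look like:

```python
# Hypothetical sketch of the claimed pipeline: raw indicator -> points score
# (normalized by zones of progress toward a target) -> goal percentage.
# Zone thresholds and MAX_POINTS are illustrative only, not from the application.

ZONES = [(0.50, 0), (0.75, 5), (0.90, 8), (1.00, 10)]  # (progress threshold, points)
MAX_POINTS = 10

def to_points_score(raw: float, target: float) -> int:
    """Convert a raw performance indicator into a points score by zone."""
    progress = raw / target
    points = 0
    for threshold, pts in ZONES:
        if progress >= threshold:
            points = pts  # highest zone reached wins
    return points

def goal_percentage(points: int) -> float:
    """Express the points score as a percentage of the attainable maximum."""
    return 100.0 * points / MAX_POINTS

# Example: an agent resolves 27 calls against a 30-call target on one metric.
score = to_points_score(27, 30)   # progress 0.9 falls in the 8-point zone
pct = goal_percentage(score)      # 80.0
```

Under this reading the goal percentage is simply the points score relative to its maximum; the application may define the mapping differently.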
Claim 12 recites the following limitations: obtaining, by a computing system, metric data for one or more agents including a plurality of raw performance indicators each corresponding to a particular agent performance metric of a plurality of agent performance metrics; converting, by a computing system and based on the metric data, each of the plurality of raw performance indicators into a points score for the particular agent performance metric to provide a plurality of points scores for the plurality of agent performance metrics; calculating, by the computing system and based on the points score, a goal percentage for each particular agent performance metric to provide a plurality of goal percentages for the plurality of agent performance metrics; and comparing, by the computing system and based on the plurality of goal percentages, agent performance over the plurality of agent performance metrics, wherein converting each of the plurality of raw performance indicators into a points score comprises normalizing the points score for the particular agent performance metric across a plurality of variables to evaluate agent performance according to a common standard, and wherein comparing agent performance over the plurality of agent performance metrics comprises comparing, for each agent in an agent profile selected by a user in which the selected agent profile is indicative of a characteristic of a work schedule and the characteristic is shared by agents associated with the selected agent profile, agent performance for the particular agent over a first predefined time interval to agent performance for the particular agent over a second predefined time interval. Under Step 1, independent claims 1, 12, and 19 recite at least one step or act, including obtaining metric data. Thus the claims fall within one of the statutory categories of invention. 
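Claim 12 additionally recites comparing, for each agent in the user-selected profile, performance over a first predefined time interval against performance over a second predefined time interval. That comparison can be sketched as follows, with hypothetical data shapes and names (nothing here is taken from the application itself):

```python
# Hypothetical sketch of claim 12's interval comparison: for each agent in the
# selected profile, compare mean goal percentages over two predefined windows.
from statistics import mean

def compare_intervals(profile_agents, pct_by_agent_interval):
    """Return each agent's change in mean goal percentage between intervals."""
    deltas = {}
    for agent in profile_agents:
        first = mean(pct_by_agent_interval[agent]["first"])
        second = mean(pct_by_agent_interval[agent]["second"])
        deltas[agent] = second - first
    return deltas

# Example: one agent's goal percentages across two metrics, two intervals.
data = {"agent_a": {"first": [70.0, 80.0], "second": [90.0, 80.0]}}
result = compare_intervals(["agent_a"], data)  # {"agent_a": 10.0}
```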
Under Step 2A Prong One, the limitations recited in the claims for obtaining metric data for one or more agents, converting each of the plurality of raw performance indicators into a points score for the particular agent performance metric, calculating a goal percentage for each particular agent performance metric, comparing agent performance over the plurality of agent performance metrics, normalizing the points score for the particular agent performance metric across a plurality of variables, and comparing agent performance for the particular agent over a predefined time interval, under their broadest reasonable interpretation, fall within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas, because the inventive concept involves managing the behavior of call/contact center agents by gathering job performance metrics to determine whether business goals are being met for providing various types of services to customers and improving overall contact center performance and the customer experience. See at least Spec. at [0056, 0076]. Each of the claimed steps could also be performed mentally or through use of a pen and paper and, as a result, reasonably falls under the mental processes grouping of abstract ideas. Additionally, the steps for converting raw performance indicators into a points score, calculating a goal percentage, and normalizing the points score fall within the mathematical concepts grouping of abstract ideas because the scoring, normalizing, and converting steps involve mathematical relationships and calculations. Accordingly, the claims recite an abstract idea. Under Step 2A Prong Two, the judicial exception of claim 12 is not integrated into a practical application. In particular, the claims only recite a processor and storage device for performing the recited steps.
These additional elements are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) and amount to no more than mere instructions to apply the exception using generic computer components. See MPEP 2106.05(f). For example, Applicant's specification at paragraph [0051] states: "The computing device 100 may be any workstation, desktop computer, laptop or notebook computer, server machine, … or any other type of computing, telecommunications or media device, without limitation, capable of performing the operations and functionality described herein." Adding generic computer components to perform generic functions, such as data gathering, performing calculations, and outputting a result, would not transform the claim into eligible subject matter. See MPEP 2106.05(h). The Specification does not provide additional details about the computer system that would distinguish it from any generic processing devices that communicate with one another in a network environment. The claim fails to recite actual interface functionality. As claimed herein, the limitations for displaying the results of the data processing steps amount to insignificant extra-solution activity, not to specific interaction with an object selected from a graphical user interface (GUI) with claim limitations directed to how the interface performs in response to a specific action in a particular interface interaction in a technical manner that provides a technical improvement to interface technology. The claims fail to recite a significant amount of interaction with a particular interface element that uses the information in a particular way such that the claims integrate the judicial exception into a practical application by modifying or improving the functions of the GUI.
Specifying that the abstract idea of gathering and analyzing agent performance metrics to evaluate agent performance is executed in a computer environment merely indicates a field of use, i.e., execution on a generic computer. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Under Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of a processor and storage device amount to no more than mere instructions to apply the exception using a generic computer component, which cannot provide an inventive concept. See MPEP 2106.05. Dependent claims 2-8, 11, 13-18, and 20-21 include the abstract ideas of the independent claims. The limitations of the dependent claims merely narrow the method of organizing human activity abstract idea by describing how the gathered performance metrics are manipulated, analyzed, and presented to users. The limitations of the dependent claims are not integrated into a practical application because none of the additional elements set forth any limitations that meaningfully limit the abstract idea implementation. There are no additional elements that transform the claim into a patent-eligible idea by amounting to significantly more. The analysis above applies to all statutory categories of invention. Accordingly, independent claims 1 and 19 and the claims that depend therefrom are rejected as ineligible for patenting under 35 U.S.C. 101 based upon the same analysis applied to claim 12 above. Therefore, claims 1-8 and 10-21 are ineligible under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1-8 and 10-21 are rejected under 35 U.S.C. 103 as being unpatentable over Bhat et al. (US 11,528,362) in view of Pal et al. (US 2022/0351229), and in view of Samborskyy et al. (US 2014/0181676). Regarding Amended Claim 1, Bhat et al. discloses
a system for evaluating agent performance comprising: at least one processor; and at least one memory having a plurality of instructions stored therein that, in response to execution by the at least one processor, causes the system to: (… methods and systems for providing a data consolidation for improved customer communication and agent performance evaluation in a multi-channel contact center. Bhat et al. [col. 1, lines 5-10]. The example computing device 1002 as illustrated includes a processing system 1004, one or more computer-readable media 1006, and one or more I/O interfaces 1008 that are communicatively coupled, one to another. Bhat et al. [col. 25, lines 45-55]. … the contact center 101 may additionally include a workforce management unit 145 configured to manage the agents of the contact center 101, including setting the work schedules for the agents of the contact center in accordance with predicted demand. Bhat et al. [col. 10, lines 39-55]); obtain metric data for one or more agents including a plurality of raw performance indicators each corresponding to a particular agent performance metric of a plurality of agent performance metrics; (… the data consolidation unit 135 may also consolidate the interaction activities of an agent throughout the services s/he provided to customers. A personalized context may be similarly generated for the agent, where the generated context may also include personal information of the agent, interaction activities of the agent, as well as other related context related to the interaction activities (e.g., surveys provided by customers for the agent). Bhat et al. [col. 8, lines 15-50]. … the interaction analysis unit 137 may include a plurality of metrics that may be used to measure agent performance of an agent. The plurality of metrics may be generated based on the consolidation data for an agent. 
That is, the data from different sources (e.g., different departments, different teams, or across different communication channels) are all combined in generating the metrics for the agent. Bhat et al. [col. 9, lines 15-43]); convert, based on the metric data, each of the plurality of raw performance indicators into a points score for the particular agent performance metric to provide a plurality of points scores for the plurality of agent performance metrics; (… based on the retrieved data, the performance management unit 143 may determine one or more performance scores for the agent (e.g., an overall performance score across multiple channels, a set of performance scores for different categories or different channels). Based on the determined scores for the agents, the performance management unit 143 may then rank the agents in general or in one or more categories. Bhat et al. [col. 9, lines 60-67; col. 10, lines 1-16]. …the agent performance may be evaluated by a performance score, a knowledge score, a sentiment+quality score, and/or a VoC score. Any metrics that affect one or more of the above scores may be then considered as an identified metric for evaluation of the agent performance and may be included in an equation or algorithm for calculating the performance score(s). Bhat et al. [col. 15, lines 35-61]); Bhat et al. fails to explicitly disclose the step to calculate, based on the points score, a goal percentage for each particular agent performance metric to provide a plurality of goal percentages for the plurality of agent performance metrics; and compare, based on the plurality of goal percentages, agent performance over the plurality of agent performance metrics including to display. Pal et al. discloses these limitations. 
(… calculating the agent performance index may include: measuring agent performance in relation to each of a plurality of key performance indicators (KPIs); for each KPI, calculating a normalized performance score based on a ratio of the agent performance related to the KPI to a KPI goal; and calculating the agent performance index as a combination of the normalized performance scores. Pal et al. [para. 0013]. … the engagement index may be calculated as the relative agent's activity as compared to other agent's activity on the gamification module. … the performance index may be calculated by comparing the agent's performance against pre decided goals. … The gamification challenges may be presented to the agents on a display, as part of a graphical user interface (GUI) using a dedicated application of gamification pages in a web browser. For example, a gamification page may include the goals of a level of the gamification challenge, and associated rewards. Pal et al. [para. 0028-0030]. … An agent engagement index may be calculated as a combination (e.g., weighted average) of the normalized engagement scores. In some embodiments the engagement parameter goal may equal engagement of other agents with relation to the same engagement parameter. Pal et al. [para. 0045-0048; Fig. 4]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the calculating steps of Bhat et al. to include the step to calculate, based on the points score, a goal percentage for each particular agent performance metric to provide a plurality of goal percentages for the plurality of agent performance metrics; and compare, based on the plurality of goal percentages, agent performance over the plurality of agent performance metrics as disclosed by Pal et al. to measure the performance of the agents on their primary job (Pal et al. [para. 
0003]), in a manner that would yield predictable results at the relevant time. While Bhat et al. discloses agent-specific content includes the user profile of an agent and any customer interactions from the agent (Bhat et al. [col. 24, lines 60-60]) and use of an interface and workforce management unit for setting work schedules (Bhat et al. [col. 10, lines 03-55]; FIG. 7A illustrates an example snapshot 700 of a user interface 701 for monitoring metrics regarding agent performance, according to embodiments of the disclosure. The user interface may be an agent performance management (APM) dashboard. Bhat et al. [col. 22, lines 12-25; Fig. 7A-8B]), Bhat et al. and Pal et al. combined fail to explicitly disclose including to display for each agent in an agent profile selected by a user, the plurality of goal percentages for the plurality of agent performance metrics assigned to the particular agent to facilitate visualized comparison of agent performance across the selected agent profile, in which the selected agent profile is indicative of a characteristic of a work schedule, wherein the characteristic is shared by agents associated with the selected agent profile. Samborskyy discloses these limitations. (Selection of the particular user causes the specialized application 52 to retrieve the user's profile information from the third party database. Samborskyy et al. [para. 0385; Fig. 52]. … A contact center worker may view reports associated with a particular worker via the agent management UI page 500. … the report may display the productivity, average handling time, after call work time, current call time, call disposition, or any other relevant metric for an agent of a contact center. In one embodiment, the report displays the particular agent's report along with a comparison to the average of all agents, a goal, or a standard. … agents may be configured in aggregate. Samborskyy et al. [para. 0109-0113; Fig. 6-7]. 
… The report workspace 602 may represent a display area for a report dashboard. … Report widgets 604, according to one embodiment, are visualizations for particular contact center metrics. Samborskyy et al. [para. 0184-0186]. … . The dashboard UI 1000 may include a plurality of UI pages for monitoring and/or configuring a contact center. Samborskyy et al. [para. 0202-0205, 0212-0215 (agent group dashboard UI)]. … The activity report may include a segmented circle chart proportionally showing the activity of the agent including time on call, time on standby, and time on break, and include a text display of the percentage of time on call for a predefined period of time (e.g., since midnight) together with a color-coded arrow indicating the trend. Samborskyy et al. [para. 0221-0228 (agent widgets)] … the report panel 1804 may display a report of an agent's call activity for a relevant time period, an agent's call handle time, and/or a status of calls of the contact center. Samborskyy et al. [para. 0308-0315; Fig. 33-34]. … a performance visualizer 1650A…is provided for displaying forecast or scheduled contact center metrics against actual contact center metrics. … The monitoring UI may also display different types of KPIs. Samborskyy et al. [para. 0278-0289]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the performance monitoring and reporting steps of Bhat et al. and Pal et al. 
combined to include the step to display for each agent in an agent profile selected by a user, the plurality of goal percentages for the plurality of agent performance metrics assigned to the particular agent to facilitate visualized comparison of agent performance across the selected agent profile, in which the selected agent profile is indicative of a characteristic of a work schedule, wherein the characteristic is shared by agents associated with the selected agent profile as disclosed by Samborskyy et al. for providing a dashboard user interface for contact center monitoring (Samborskyy et al. [para. 0007]), in a manner that would yield predictable results at the relevant time. wherein to convert each of the plurality of raw performance indicators into a points score comprises to normalize the points score for the particular agent performance metric across a plurality of variables to evaluate agent performance according to a common standard. (… if multiple different performance scores are used to evaluate agent performance, there may be multiple different equations included in the created agent performance measurement framework. Bhat et al. [col. 20, lines 10-30]. …the process of determining the overall performance score of an agent may include determining specific scores for certain categories and then determining an overall score for the agent based on the determining scores for these categories. Bhat et al. [col. 21, lines 10-61]. … agent performance measurement framework is a mechanism that aggregates a critical set of metrics that are indicative of agent performance and combines them into a single, all-inclusive measure of agent performance. Bhat et al. [col. 21, lines 62-67]. … KPIs are to be defined, benchmarked, and analyzed at the call reason/product level, which drives the right agent behavior and enables improved accuracy. Bhat et al. [col. 24, lines 7-25]). Regarding Claim 2, Bhat et al., Pal et al., and Samborskyy et al. 
combined disclose the system, wherein to normalize the points score for the particular agent performance metric comprises to establish a plurality of zones in progression toward a target for the conversion of at least one raw performance indicator corresponding to the particular agent performance metric into the points score. Pal et al. discloses this limitation. (… a gamification challenge may include a plurality of levels, each including one or more objectives. A gamification objective may include a qualitative goal defined on the performance of the agent in relation to one or more KPIs, and a plurality of rewards, each associated with completion of a level. Pal et al. [para. 0043-0045]. …calculating the agent performance index may include measuring agent performance in relation to each of a plurality of KPIs, for each KPI, calculating a normalized performance score based on a ratio of the agent performance related to the KPI to a KPI goal, and calculating the agent performance index as a combination of the normalized performance scores. Pal et al. [para. 0051]. … The reward for completing level 1 of the gamification challenge is 15 points. Pal et al. [para. 0061-0062; Table 1; Fig. 4]. … One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Pal et al. [para. 0069-0070]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the calculating steps of Bhat et al. and Samborskyy et al. combined to include the step to normalize the points score for the particular agent performance metric comprises to establish a plurality of zones in progression toward a target for the conversion of at least one raw performance indicator corresponding to the particular agent performance metric into the points score as disclosed by Pal et al. 
to measure the performance of the agents on their primary job (Pal et al. [para. 0003]), in a manner that would yield predictable results. Regarding Claim 3, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the system, wherein to normalize the points score for the particular agent performance metric comprises to determine the points score based on the particular zone of the plurality of zones that the at least one raw performance indicator falls in. Pal et al. discloses this limitation. (… a gamification challenge may include a plurality of levels, each including one or more objectives. A gamification objective may include a qualitative goal defined on the performance of the agent in relation to one or more KPIs, and a plurality of rewards, each associated with completion of a level. … Once a level is complete, rewards, badges or points may be awarded to the agents and the gamification challenge may proceed to the next level. Pal et al. [para. 0043-0045]… One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Pal et al. [para. 0069-0070]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the calculating steps of Bhat et al. and Samborskyy et al. combined to include the step to normalize the points score for the particular agent performance metric comprises to determine the points score based on the particular zone of the plurality of zones that the at least one raw performance indicator falls in as disclosed by Pal et al. to measure the performance of the agents on their primary job (Pal et al. [para. 0003]), in a manner that would yield predictable results. Regarding Claim 4, Bhat et al., Pal et al., and Samborskyy et al.
combined disclose the system, wherein to determine the points score based on the particular zone comprises to determine whether the at least one raw performance indicator falls in a first zone corresponding to a first points score, and wherein the first points score is zero points. Pal et al. discloses this limitation. ( A challenge may include multiple levels and a level may be made up of one or more objectives. Usually, levels get difficult to complete as the challenge progresses. An objective may be a condition or criterion defined on one or more metric or KPI. Gamification challenges may include badges or rewards that symbolizes achievement and are provided to agents or groups of agents once an objective or a level is achieved. Supervisors and managers typically create a challenge for their agents. They may set multiple levels within a challenge. Each level may have few objectives where each objective is associated with achieving performance goals. Pal et al. [para. 0044]. … Agents may also be presented a leaderboard where they can see the progress of themselves and other agents in the gamification challenge, namely how many levels they have completed, how many points they have earned and their respective rank. Pal et al. [para. 0061-0063; Fig. 4-6; Table 1] … One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Pal et al. [para. 0069-0070]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the calculating steps of Bhat et al. and Samborskyy et al. combined to include the step to determine the points score based on the particular zone comprises to determine whether the at least one raw performance indicator falls in a first zone corresponding to a first points score, and wherein the first points score is zero points as disclosed by Pal et al. 
to measure the performance of the agents on their primary job (Pal et al. [para. 0003]), in a manner that would yield predictable results. Regarding Claim 5, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the system, wherein to determine the points score based on the particular zone comprises to determine whether the at least one raw performance indicator falls in a second zone closer to the target than the first zone, and wherein the second zone corresponds to a second points score that is greater than the first points score. Pal et al. discloses this limitation. (A challenge may include multiple levels and a level may be made up of one or more objectives. Usually, levels get difficult to complete as the challenge progresses. An objective may be a condition or criterion defined on one or more metric or KPI. Gamification challenges may include badges or rewards that symbolizes achievement and are provided to agents or groups of agents once an objective or a level is achieved. Supervisors and managers typically create a challenge for their agents. They may set multiple levels within a challenge. Each level may have few objectives where each objective is associated with achieving performance goals. Pal et al. [para. 0044]. … Table 1 presents an example gamification challenge provided to the agent. The example gamification challenge includes 4 levels, each including four objectives. The objectives include a quantitative goal defined on the performance of the agent. Pal et al. [para. 0061-0063; Table 1; Fig. 4-6] … One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Pal et al. [para. 0069-0070]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the calculating steps of Bhat et al. and Samborskyy et al. 
combined to include the step to determine the points score based on the particular zone comprises to determine whether the at least one raw performance indicator falls in a second zone closer to the target than the first zone, and wherein the second zone corresponds to a second points score that is greater than the first points score as disclosed by Pal et al. to measure the performance of the agents on their primary job (Pal et al. [para. 0003]), in a manner that would yield predictable results. Regarding Claim 6, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the system, wherein to determine the points score based on the particular zone comprises to determine whether the at least one raw performance indicator falls in a third zone closer to the target than the second zone, and wherein the third zone corresponds to a third points score that is greater than the second points score. Pal et al. discloses this limitation. (A challenge may include multiple levels and a level may be made up of one or more objectives. Usually, levels get difficult to complete as the challenge progresses. An objective may be a condition or criterion defined on one or more metric or KPI. Gamification challenges may include badges or rewards that symbolizes achievement and are provided to agents or groups of agents once an objective or a level is achieved. Supervisors and managers typically create a challenge for their agents. They may set multiple levels within a challenge. Each level may have few objectives where each objective is associated with achieving performance goals. Pal et al. [para. 0044]. … Table 1 presents an example gamification challenge provided to the agent. The example gamification challenge includes 4 levels, each including four objectives. The objectives include a quantitative goal defined on the performance of the agent. Pal et al. [para. 0061-0063; Table 1; Fig. 
4-6] … One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Pal et al. [para. 0069-0070]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the calculating steps of Bhat et al. and Samborskyy et al. combined to include the step to determine the points score based on the particular zone comprises to determine whether the at least one raw performance indicator falls in a third zone closer to the target than the second zone, and wherein the third zone corresponds to a third points score that is greater than the second points score as disclosed by Pal et al. to measure the performance of the agents on their primary job (Pal et al. [para. 0003]), in a manner that would yield predictable results. Regarding Claim 7, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the system, wherein to calculate the goal percentage for each particular agent performance metric comprises to multiply the points score for the particular agent performance metric by a predetermined number of days to compute an aggregated points score for the particular agent performance metric over a predefined time interval. (… the data from different sources (e.g., different departments, different teams, or across different communication channels) are all combined in generating the metrics for the agent. The generated metrics may include a measurement of single user interaction or an average from a plurality of interactions within a certain period. Bhat et al. [col. 9, lines 15-31] … if multiple different performance scores are used to evaluate agent performance, there may be multiple different equations included in the created agent performance measurement framework. Bhat et al. [col. 20, lines 10-30; col. 25, lines 4-35]). 
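For orientation only, the arithmetic that the rejection maps to claims 2-8 can be sketched in a few lines: a raw indicator falls into one of several zones in progression toward a target (the first zone worth zero points), the resulting daily points score is multiplied by a predetermined number of days, and the aggregate is divided by the maximum achievable over the interval to yield a goal percentage. Every name, zone boundary, and point value below is a hypothetical illustration; none is taken from the application or from the Bhat, Pal, or Samborskyy references.

```python
# Hypothetical sketch of the claimed zone-based normalization and
# goal-percentage arithmetic; boundaries and values are invented.

def points_from_zones(raw: float, zones: list[tuple[float, int]]) -> int:
    """Map a raw performance indicator to the points score of the zone
    it falls in. Zones are (lower_bound, points) pairs in ascending
    progression toward the target; the first zone is worth zero."""
    score = 0
    for lower_bound, points in zones:
        if raw >= lower_bound:
            score = points  # keep the score of the highest zone reached
    return score

# Three illustrative zones toward a 90% adherence target:
# below 70% -> 0 points, 70-80% -> 5 points, 80% and up -> 10 points.
ZONES = [(0.0, 0), (0.70, 5), (0.80, 10)]

daily_points = points_from_zones(0.83, ZONES)  # falls in the third zone

# Multiply by a predetermined number of days, then divide the aggregate
# by the maximum achievable over the interval to get a goal percentage.
days = 30
max_points_per_day = 10
aggregated = daily_points * days
goal_percentage = 100.0 * aggregated / (max_points_per_day * days)
print(goal_percentage)
```

Under these invented zones, an 83% raw indicator scores 10 points per day, so the agent tracks at 100% of goal for the interval; a 75% indicator would score 5 points per day and track at 50%.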
Regarding Claim 8, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the system, wherein to calculate the goal percentage for each particular agent performance metric further comprises to divide the aggregated points score by a maximum number of points achievable over the predefined time interval to determine the goal percentage. (… the data from different sources (e.g., different departments, different teams, or across different communication channels) are all combined in generating the metrics for the agent. The generated metrics may include a measurement of single user interaction or an average from a plurality of interactions within a certain period. Bhat et al. [col. 9, lines 15-31]. … if multiple different performance scores are used to evaluate agent performance, there may be multiple different equations included in the created agent performance measurement framework. Bhat et al. [col. 20, lines 10-30]). Regarding Claim 9, [Canceled]. Regarding Amended Claim 10, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the system, wherein to compare agent performance over the plurality of agent performance metrics comprises to calculate, for each agent in the selected agent profile, an overall goal percentage value corresponding to an average of the plurality of goal percentages for the plurality of performance metrics assigned to the particular agent. (… the data from different sources (e.g., different departments, different teams, or across different communication channels) are all combined in generating the metrics for the agent. The generated metrics may include a measurement of single user interaction or an average from a plurality of interactions within a certain period. Bhat et al. [col. 9, lines 15-31] … if multiple different performance scores are used to evaluate agent performance, there may be multiple different equations included in the created agent performance measurement framework. Bhat et al. [col. 20, lines 10-30; col. 
25, lines 4-35]). Regarding Claim 11, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the system, wherein to display the plurality of goal percentages for the plurality of agent performance metrics comprises to display, for each agent in the selected agent profile, the overall goal percentage value to facilitate visualized comparison of overall goal percentage value across the selected agent profile. (FIG. 7A illustrates an example snapshot 700 of a user interface 701 for monitoring metrics regarding agent performance, according to embodiments of the disclosure. The user interface may be an agent performance management (APM) dashboard. Bhat et al. [col. 22, lines 12-25; Fig. 7A-8B]). Regarding Amended Claim 12, Bhat et al. discloses a method of evaluating agent performance comprising: obtaining, by a computing system, metric data for one or more agents including a plurality of raw performance indicators each corresponding to a particular agent performance metric of a plurality of agent performance metrics; (… methods and systems for providing a data consolidation for improved customer communication and agent performance evaluation in a multi-channel contact center. Bhat et al. [col. 1, lines 5-10; col. 25, lines 45-55 (computing devices)]. … the data consolidation unit 135 may also consolidate the interaction activities of an agent throughout the services s/he provided to customers. A personalized context may be similarly generated for the agent, where the generated context may also include personal information of the agent, interaction activities of the agent, as well as other related context related to the interaction activities. Bhat et al. [col. 8, lines 15-50]. … the interaction analysis unit 137 may include a plurality of metrics that may be used to measure agent performance of an agent. The plurality of metrics may be generated based on the consolidation data for an agent. 
That is, the data from different sources (e.g., different departments, different teams, or across different communication channels) are all combined in generating the metrics for the agent. Bhat et al. [col. 9, lines 15-43]); converting, by a computing system and based on the metric data, each of the plurality of raw performance indicators into a points score for the particular agent performance metric to provide a plurality of points scores for the plurality of agent performance metrics; (… based on the retrieved data, the performance management unit 143 may determine one or more performance scores for the agent… Based on the determined scores for the agents, the performance management unit 143 may then rank the agents in general or in one or more categories. Bhat et al. [col. 9, lines 60-67; col. 10, lines 1-16]. … Any metrics that affect one or more of the above scores may be then considered as an identified metric for evaluation of the agent performance and may be included in an equation or algorithm for calculating the performance score(s). Bhat et al. [col. 15, lines 35-61]); Bhat et al. fails to explicitly disclose calculating, by a computing system and based on the points score, a goal percentage for each particular agent performance metric to provide a plurality of goal percentages for the plurality of agent performance metrics; and comparing, by a computing system and based on the plurality of goal percentages, agent performance over the plurality of agent performance metrics. Pal et al. discloses these limitations. (… calculating the agent performance index may include: measuring agent performance in relation to each of a plurality of key performance indicators (KPIs); for each KPI, calculating a normalized performance score based on a ratio of the agent performance related to the KPI to a KPI goal; and calculating the agent performance index as a combination of the normalized performance scores. Pal et al. [para. 0013]. 
… the engagement index may be calculated as the relative agent's activity as compared to other agent's activity on the gamification module. … the performance index may be calculated by comparing the agent's performance against pre decided goals. Pal et al. [para. 0028-0030]. … An agent engagement index may be calculated as a combination (e.g., weighted average) of the normalized engagement scores. In some embodiments the engagement parameter goal may equal engagement of other agents with relation to the same engagement parameter. Pal et al. [para. 0045-0048; Fig. 4]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the calculating steps of Bhat et al. to include the steps of calculating, by a computing system and based on the points score, a goal percentage for each particular agent performance metric to provide a plurality of goal percentages for the plurality of agent performance metrics; and comparing, by a computing system and based on the plurality of goal percentages, agent performance over the plurality of agent performance metrics as disclosed by Pal et al. to measure the performance of the agents on their primary job (Pal et al. [para. 0003]), in a manner that would yield predictable results. Wherein converting each of the plurality of raw performance indicators into a points score comprises normalizing the points score for the particular agent performance metric across a plurality of variables to evaluate agent performance according to a common standard, (… if multiple different performance scores are used to evaluate agent performance, there may be multiple different equations included in the created agent performance measurement framework. Bhat et al. [col. 20, lines 10-30]. 
…the process of determining the overall performance score of an agent may include determining specific scores for certain categories and then determining an overall score for the agent based on the determining scores for these categories. Bhat et al. [col. 21, lines 10-61]. … agent performance measurement framework is a mechanism that aggregates a critical set of metrics that are indicative of agent performance and combines them into a single, all-inclusive measure of agent performance. Bhat et al. [col. 21, lines 62-67]. … KPIs are to be defined, benchmarked, and analyzed at the call reason/ product level, which drives the right agent behavior and enables improved accuracy. Bhat et al. [col. 24, lines 7-25]); and wherein comparing agent performance over the plurality of agent performance metrics comprises comparing, for each agent in an agent profile selected by a user… agent performance for the particular agent over a first predefined time interval to agent performance for the particular agent over a second predefined time interval. (The generated metrics may include a measurement of single user interaction or an average from a plurality of interactions within a certain period. Bhat et al. [col. 9, lines 15-31] … if multiple different performance scores are used to evaluate agent performance, there may be multiple different equations included in the created agent performance measurement framework. Bhat et al. [col. 20, lines 10-30; col. 25, lines 4-35]). Bhat et al. and Pal et al. fail to explicitly disclose comparing, for each agent in an agent profile selected by a user in which the selected agent profile is indicative of a characteristic of a work schedule and the characteristic is shared by agents associated with the selected agent profile, agent performance for the particular agent over a first predefined time interval to agent performance for the particular agent over a second predefined time interval. Samborskyy et al. discloses this limitation. 
(Selection of the particular user causes the specialized application 52 to retrieve the user's profile information from the third party database. Samborskyy et al. [para. 0385; Fig. 52]. … A contact center worker may view reports associated with a particular worker via the agent management UI page 500. … the report may display the productivity, average handling time, after call work time, current call time, call disposition, or any other relevant metric for an agent of a contact center. In one embodiment, the report displays the particular agent's report along with a comparison to the average of all agents, a goal, or a standard. … agents may be configured in aggregate. Samborskyy et al. [para. 0109-0113; Fig. 6-7]. … The report workspace 602 may represent a display area for a report dashboard. … Report widgets 604, according to one embodiment, are visualizations for particular contact center metrics. Samborskyy et al. [para. 0184-0186]. … The dashboard UI 1000 may include a plurality of UI pages for monitoring and/or configuring a contact center. Samborskyy et al. [para. 0202-0205, 0212-0215 (agent group dashboard UI)]. … The activity report may include a segmented circle chart proportionally showing the activity of the agent including time on call, time on standby, and time on break, and include a text display of the percentage of time on call for a predefined period of time (e.g., since midnight) together with a color-coded arrow indicating the trend. Samborskyy et al. [para. 0221-0228 (agent widgets)] … the report panel 1804 may display a report of an agent's call activity for a relevant time period, an agent's call handle time, and/or a status of calls of the contact center. Samborskyy et al. [para. 0308-0315; Fig. 33-34]. … a performance visualizer 1650A … is provided for displaying forecast or scheduled contact center metrics against actual contact center metrics. … The monitoring UI may also display different types of KPIs. Samborskyy et al. [para. 
0278-0289]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the performance monitoring and reporting steps of Bhat et al. and Pal et al. combined to include the step for comparing, for each agent in an agent profile selected by a user in which the selected agent profile is indicative of a characteristic of a work schedule and the characteristic is shared by agents associated with the selected agent profile, agent performance for the particular agent over a first predefined time interval to agent performance for the particular agent over a second predefined time interval as disclosed by Samborskyy et al. for providing a dashboard user interface for contact center monitoring (Samborskyy et al. [para. 0007]), in a manner that would yield predictable results at the relevant time. Regarding Claim 13, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the method, wherein normalizing the points score for the particular agent performance metric comprises: establishing a plurality of zones in progression toward a target for the conversion of at least one raw performance indicator corresponding to the particular agent performance metric into the points score; and determining the points score based on the particular zone of the plurality of zones that the at least one raw performance indicator falls in. Pal et al. discloses this limitation. (… a gamification challenge may include a plurality of levels, each including one or more objectives. A gamification objective may include a qualitative goal defined on the performance of the agent in relation to one or more KPIs, and a plurality of rewards, each associated with completion of a level. … Once a level is complete, rewards, badges or points may be awarded to the agents and the gamification challenge may proceed to the next level. Pal et al. [para. 
0043-0045]… One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Pal et al. [para. 0069-0070]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the calculating steps of Bhat et al. and Samborskyy et al. combined to include normalizing the points score for the particular agent performance metric comprises: establishing a plurality of zones in progression toward a target for the conversion of at least one raw performance indicator corresponding to the particular agent performance metric into the points score; and determining the points score based on the particular zone of the plurality of zones that the at least one raw performance indicator falls in as disclosed by Pal et al. to measure the performance of the agents on their primary job (Pal et al. [para. 0003]), in a manner that would yield predictable results. Regarding Claim 14, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the method, wherein determining the points score based on the particular zone comprises: determining whether the at least one raw performance indicator falls in a first zone that corresponds to a first points score; determining whether the at least one raw performance indicator falls in a second zone closer to the target than the first zone that corresponds to a second points score; and determining whether the at least one raw performance indicator falls in a third zone closer to the target than the second zone that corresponds to a third points score. ( A challenge may include multiple levels and a level may be made up of one or more objectives. Usually, levels get difficult to complete as the challenge progresses. An objective may be a condition or criterion defined on one or more metric or KPI. 
Gamification challenges may include badges or rewards that symbolizes achievement and are provided to agents or groups of agents once an objective or a level is achieved. Supervisors and managers typically create a challenge for their agents. They may set multiple levels within a challenge. Each level may have few objectives where each objective is associated with achieving performance goals. Pal et al. [para. 0044]. … Table 1 presents an example gamification challenge provided to the agent. The example gamification challenge includes 4 levels, each including four objectives. The objectives include a quantitative goal defined on the performance of the agent. … Agents may also be presented a leaderboard where they can see the progress of themselves and other agents in the gamification challenge, namely how many levels they have completed, how many points they have earned and their respective rank. Pal et al. [para. 0061-0063; Fig. 4-6] … One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Pal et al. [para. 0069-0070]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the calculating steps of Bhat et al. and Samborskyy et al. combined to include the step of determining the points score based on the particular zone comprises: determining whether the at least one raw performance indicator falls in a first zone that corresponds to a first points score; determining whether the at least one raw performance indicator falls in a second zone closer to the target than the first zone that corresponds to a second points score; and determining whether the at least one raw performance indicator falls in a third zone closer to the target than the second zone that corresponds to a third points score as disclosed by Pal et al. 
to measure the performance of the agents on their primary job (Pal et al. [para. 0003]), in a manner that would yield predictable results. Regarding Claim 15, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the method, wherein the first points score is zero points, the second points score is greater than the first points score, and the third points score is greater than the second points score. (An objective may be a condition or criterion defined on one or more metric or KPI. … Supervisors and managers typically create a challenge for their agents. They may set multiple levels within a challenge. Each level may have few objectives where each objective is associated with achieving performance goals. Pal et al. [para. 0044]. … One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Pal et al. [para. 0069-0070]). It would have been obvious to one of ordinary skill in the art of performance evaluations before the effective filing date of the claimed invention to modify the calculating steps of Bhat et al. and Samborskyy et al. combined to include the step wherein the first points score is zero points, the second points score is greater than the first points score, and the third points score is greater than the second points score as disclosed by Pal et al. to measure the performance of the agents on their primary job (Pal et al. [para. 0003]), in a manner that would yield predictable results. Regarding Claim 16, Bhat et al., Pal et al., and Samborskyy et al. 
combined disclose the method, wherein calculating the goal percentage for each particular agent performance metric comprises: multiplying the points score for the particular agent performance metric by a predetermined number of days to compute an aggregated points score for the particular agent performance metric over a predefined time interval; and dividing the aggregated points score by a maximum number of points achievable over the predefined time interval to determine the goal percentage. (… the data from different sources (e.g., different departments, different teams, or across different communication channels) are all combined in generating the metrics for the agent. The generated metrics may include a measurement of single user interaction or an average from a plurality of interactions within a certain period. Bhat et al. [col. 9, lines 15-31] … if multiple different performance scores are used to evaluate agent performance, there may be multiple different equations included in the created agent performance measurement framework. Bhat et al. [col. 20, lines 10-30; col. 25, lines 4-35]). Regarding Claim 17, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the method, wherein comparing agent performance over the plurality of agent performance metrics comprises: displaying, for each agent in the selected agent profile, the plurality of goal percentages for a plurality of agent performance metrics assigned to the particular agent to facilitate visualized comparison of agent performance across the selected agent profile; calculating, for each agent in the selected agent profile, an overall goal percentage value corresponding to an average of the plurality of goal percentages for the plurality of performance metrics assigned to the particular agent; and displaying, for each agent in the selected agent profile, the overall goal percentage value to facilitate visualized comparison of overall goal percentage value across the selected agent profile. 
(… the data from different sources (e.g., different departments, different teams, or across different communication channels) are all combined in generating the metrics for the agent. The generated metrics may include a measurement of single user interaction or an average from a plurality of interactions within a certain period. Bhat et al. [col. 9, lines 15-31] … if multiple different performance scores are used to evaluate agent performance, there may be multiple different equations included in the created agent performance measurement framework. Bhat et al. [col. 20, lines 10-30; col. 25, lines 4-35]. … FIG. 7A illustrates an example snapshot 700 of a user interface 701 for monitoring metrics regarding agent performance, according to embodiments of the disclosure. The user interface may be an agent performance management (APM) dashboard. Bhat et al. [col. 22, lines 12-25; Fig. 7A-8B]). Regarding Claim 18, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the method, wherein comparing agent performance over the first predefined time interval to agent performance over the second predefined time interval comprises displaying, for a particular agent in the selected agent profile over each of the first and second predefined time intervals, the goal percentage for the particular agent performance metric, the raw performance indicator corresponding to the particular agent performance metric, a number of days over which metric data including the raw performance indicator has been obtained, and the points score for the particular agent performance metric. (… the data from different sources (e.g., different departments, different teams, or across different communication channels) are all combined in generating the metrics for the agent. The generated metrics may include a measurement of single user interaction or an average from a plurality of interactions within a certain period. Bhat et al. [col. 
9, lines 15-31] … if multiple different performance scores are used to evaluate agent performance, there may be multiple different equations included in the created agent performance measurement framework. Bhat et al. [col. 20, lines 10-30; col. 25, lines 4-35]. … FIG. 7A illustrates an example snapshot 700 of a user interface 701 for monitoring metrics regarding agent performance, according to embodiments of the disclosure. The user interface may be an agent performance management (APM) dashboard. Bhat et al. [col. 22, lines 12-25; Fig. 7A-8B]). Regarding Amended Claim 19 and Claim 20, claims 19-20 recite substantially similar limitations to those of claim 1 (claim 1 is a combination of claims 19-20) and are therefore rejected based upon the same prior art combination, reasoning, and rationale. Claims 19-20 are directed to one or more non-transitory machine readable storage media comprising a plurality of instructions stored thereon that, in response to execution by a processor, a feature which is disclosed by Bhat et al. (Various techniques may be described herein in the general context of software, hardware elements, or program modules. … implementation of the described modules and techniques may be stored on or transmitted across some form of computer-readable media … “Computer-readable storage media” may refer to media and/or devices that enable persistent and/or non-transitory storage of information. Bhat et al. [col. 26, lines 35-67; col. 27, lines 1-30]). Regarding New Claim 21, Bhat et al., Pal et al., and Samborskyy et al. combined disclose the system, wherein to compare agent performance over the plurality of agent performance metrics comprises to compare agent performance over multiple time intervals. Samborskyy et al. discloses this limitation. (… the agent UI page 500 may display information (e.g., user entered, historical, and/or real time information) associated with agents of a contact center. Samborskyy et al. [para. 0094]. 
… A report widget 602 may be a display window for displaying contact center metrics (e.g., real-time or historical metrics) to a contact center worker. Samborskyy et al. [para. 0184-0186]. … the activity report may include a segmented circle chart proportionally showing the activity of the agent including time on call, time on standby, and time on break, and include a text display of the percentage of time on call for a predefined period of time (e.g., since midnight) together with a color-coded arrow indicating the trend. … The time period for the information being displayed may be configured by the time period widget 1260. Samborskyy et al. [para. 0221-0230 (agent widgets)] … the report panel 1804 may display a report of an agent's call activity for a relevant time period, an agent's call handle time, and/or a status of calls of the contact center. Samborskyy et al. [para. 0308-0315; Figs. 33-34]. … a performance visualizer 1650A … is provided for displaying forecast or scheduled contact center metrics against actual contact center metrics. … The monitoring UI may also display different types of KPIs. Samborskyy et al. [para. 0278-0289]).

It would have been obvious to one of ordinary skill in the art of performance evaluations, before the effective filing date of the claimed invention, to modify the performance monitoring and reporting steps of Bhat et al. and Pal et al. combined so that comparing agent performance over the plurality of agent performance metrics comprises comparing agent performance over multiple time intervals, as disclosed by Samborskyy et al., for providing a dashboard user interface for contact center monitoring (Samborskyy et al. [para. 0007]), in a manner that would yield predictable results at the relevant time.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Singh et al. (US 2019/0102719):
The first graphical user interface may be configurable to display a plurality of performance metrics related to the managed network. The embodiment may further involve receiving an indication to display a detailed representation of a particular performance metric of the plurality of performance metrics. The embodiment may further involve transmitting a web-based representation of a second graphical user interface. The second graphical user interface may be configured to display (i) a textual description of the particular performance metric, (ii) the value of the particular performance metric, (iii) an ordered ranking, (iv) a graph-based representation of the particular performance metric as measured over a time period, and (v) a recommendation of operational modifications to improve the particular performance metric.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LETORIA G KNIGHT, whose telephone number is (571) 270-0485. The examiner can normally be reached M-F, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rutao Wu, can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L.G.K/
Examiner, Art Unit 3623

/RUTAO WU/
Supervisory Patent Examiner, Art Unit 3623
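The reply-period rules in the action above reduce to simple calendar-month arithmetic from the mailing date. As a sketch only (not legal advice), and assuming the Feb 12, 2026 final-rejection date from the prosecution timeline is the mailing date, the shortened (three-month) and absolute (six-month) deadlines can be computed like this; the `add_months` helper is ours, not a USPTO tool:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date; USPTO periods run to the same day-of-month."""
    month = d.month - 1 + months
    year = d.year + month // 12
    month = month % 12 + 1
    # Clamp the day for short months (e.g., Jan 31 + 1 month -> Feb 28/29).
    days_in_month = [31, 29 if (year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)) else 28,
                     31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

mailed = date(2026, 2, 12)         # final rejection mailing date (from the timeline)
shortened = add_months(mailed, 3)  # shortened statutory period: THREE MONTHS
statutory = add_months(mailed, 6)  # absolute bar: SIX MONTHS, not extendable
print(shortened)                   # 2026-05-12
print(statutory)                   # 2026-08-12
```

Replying within two months of mailing preserves the advisory-action safe harbor described in the action; the six-month date is the hard statutory limit either way.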

Prosecution Timeline

Dec 28, 2023
Application Filed
Sep 25, 2025
Non-Final Rejection — §101, §103
Oct 27, 2025
Response Filed
Dec 23, 2025
Response Filed
Feb 12, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579488
METHODS AND SYSTEMS FOR OPTIMIZING VALUE IN CERTAIN DOMAINS
2y 5m to grant • Granted Mar 17, 2026
Patent 12536552
HUMANOID SYSTEM FOR AUTOMATED CUSTOMER SUPPORT
2y 5m to grant • Granted Jan 27, 2026
Patent 12499400
Sensor Input and Response Normalization System for Enterprise Protection
2y 5m to grant • Granted Dec 16, 2025
Patent 12380409
METHODS AND SYSTEMS FOR EXPLOITING VALUE IN CERTAIN DOMAINS
2y 5m to grant • Granted Aug 05, 2025
Patent 12373748
SYSTEMS AND METHODS OF ASSIGNING MICROTASKS OF WORKFLOWS TO TELEOPERATORS
2y 5m to grant • Granted Jul 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
27%
Grant Probability
73%
With Interview (+46.5%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 173 resolved cases by this examiner. Grant probability derived from career allow rate.
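The note above says the grant probability is derived from the examiner's career allow rate. A minimal sketch of that arithmetic, assuming the +46.5% interview lift is applied as an additive percentage-point adjustment (an assumption about this report's methodology, not documented behavior):

```python
# Reproduce the dashboard figures from the examiner's career statistics.
granted, resolved = 46, 173  # "46 granted / 173 resolved" shown above
interview_lift = 0.465       # +46.5% interview lift, in percentage points (assumed additive)

base_rate = granted / resolved               # career allow rate
with_interview = base_rate + interview_lift  # projected rate if an interview is held

print(f"{base_rate:.0%}")       # 27%
print(f"{with_interview:.0%}")  # 73%
```

Under this reading, 46/173 ≈ 26.6% rounds to the displayed 27%, and 26.6% + 46.5% ≈ 73% matches the "With Interview" projection.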
