DETAILED ACTION
This communication is a Final Office Action on the merits. Claims 1-6, 10-14, and 16-20 are currently pending and have been addressed below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed on 04/05/2024 and 06/20/2025 (related to the 103 Rejection) have been fully considered and are persuasive. Examiner agrees that the combination of Sarma et al., Izadi, Carreira-Perpinan, and Schwartz et al. fails to teach the recited invention under 35 U.S.C. § 103. The cited art of record, alone or in combination, does not teach or describe every element of the amended claims. Therefore, claims 1-6, 10-14, and 16-20 contain potentially allowable subject matter.
Applicant's arguments filed on 01/12/2026 (related to the 101 Rejection) have been fully considered but they are not persuasive.
Applicant states, on pages 12-17, that the claims are not directed to certain methods of organizing human activity or to mathematical calculations, but to the use of a trained CART machine learning model to determine approval probabilities of a time-off request and to provide that information to an agent and a manager using a schedule request manager microservice and a graphical user interface, as amended.
Applicant further argues that the use of the trained, regressive CART model allows the system to continuously improve over time as it learns from past time-off request patterns, providing an evolving solution to an ongoing problem. This provides a specific technical solution with concrete applications in contact center management. A trained CART model is a technical solution for predictive analytics, and in the present case the trained CART model provides a technical improvement in predicting the approval probability for a time-off request. Moreover, the claims are amended to recite the use of a schedule request manager microservice and its interactions with the graphical user interface of an agent web interface and the trained CART model. Microservices provide a technical solution by breaking large applications into smaller, independent services for better scalability, resilience, and faster development. Therefore, Applicant concludes, the claims integrate the alleged abstract idea into a practical application, and thus the claims are not directed to an abstract idea.
Lastly, Applicant argues that the combination of elements recited in the claims is not rejected over any prior art references, and that the inventive concept recited in the claims, namely using a trained CART model to calculate an approval probability for a time-off request in combination with a schedule request manager microservice, an agent web interface, and a manager web interface, is not well-understood, routine, or conventional activity in the field.
Examiner respectfully disagrees with Applicant. These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity” which include “managing personal behavior.” In this case, managing a workforce is a form of managing personal behavior because it allows the method to determine whether to approve time-off requests for the workforce based on rules (e.g., following rules for approving the time-off request based on staffing levels and skills). Also, the step of “training to calculate an approval probability for a time-off request” is considered a “mathematical calculation.” If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or mathematical calculations, then it falls within the “method of organizing human activity” or “mathematical concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Claim 1 includes additional elements such as: a schedule request manager microservice; a machine learning model; and a graphical user interface. The schedule request manager microservice is merely used to receive the time-off requests coming in from the agents (Paragraph 0022). At Step 2A, Prong 2, this is considered a "field of use" limitation since the microservice is merely used to receive staffing data for an analysis, but the microservice itself is not improved (MPEP 2106.05(h)). At Step 2B, this is considered a conventional computer function of "receiving and transmitting over a network" (MPEP 2106.05(d)).
The machine learning model is a tree-based model (e.g., CART) that is used for: receiving training data; splitting values according to information gain values; and calculating an approval probability of the time-off request (see Figure 3 and Paragraphs 0029-0030). Merely stating that the step is performed by a computer component results in "apply it" on a computer (MPEP 2106.05(f)) being applicable at both Step 2A, Prong 2 and Step 2B. The machine learning model is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element (MPEP 2106.05(f)). In this case, the machine learning model includes specific inputs (e.g., training data comprising net staffing data, skills of an agent, overlapping time-off requests from other agents, time-off requests by the agent in the past, and manager approvals of such time-off requests) and outputs (e.g., an approval probability of the time-off request). However, the claim does not include any specific analysis explaining how the machine learning model learns new rules or updates existing rules in order to improve the decision/logic process (see 2024 AI Guidance, Example 47, Claim 2, which provides no details about how the machine learning operates to derive the threshold other than that it is being used to calculate a probability). Also, the process of splitting the features according to the respective information gain value merely describes a well-known process used for training a CART model (see MPEP 2106.05(d)). Therefore, claim 1 only recites the idea of a solution or outcome since it omits any details as to how the machine learning model solves a technical problem (MPEP 2106.05(a)).
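For context, the information-gain splitting process characterized above as well known can be sketched as follows. This is a minimal, generic illustration of standard CART-style training and is not drawn from the application; the feature name and data values are hypothetical.

```python
import math

def entropy(labels):
    """Shannon entropy of a list of binary approve (1) / reject (0) labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def information_gain(values, labels, threshold):
    """Gain from splitting a numeric feature at `threshold`."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - weighted

def best_split(values, labels):
    """Return the numeric threshold yielding the largest information gain."""
    candidates = sorted(set(values))[:-1]  # splitting above the maximum is useless
    return max(candidates, key=lambda t: information_gain(values, labels, t))

# Hypothetical feature: net staffing percentage on the requested date,
# paired with historical manager approvals (1) / rejections (0).
staffing = [55, 60, 70, 80, 90, 95]
approved = [0, 0, 0, 1, 1, 1]
print(best_split(staffing, approved))  # -> 70 (perfectly separates the labels)
```

Each node of the tree is grown by repeating this threshold search over every feature and recursing on the two resulting partitions, which is the generic procedure the claim's "splitting the features according to the respective information gain values" limitation describes.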
The graphical user interface is merely used to display the approval probability of the time-off request to the first agent and to a manager (Paragraph 0055). Merely stating that the step is performed by a computer component results in "apply it" on a computer (MPEP 2106.05(f)) being applicable at both Step 2A, Prong 2 and Step 2B. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, instructions to display and/or arrange information in a graphical user interface may not be sufficient to show an improvement in computer functionality (MPEP 2106.05(a)).
Further, claim 1 includes the step of “automatically approving the time-off request.” Although the step specifies an action that is executed to automatically approve the time-off request without need for manager review, the specified action is well known in the art and does not solve an existing problem (see MPEP 2106.05(d) and 2106.05(f)). Also, the step is considered a “mere automation of a manual process” (see MPEP 2106.05(a)).
Lastly, the claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claim amounts to significantly more than the abstract idea itself.
Independent claims 10 and 16 recite similar features and therefore are rejected for the same reasons as independent claim 1. Claims 2-6, 11-14, and 17-20 are rejected for having the same deficiencies as those set forth with respect to the independent claims from which they depend, claims 1, 10, and 16.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6, 10-14, and 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.
Independent Claim 1
Step One - First, pursuant to Step 1 in the January 2019 Revised Patent Subject Matter Eligibility Guidance ("2019 PEG"), 84 Fed. Reg. 53, claim 1 is directed to an apparatus, which is a statutory category.
Step 2A, Prong One - Claim 1 recites: A workforce management system to perform operations which comprise: for a period of time, receiving training data comprising net staffing data, skills of an agent, overlapping time-off requests from other agents, time-off requests by the agent in the past, and manager approvals (and rejections) of such time-off requests in the past; storing the training data; training a model by constructing a binary tree using features comprising net staffing data, skills of an agent, overlapping time-off requests from other agents, time-off requests by the agent in the past and using numerical thresholds that yield a largest respective information gain value at each node of the tree; and splitting the features according to the respective information gain values; receiving a time-off request from a first agent, wherein the time-off request comprises an agent ID of the first agent and a first requested date; automatically providing, to the trained model, staffing data on the first requested date, skills of the first agent, pending time-off requests from other agents on the first requested date, and time-off taken by the first agent in the past; automatically calculating an approval probability of the time-off request by automatically calculating a net staffing percentage on the first requested date based on the staffing data on the first requested date, defining a number of time-off requests permitted for the first agent, automatically determining a number of overlapping time-off requests from other agents on the first requested date based on the pending time-off requests from other agents on the first requested date, and automatically determining whether the skills of the first agent overlap with a plurality of skills of the other agents with pending time-off requests; providing the approval probability; displaying the approval probability of the time-off request to the first agent; displaying the approval probability of the time-off request 
to a manager; and if the approval probability exceeds an approval probability threshold, automatically approving the time-off request. These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity” which include “managing personal behavior.” In this case, managing a workforce is a form of managing personal behavior because it allows the method to determine whether to approve time-off requests for the workforce based on rules (e.g., following rules for approving the time-off request based on staffing levels and skills). Also, the step of “training to calculate an approval probability for a time-off request” is considered a “mathematical calculation.” If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or mathematical calculations, then it falls within the “method of organizing human activity” or “mathematical concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
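For reference, the "automatically calculating" sub-steps recited above reduce to simple arithmetic over the staffing and request data. The sketch below is illustrative only; all function names, data structures, and values are hypothetical and are not taken from the claim or the specification.

```python
def net_staffing_percentage(scheduled, required):
    """Net staffing on the requested date, as a percentage of the requirement."""
    return 100.0 * scheduled / required

def overlapping_requests(pending, requested_date):
    """Count pending time-off requests from other agents on the same date."""
    return sum(1 for r in pending if r["date"] == requested_date)

def skills_overlap(agent_skills, other_agents_skills):
    """True if the requesting agent shares any skill with another requester."""
    return any(set(agent_skills) & set(s) for s in other_agents_skills)

# Hypothetical inputs for a single time-off request.
pending = [{"date": "2026-03-02"}, {"date": "2026-03-02"}, {"date": "2026-03-05"}]
print(net_staffing_percentage(scheduled=45, required=50))   # -> 90.0
print(overlapping_requests(pending, "2026-03-02"))          # -> 2
print(skills_overlap(["billing"], [["billing", "sales"]]))  # -> True
```

As the sketch suggests, each recited sub-step is a counting or percentage operation of the kind routinely performed on tabulated scheduling data, which bears on the Prong One characterization above.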
Step 2A, Prong 2 - The judicial exception is not integrated into a practical application. Claim 1 includes the following additional elements: a processor; a computer readable medium; a workforce management database; a schedule request manager microservice; a trained classification and regression tree (CART) machine learning model; and a graphical user interface of an agent web interface.
The processor is merely used to execute instructions (Paragraph 0055). The computer readable medium is merely used to store instructions (Paragraph 0055). The workforce management database is merely used to store training data (Paragraph 0023). The schedule request manager microservice is merely used to receive the time-off requests coming in from the agents (Paragraph 0022). The trained machine learning model is a tree-based model (e.g., CART) that is merely used to: receive training data; split values according to information gain values; and calculate an approval probability of the time-off request (see Figure 3 and Paragraphs 0029-0030). The user interface is merely used to display the approval probability of the time-off request to the first agent and to a manager (Paragraph 0055). Merely stating that the step is performed by a computer component results in "apply it" on a computer (MPEP 2106.05(f)). These elements of "processor," "computer readable medium," "workforce management database," "schedule request manager microservice," "trained CART machine learning model," and "graphical user interface" are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element. Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally "apply" the concept of deciding whether to approve or decline a time-off request based on rules. The specification shows that the processor is merely used to execute instructions (Paragraph 0055). The computer readable medium is merely used to store instructions (Paragraph 0055). The schedule request manager microservice is merely used to receive the time-off requests coming in from the agents (Paragraph 0022). The trained machine learning model is a tree-based model (e.g., CART) that is merely used to: receive training data; split values according to information gain values; and calculate an approval probability of the time-off request (see Figure 3 and Paragraphs 0029-0030). In this case, merely specifying a type of machine learning is not enough to show an improvement in computer functionality (see 2024 AI Guidance, Example 47, Claim 2, which provides no details about how the machine learning operates to derive the threshold other than that it is being used to calculate a probability). Also, the process of "splitting the features according to the respective information gain value" merely describes a well-known process used for training a machine learning model such as a CART model (see MPEP 2106.05(d)). Further, the user interface is merely used to display the approval probability of the time-off request to the first agent and to a manager (Paragraph 0055). In this case, instructions to display and/or arrange information in a graphical user interface may not be sufficient to show an improvement in computer functionality (MPEP 2106.05(a)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.
Independent claim 10 is directed to a method at Step 1, which is a statutory category. Claim 10 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong 2; and Step 2B. The claim is not patent eligible.
Independent claim 16 is directed to an article of manufacture at Step 1, which is a statutory category. Claim 16 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong 2; and Step 2B. The claim is not patent eligible.
Dependent claims 2-6, 11-14, and 17-20 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above - such as: wherein the trained CART machine learning is used to calculate an approval probability of a time-off request of the first agent on the alternate dates; wherein the trained CART machine learning is used to calculate an approval probability of the time-off request from the second agent; wherein calculating an approval probability comprises to: calculate a net staffing percentage on the first requested date based on the staffing data on the first requested date; and determine whether the skills of the first agent overlap with a plurality of skills of the other agents with pending time-off requests. Also, wherein the graphical user interface is further used to: display the approval probability of the time-off request on the alternate dates to the first agent and to the manager; receive a selection of an alternate date from the alternate dates to take time-off; receive an approval of the alternate date the first agent selected to take time-off; request automatic approval of time-off requests having the approval probability of greater than or equal to the threshold probability; receive confirmation of automatic approval; receive a time-off request from a second agent; automatically approve the time-off request; notify the manager regarding the automatic approval. Merely stating that the step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05f) being applicable at both Step 2A, Prong 2 and Step 2B. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. 
In this case, the additional functions of the machine learning model and user interface are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element. Further, instructions to display and/or arrange information in a graphical user interface may not be sufficient to show an improvement in computer functionality (MPEP 2106.05(a)). Lastly, the step of "selection of an alternate date to determine an approval probability" is considered a well-understood, routine, and conventional function since it merely involves "performing repetitive calculations" (MPEP 2106.05(d)). Thus, nothing in the claims adds significantly more to the abstract idea. The claims are ineligible.
Potential Allowable Subject Matter
The closest prior art is Sarma et al. (US 2022/0405712 A1). Sarma et al. discloses a workforce management system comprising: a processor and a non-transitory computer readable medium operably coupled thereto, the non-transitory computer readable medium comprising a plurality of instructions stored in association therewith that are accessible to, and executable by, the processor, to perform operations which comprise (Paragraph 0001, The disclosure relates generally to an improved computer system and, more specifically, to a method, apparatus, computer system, and computer program product for autonomous management of request-for-approvals within an organization; Paragraph 0100, Program code 918 is located in a functional form on computer-readable media 920 that is selectively removable and can be loaded onto or transferred to data processing system 900 for execution by processor unit 904):
for a period of time, receiving training data comprising (Paragraph 0061, In this illustrative example, Request manager 216 creating a training data set from the response. The set of machine learning models 242 is trained from training data set 244. Based on the training data set, Artificial intelligence system 240 builds a set of predictive models for generating a new set of parameters) net staffing data (Paragraph 0051, When human capital operation is the time-off request, the set of parameters can include additional parameters selected from the group consisting of team availability during the time-off period, duration of the time-off, a type of the time-off request, a nature of the time-off request, a projected workload during the time-off request, and combinations thereof), …, time-off requests by the agent in the past, and manager approvals (and rejections) of such time-off requests in the past (Paragraph 0033, For example, in the context of human capital management, the automated approval of requests can include time off request, timesheet submissions, and expense approvals. request management system 202 provides capabilities for autonomous configuring of new rule sets, as well as recommending these new rule sets based on historical data and prior approval patterns of different requests by user 124);
storing the training data in a workforce management database (see Figure 2 and related text in Paragraph 0061, In this illustrative example, Request manager 216 creating a training data set from the response. The set of machine learning models 242 is trained from training data set 244);
with the workforce management database and the training data, training a [decision tree] machine learning model by (Paragraph 0060, A machine learning model can learn based on training data input into the machine learning model. The machine learning model can learn using various types of machine learning algorithms. The machine learning algorithms include at least one of a supervised learning, an unsupervised learning, a feature learning, a sparse dictionary learning, and anomaly detection, association rules, or other types of learning algorithms. Examples of machine learning models include an artificial neural network, a decision tree, a support vector machine, a Bayesian network, a genetic algorithm, and other types of models. These machine learning models can be trained using data and process additional data to provide a desired output; Paragraph 0061, In this illustrative example, Request manager 216 creating a training data set from the response. The set of machine learning models 242 is trained from training data set 244. Based on the training data set, Artificial intelligence system 240 builds a set of predictive models for generating a new set of parameters):
constructing a binary tree (Paragraph 0060, decision tree) using features comprising net staffing data (Paragraph 0051, When human capital operation is the time-off request, the set of parameters can include additional parameters selected from the group consisting of team availability during the time-off period, duration of the time-off, a type of the time-off request, a nature of the time-off request, a projected workload during the time-off request, and combinations thereof), …, time-off requests by the agent in the past (Paragraph 0033, For example, in the context of human capital management, the automated approval of requests can include time off request, timesheet submissions, and expense approvals. request management system 202 provides capabilities for autonomous configuring of new rule sets, as well as recommending these new rule sets based on historical data and prior approval patterns of different requests by user 124) and using numerical thresholds that yield a largest respective information gain value at each node of the tree (Paragraph 0059, Machine learning is used to train the artificial intelligence system. Machine learning involves inputting data to the process and allowing the process to adjust and improve the function of the artificial intelligence system; Paragraph 0060, a decision tree; Paragraph 0063, Based on historical data and subsequent analysis, there are 2 ways in which a new rule set can be defined. For example, once trained, machine learning models 242 enables request management system 202 to generate a new set of rules which will build in the required tolerances for the system to make appropriate decisions and take corresponding actions); …
receiving, by a schedule request manager microservice via a graphical user interface of an agent web interface, a time-off request from a first agent, wherein the time-off request comprises an agent ID of the first agent and a first requested date (Figure 2, item 202, Request Management System; Paragraph 0040, User 204 is a person that can interact with graphical user interface 227 through user input 218 generated by input system 224 for computer system 214; Paragraph 0042, In this illustrative example, request manager 216 in computer system 214 is configured to receive a request-for-approval 226 submitted by an employee 228 of the organization 230; Paragraph 0051, When human capital operation is the time-off request, the set of parameters can include additional parameters selected from the group consisting of team availability during the time-off period);
automatically providing, to the trained [decision tree] machine learning model (Paragraph 0060, a decision tree; Paragraph 0061, In this illustrative example, Request manager 216 creating a training data set from the response. The set of machine learning models 242 is trained from training data set 244. Based on the training data set, Artificial intelligence system 240 builds a set of predictive models for generating a new set of parameters), staffing data on the first requested date (Paragraph 0051, When human capital operation is the time-off request, the set of parameters can include additional parameters selected from the group consisting of team availability during the time-off period, duration of the time-off, a type of the time-off request, a nature of the time-off request, a projected workload during the time-off request, and combinations thereof), …, and time-off taken by the first agent in the past (Paragraph 0033, For example, in the context of human capital management, the automated approval of requests can include time off request, timesheet submissions, and expense approvals. request management system 202 provides capabilities for autonomous configuring of new rule sets, as well as recommending these new rule sets based on historical data and prior approval patterns of different requests by user 124);
automatically calculating, by a regression of the trained [decision tree] machine learning model, an approval … of the time-off request (Paragraph 0065, In one illustrative example, Request manager 216 receives a response from the employee-manager. When the response is an approval of the new rule, the new rule can be stored as one of set of policy 232. Request manager 216 can autonomously manage subsequent request-for-approvals according to the new rule) by automatically calculating a net staffing percentage on the first requested date based on the staffing data on the first requested date (Paragraph 0051, When human capital operation is the time-off request, the set of parameters can include additional parameters selected from the group consisting of team availability during the time-off period, duration of the time-off, a type of the time-off request, a nature of the time-off request, a projected workload during the time-off request, and combinations thereof), defining a number of time-off requests permitted for the first agent (Paragraph 0019, Similarly, in the case of a time-off request, the approver may need to reference other factors, such as team coverage during the requested time-off duration and the employee's available/accrued time-off, before approving the request), …;
providing, by the trained [decision tree] machine learning model to the schedule request manager microservice, the approval … (Figure 2, item 202, Request Management System; Paragraph 0059, Machine learning is used to train the artificial intelligence system. Machine learning involves inputting data to the process and allowing the process to adjust and improve the function of the artificial intelligence system; Paragraph 0060, a decision tree; Paragraph 0063, Based on historical data and subsequent analysis, there are 2 ways in which a new rule set can be defined. For example, once trained, machine learning models 242 enables request management system 202 to generate a new set of rules which will build in the required tolerances for the system to make appropriate decisions and take corresponding actions; Paragraph 0065, In one illustrative example, Request manager 216 receives a response from the employee-manager. When the response is an approval of the new rule, the new rule can be stored as one of set of policy 232. Request manager 216 can autonomously manage subsequent request-for-approvals according to the new rule);
displaying, via the graphical user interface of the agent web interface, the approval … of the time-off request to the first agent (Paragraph 0039, Display system 222 is a physical hardware system and includes one or more display devices on which graphical user interface 227 can be displayed; Paragraph 0056, Responsive to performing the human capital operation 234, request manager 216 transmits a confirmation 246 of the human capital operation 234 to an employee-manager 248);
displaying, via the graphical user interface of the agent web interface, the approval … of the time-off request to a manager (Paragraph 0039, Display system 222 is a physical hardware system and includes one or more display devices on which graphical user interface 227 can be displayed; Paragraph 0056, Responsive to performing the human capital operation 234, request manager 216 transmits a confirmation 246 of the human capital operation 234 to an employee-manager 248);
and if the approval [exceeds approval conditions], automatically approving the time-off request (Paragraph 0054, Policy 232 can include multiple rules for determining the outcome of a request for approval. When the policy contains multiple rules, the request can be approved or denied based on the consistency of the outcomes of those rules. For example, if the conditions are met for one or more rules where the outcome is approved, then the request is an approval of the request).
Although Sarma et al. discloses training a decision tree machine learning model for automatically approving a time-off request based on multiple features such as net staffing data, time-off requests by the agent in the past, and manager approvals (and rejections) of such time-off requests in the past (Paragraph 0053, machine learning models 242 enables request management system 202 to generate a new set of rules which will build in the required tolerances for the system to make appropriate decisions and take corresponding actions), Sarma et al. does not specifically disclose the remaining claimed features used for making a decision related to time-off approval.
Izadi (US 2020/0380451 A1). Izadi discloses for a period of time, receiving … data comprising net staffing data, skills of an agent, overlapping time-off requests from other agents, …; constructing … using features comprising net staffing data, skills of an agent, overlapping time-off requests from other agents, …; automatically providing, to the … model, staffing data on the first requested date, skills of the first agent, pending time-off requests from other agents on the first requested date, …; automatically calculating, by a … model, an approval … of the time-off request by automatically calculating a net staffing percentage on the first requested date based on the staffing data on the first requested date, defining a number of time-off requests permitted for the first agent, automatically determining a number of overlapping time-off requests from other agents on the first requested date based on the pending time-off requests from other agents on the first requested date, and automatically determining whether the skills of the first agent overlap with a plurality of skills of the other agents with pending time-off requests (Paragraph 0059, The scheduling interface 44 may be used by schedulers to create, maintain, and update the schedules. The leave management interface 46 may be used to approve or reject employees' leave requests. The leave management interface 46 may be used by any person responsible for leave management such as a dedicated HR employee. The employee interface 47 may be used by individual employees to submit their leave or schedule requests; Paragraph 0077, The next cycle schedule may include the number of available employees and their skill levels and other employee information.
If the shift parameters are not acceptable, then in step 92, the user may modify the next cycle schedule by moving employee tags 54, accepting or canceling leave or vacation requests to accommodate employee shortages or overages; Paragraph 0084, Although this employee typically would not be granted leave for another month due to the organizational policies, it may benefit the organization if the leave was approved if there are more available employees than needed (step 132); Paragraph 0091, In the leave entitlement data 186 column, cells may become color-coded depending on such factors as if the requested leave duration is more than entitled leave duration; Paragraph 0092, For instance, two employees may have requested the same duration of leave starting at the same date and time and have the same range of the leave from shift 1 to shift 4. But the shift pattern of the first employee may be 1, 3 and shift pattern of the second employee may be 2, 4).
Although the combination of Sarma et al. and Izadi discloses training a decision tree machine learning model for automatically approving a time-off request based on multiple features such as net staffing data, skills of an agent, overlapping time-off requests from other agents, time-off requests by the agent in the past, and manager approvals (and rejections) of such time-off requests in the past (see Sarma et al., Paragraph 0053, machine learning models 242 enables request management system 202 to generate a new set of rules which will build in the required tolerances for the system to make appropriate decisions and take corresponding actions), the combination of Sarma et al. and Izadi does not specifically disclose that the tree-based machine learning model is a classification and regression tree (CART) or that the training process includes the step of splitting the features according to the respective information gain values.
Carreira-Perpinan (US 2022/0318641 A1) discloses training a classification and regression tree (CART) machine learning model by constructing a binary tree using features comprising … and using numerical thresholds that yield a largest respective information gain value at each node of the tree; and splitting the features according to the respective information gain values; …; automatically providing, to the trained CART machine learning model, … (Paragraph 0018, The instant invention provides novel methods for learning and growing better decision trees using a general form of a tree optimizing algorithm to improve prediction accuracy, interpretability, tree size, speed of learning the tree, and speed and accuracy of growing a tree from scratch, among other improvements. In some embodiments of the invention, methods assume a tree structure given by an initial decision tree (grown by CART or another conventional method, and/or using random parameter values), and through use of a tree alternating optimization (TAO) algorithm, return a tree that is smaller or equal in size than the initial tree that reduces the prediction error of the tree. These methods utilizing the TAO algorithm directly optimize the quantity of interest (i.e., the prediction accuracy). The invention may provide other optimizations and benefits as well; Paragraph 0019, Generally, the method comprises inputting an initial decision tree and a training set of instances, processing the initial decision tree by partitioning nodes into sets of non-descendant nodes, processing the nodes in each set by updating the nodes' parameters at each iteration so that the objective function decreases monotonically, and pruning the tree, which produces a final tree of a size no larger than that of the initial tree. 
TAO applies to many different types of loss functions, regularization terms and constraints, and types of models at both the decision nodes and the leaves, and makes it possible to learn better decision trees than with traditional algorithms, and to learn trees for problems where traditional algorithms do not apply; Paragraph 0064, And not only does TAO learn much better trees (in accuracy and size), it also frees the user from many ad-hoc choices that CART-type algorithms require: what purity criterion to use (Gini index, information gain, misclassification error, F-ratio, various hypothesis tests . . . ), when to stop growing the tree, the minimum number of instances a leaf must have, etc.); …
automatically calculating, by a regression of the trained CART machine learning model, an … probability of the … (Paragraph 0011, One important point to note is that, while hard decision trees do not use probability at the decision nodes, they can perfectly output probability distributions at the leaves. The difference between soft and hard trees is not in the ability to produce probability outputs—both are able to do so—but in whether the decision nodes make stochastic or deterministic decisions, respectively. Hence, for a given input instance, a soft tree computes output probabilities at each leaf while a hard tree computes them at only one leaf; Paragraph 0089, In traditional decision trees, the model choices are very limited. The leaf predictor is typically a single class value (for classification) or a single scalar or vector value (for regression). The decision function is typically univariate (or axis-aligned), testing whether a specific input feature exceeds a threshold, e.g. “go right if x.sub.κ.sub.i+b.sub.i≥0” (so the parameters are θ.sub.i={κ.sub.i, b.sub.i}, i.e., the feature to test and the threshold value). Multivariate (or oblique) decision functions of the form “go right if w.sub.i.sup.Tx+b.sub.i≥0” (with parameters θ.sub.i={w.sub.i, b.sub.i}) have also been used, but their performance has generally not been good enough for widespread practical use); …
and if the … probability exceeds an … probability threshold, automatically [making a decision] (Paragraph 0011, One important point to note is that, while hard decision trees do not use probability at the decision nodes, they can perfectly output probability distributions at the leaves. The difference between soft and hard trees is not in the ability to produce probability outputs—both are able to do so—but in whether the decision nodes make stochastic or deterministic decisions, respectively. Hence, for a given input instance, a soft tree computes output probabilities at each leaf while a hard tree computes them at only one leaf; Paragraph 0089, In traditional decision trees, the model choices are very limited. The leaf predictor is typically a single class value (for classification) or a single scalar or vector value (for regression). The decision function is typically univariate (or axis-aligned), testing whether a specific input feature exceeds a threshold, e.g. “go right if x.sub.κ.sub.i+b.sub.i≥0” (so the parameters are θ.sub.i={κ.sub.i, b.sub.i}, i.e., the feature to test and the threshold value). Multivariate (or oblique) decision functions of the form “go right if w.sub.i.sup.Tx+b.sub.i≥0” (with parameters θ.sub.i={w.sub.i, b.sub.i}) have also been used, but their performance has generally not been good enough for widespread practical use; Examiner notes that the decision is based on the defined/learned output probability value).
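By way of illustration only (hypothetical code written by the editor, not drawn from the record or from any cited reference), the information-gain splitting attributed to CART-style training above, in which each node selects the numerical threshold yielding the largest information gain, can be sketched as:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(rows, labels, feature_idx):
    """Return the numerical threshold on one feature that yields the
    largest information gain, as in CART-style tree growing."""
    base = entropy(labels)
    best_gain, best_thresh = 0.0, None
    values = sorted({r[feature_idx] for r in rows})
    for lo, hi in zip(values, values[1:]):
        t = (lo + hi) / 2  # candidate threshold between adjacent feature values
        left = [y for r, y in zip(rows, labels) if r[feature_idx] <= t]
        right = [y for r, y in zip(rows, labels) if r[feature_idx] > t]
        gain = (base
                - (len(left) / len(labels)) * entropy(left)
                - (len(right) / len(labels)) * entropy(right))
        if gain > best_gain:
            best_gain, best_thresh = gain, t
    return best_thresh, best_gain
```

In a full CART build, this selection would be repeated recursively at each node, and, per the mapped claim language, the resulting leaf would supply the probability that is compared against the approval threshold.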
Although Sarma et al. discloses a graphical user interface for displaying approval information (Paragraphs 0039 & 0056), the combination of Sarma et al., Izadi, and Carreira-Perpinan does not specifically disclose the graphical user interface is used for displaying the approval probability of the time-off request.
Schwartz et al. (US 9,679,265 B1) discloses a workforce management system comprising (Column 1, lines 23-29, To schedule large hourly workforces (for example call center agents), most companies utilize Workforce Management (WFM) software-based products that provide forecasting and scheduling capability):
a processor and a non-transitory computer readable medium operably coupled thereto, the non-transitory computer readable medium comprising a plurality of instructions stored in association therewith that are accessible to, and executable by, the processor, to perform operations which comprise (Column 19, lines 50-63, When used in the claims, “computer readable medium” should be understood to refer to any object, substance, or combination of objects or substances, capable of storing data or instructions in a form in which they can be retrieved and/or processed by a device. computer memory such as hard discs, read only memory, random access memory, solid state memory elements, optical discs and registers is an example of a “computer readable medium”):
for a period of time, receiving … data comprising net staffing data (Column 3, lines 54-57, There could also be a real time input section, which could be provided by an existing workforce management system and could provide details on staff supply and demand for a particular time window), skills of an agent (Column 14, lines 22-33, Taking time off during slots when there was likely to be a lower demand for the worker's skills), … time-off requests from other agents (see Figure 11 and related text in Column 9, lines 12-21, He or she could then request that those hours be added to his or her schedule and, after a final check to confirm that the request was consistent with the employer's staffing needs (e.g., to address the possibility that other workers had made changes to their schedules since the calendar interface was generated which meant that additional workers were no longer needed during the requested slots)), time-off requests by the agent in the past, and manager approvals (and rejections) of such time-off requests in the past (Column 3, lines 16-19, Staff Database which contains staff specific information such as staff availability preferences, performance data and call out history (i.e., the acceptance rate of offers previously provided); Column 13, lines 1-11, Projected overstaffing; Column 20, lines 36-40, Staffing level projection; Examiner notes that Schwartz et al. is keeping track of whether the time off was approved or rejected (e.g. either by the manager or the system) based on overstaffing or understaffing projections);
storing the … data in a workforce management database; with the workforce management database and the … data, … a … model … (Column 3, lines 16-19, Staff Database which contains staff specific information such as staff availability preferences, performance data and call out history (i.e., the acceptance rate of offers previously provided); Column 20, lines 36-40, Staffing level projection);
receiving, by a schedule request manager microservice via a graphical user interface of an agent web interface, a time-off request from a first agent, wherein the time-off request comprises an agent ID of the first agent and a first requested date (Column 2, lines 38-39, FIG. 11 depicts an interface which could be used to define how requests for time off or extra hours could be treated in a system which supports worker initiate schedule changes; Column 9, lines 7-21, In the non-incentivized workflow 1201 when the worker executed the self-scheduling application, he or she could be provided with a calendar interface which would show periods when additional hours were available to be added to the worker's schedule and he or she could then request that those hours be added to his or her schedule and, after a final check to confirm that the request was consistent with the employer's staffing needs (e.g., to address the possibility that other workers had made changes to their schedules since the calendar interface was generated which meant that additional workers were no longer needed during the requested slots);
automatically providing, to the … model, staffing data on the first requested date (Column 3, lines 54-57, There could also be a real time input section, which could be provided by an existing workforce management system and could provide details on staff supply and demand for a particular time window), skills of the first agent (Column 14, lines 22-33, Taking time off during slots when there was likely to be a lower demand for the worker's skills), … time-off requests from other agents on the first requested date (see Figure 11 and related text in Column 9, lines 12-21, He or she could then request that those hours be added to his or her schedule and, after a final check to confirm that the request was consistent with the employer's staffing needs (e.g., to address the possibility that other workers had made changes to their schedules since the calendar interface was generated which meant that additional workers were no longer needed during the requested slots)), and time-off taken by the first agent in the past (Column 3, lines 16-19, Staff Database which contains staff specific information such as staff availability preferences, performance data and call out history (i.e., the acceptance rate of offers previously provided); Column 13, lines 1-11, Projected overstaffing; Column 20, lines 36-40, Staffing level projection; Examiner notes that Schwartz et al. is keeping track of whether the time off was approved or rejected (e.g. either by the manager or the system) based on overstaffing or understaffing projections);
automatically calculating, by a … model, an approval probability of the time-off request by automatically calculating a net staffing percentage on the first requested date based on the staffing data on the first requested date, …, automatically determining a number of … time-off requests from other agents on the first requested date based on the … time-off requests from other agents on the first requested date, and automatically determining whether the skills of the first agent overlap with a plurality of skills of the other agents with … time-off requests (Column 9, lines 65-67 & Column 10, lines 1-4, For example, in a case where worker initiated requests would be treated in different ways depending on the magnitude a staffing variance, a worker could be allowed to redeem points in order to have a variance treated as being greater than is actually the case so as to increase the likelihood that his or her request will be accepted (either automatically or manually); Column 10, lines 33-49, For example, in a case where there could be changes in slot availability between the time an interface is presented to a worker and the time a worker makes a request for a schedule change (e.g., where a self-scheduling interface is updated on a periodic basis), it is possible that slots could be categorized according to the likelihood that staffing gaps associated with those slots would be resolved between updates. 
In such a case, if a worker requested a schedule change involving a slot associated with a staffing gap with a likelihood of being resolved between updates which was greater than a threshold level, then before that change would be made, a further check (which could be automated or manual) could be made to ensure that the staffing gap had not been resolved, and the request could be denied (and the worker could be informed of such denial) if the staffing gap no longer existed at the time the request for a change was made; Column 17, lines 31-36, if the request was made for a slot which was overstaffed by 5% or more, would be referred to a manager for approval if it was made for a slot which was overstaffed by less than 5% but more than 0%, and would be automatically denied if the request was made for a slot had 0% or less overstaffing);
displaying, via the graphical user interface of the agent web interface, the approval [likelihood] of the time-off request to the first agent (Column 6, lines 3-12, FIG. 8, which shows an interface which could be used to support worker initiated schedule changes. In that figure, a calendar 801 is displayed, indicating time slots during which a worker is scheduled to work and which could not be removed from his or her schedule, which time slots during which the worker is scheduled to work which could potentially be removed from his or her schedule, and which slots during which the worker was not currently scheduled to work which could potentially be added to the worker's schedule; Column 10, lines 33-49, For example, in a case where there could be changes in slot availability between the time an interface is presented to a worker and the time a worker makes a request for a schedule change (e.g., where a self-scheduling interface is updated on a periodic basis), it is possible that slots could be categorized according to the likelihood that staffing gaps associated with those slots would be resolved between updates. In such a case, if a worker requested a schedule change involving a slot associated with a staffing gap with a likelihood of being resolved between updates which was greater than a threshold level, then before that change would be made, a further check (which could be automated or manual) could be made to ensure that the staffing gap had not been resolved, and the request could be denied (and the worker could be informed of such denial) if the staffing gap no longer existed at the time the request for a change was made; Column 17, lines 31-36, if the request was made for a slot which was overstaffed by 5% or more, would be referred to a manager for approval if it was made for a slot which was overstaffed by less than 5% but more than 0%, and would be automatically denied if the request was made for a slot had 0% or less overstaffing);
displaying, via the graphical user interface of the agent web interface, the approval [likelihood] of the time-off request to a manager (Column 6, lines 3-12, FIG. 8, which shows an interface which could be used to support worker initiated schedule changes. In that figure, a calendar 801 is displayed, indicating time slots during which a worker is scheduled to work and which could not be removed from his or her schedule, which time slots during which the worker is scheduled to work which could potentially be removed from his or her schedule, and which slots during which the worker was not currently scheduled to work which could potentially be added to the worker's schedule; Column 10, lines 33-49, For example, in a case where there could be changes in slot availability between the time an interface is presented to a worker and the time a worker makes a request for a schedule change (e.g., where a self-scheduling interface is updated on a periodic basis), it is possible that slots could be categorized according to the likelihood that staffing gaps associated with those slots would be resolved between updates. In such a case, if a worker requested a schedule change involving a slot associated with a staffing gap with a likelihood of being resolved between updates which was greater than a threshold level, then before that change would be made, a further check (which could be automated or manual) could be made to ensure that the staffing gap had not been resolved, and the request could be denied (and the worker could be informed of such denial) if the staffing gap no longer existed at the time the request for a change was made; Column 17, lines 31-36, if the request was made for a slot which was overstaffed by 5% or more, would be referred to a manager for approval if it was made for a slot which was overstaffed by less than 5% but more than 0%, and would be automatically denied if the request was made for a slot had 0% or less overstaffing);
and if the approval [likelihood] exceeds an approval [likelihood] threshold, automatically approving the time-off request (Column 2, lines 4-10, The disclosed technology can also be used to facilitate the submission and processing of requests for schedule changes, such as through a computer configured to generate a scheduling interface with information on potential changes, and to automatically approve or deny some or all of the changes requested through such an interface; Column 9, lines 65-67 & Column 10, lines 1-4, For example, in a case where worker initiated requests would be treated in different ways depending on the magnitude a staffing variance, a worker could be allowed to redeem points in order to have a variance treated as being greater than is actually the case so as to increase the likelihood that his or her request will be accepted (either automatically or manually); Column 17, lines 21-25, For example, in some implementations, there could rules setting multiple availability thresholds, with changes in slots with staffing levels meeting or exceeding the most stringent threshold being accepted automatically).
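By way of illustration only (hypothetical code written by the editor, not part of the record), the tiered handling quoted from Schwartz et al. at Column 17, lines 21-36 can be summarized as follows; the automatic acceptance at the first tier is implied by Column 17, lines 21-25 (thresholds meeting the most stringent level "being accepted automatically"):

```python
def route_time_off_request(overstaffing_pct: float) -> str:
    """Route a time-off request by the projected overstaffing (percent)
    for the requested slot, per the tiers described in Schwartz et al."""
    if overstaffing_pct >= 5.0:
        return "auto-approve"    # 5% or more overstaffed: accept automatically
    if overstaffing_pct > 0.0:
        return "manager-review"  # between 0% and 5%: refer to a manager
    return "auto-deny"           # 0% or less overstaffing: deny automatically
```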
However, the cited art, alone or in any combination, fails to teach or suggest at least: a workforce management system comprising: a processor and a non-transitory computer readable medium operably coupled thereto, the non-transitory computer readable medium comprising a plurality of instructions stored in association therewith that are accessible to, and executable by, the processor, to perform operations which comprise: for a period of time, receiving training data comprising net staffing data, skills of an agent, overlapping time-off requests from other agents, time-off requests by the agent in the past, and manager approvals (and rejections) of such time-off requests in the past; storing the training data in a workforce management database; with the workforce management database and the training data, training a classification and regression tree (CART) machine learning model by: constructing a binary tree using features comprising net staffing data, skills of an agent, overlapping time-off requests from other agents, time-off requests by the agent in the past and using numerical thresholds that yield a largest respective information gain value at each node of the tree; and splitting the features according to the respective information gain values; receiving, by a schedule request manager microservice via a graphical user interface of an agent web interface, a time-off request from a first agent, wherein the time-off request comprises an agent ID of the first agent and a first requested date; automatically providing, to the trained CART machine learning model, staffing data on the first requested date, skills of the first agent, pending time-off requests from other agents on the first requested date, and time-off taken by the first agent in the past; automatically calculating, by a regression of the trained CART machine learning model, an approval probability of the time-off request by automatically calculating a net staffing percentage on the first requested 
date based on the staffing data on the first requested date, defining a number of time-off requests permitted for the first agent, automatically determining a number of overlapping time-off requests from other agents on the first requested date based on the pending time-off requests from other agents on the first requested date, and automatically determining whether the skills of the first agent overlap with a plurality of skills of the other agents with pending time-off requests; providing, by the trained CART machine learning model to the schedule request manager microservice, the approval probability; displaying, via the graphical user interface of the agent web interface, the approval probability of the time-off request to the first agent; displaying, via a graphical user interface of a manager web interface, the approval probability of the time-off request to a manager; and if the approval probability exceeds an approval probability threshold, automatically approving the time-off request.
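By way of illustration only (hypothetical code written by the editor; every identifier is invented and the trained CART model itself is not shown), the claimed derivation of the model inputs, namely the net staffing percentage, the count of overlapping pending requests, and whether the requester's skills overlap those of other pending requesters, might be assembled as:

```python
from dataclasses import dataclass

@dataclass
class TimeOffRequest:
    agent_id: str
    requested_date: str

def build_features(req, staffing, pending, skills):
    """Assemble model inputs for one time-off request.
    staffing: date -> (required headcount, available headcount)
    pending:  list of other agents' pending TimeOffRequest objects
    skills:   agent_id -> set of skill names"""
    required, available = staffing[req.requested_date]
    net_staffing_pct = 100.0 * (available - required) / required
    overlapping = [p for p in pending
                   if p.requested_date == req.requested_date
                   and p.agent_id != req.agent_id]
    skill_overlap = any(skills[req.agent_id] & skills[p.agent_id]
                        for p in overlapping)
    return {"net_staffing_pct": net_staffing_pct,
            "n_overlapping_requests": len(overlapping),
            "skill_overlap": skill_overlap}
```

Under the claim language, such a feature vector would then be provided to the trained CART machine learning model, whose regression output (the approval probability) would be displayed and compared against the approval probability threshold.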
Nor does the remaining prior art of record remedy the deficiencies found in the cited prior art. Furthermore, neither the prior art, the nature of the problem, nor knowledge of a person having ordinary skill in the art provides for any predictable or reasonable rationale to combine prior art teachings.
Claims 10 and 16 recite similar limitations and therefore have potential allowable subject matter for the same reasons as claim 1. Claims 2-6, 11-14, and 17-20 have potential allowable subject matter because of their dependency from at least one of independent claims 1, 10, and 16.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Lehr (US 2021/0110293 A1) – discloses a machine learning logic may be used for training received data such as historical data from past process workflow execution, and different rules may be identified. The rules may be related to observations of scenarios of process steps complying with certain conditions, or corresponding to particular input values or related to process specifics (Paragraph 0027).
Krug et al. (US 2022/0188943 A1) – discloses neural networks can be used to provide prediction services in relation to approvals. Based on machine learning logic approvals can be predicted in a supervised scenario, using past time recordings and approvals as inputs along with non-personal information, such as, work schedules, seasonal business fluctuations, type of work or project performed, etc. In some instances, approval data can be classified using supervised learning techniques (e.g., logistic regression) to predict whether a given set of working hours will be approved or not. The confidence level of the classifier technique can be used to initiate a manual approval of entered hours whenever the confidence levels associated with the prediction are below a certain threshold value. For example, even if the classifier has a confidence level of 0.7 that the hours will be automatically approved, customers may choose to manually approve hours in this scenario (see at least Paragraphs 0132-0133).
Kalusivalingam (Kalusivalingam, A.K., Sharma, A., Patel, N. and Singh, V., 2020. Optimizing Workforce Planning with AI: Leveraging Machine Learning Algorithms and Predictive Analytics for Enhanced Decision-Making. International Journal of AI and ML, 1(3)) - discloses predictive analytics, powered by machine learning, plays a crucial role in forecasting workforce needs. By analyzing existing data, predictive models can anticipate future workforce demand and supply, taking into account variables such as seasonal trends, industry shifts, and organizational growth trajectories. This capability allows organizations to preemptively address gaps or surpluses in talent, ensuring that the right number of employees with the appropriate skills are in place to meet business objectives. Furthermore, predictive analytics can assist in identifying potential talent risks, such as high turnover rates or skill shortages, enabling proactive interventions. In addition to forecasting, machine learning can enhance workforce planning through real-time decision support. For instance, advanced algorithms can optimize shift scheduling, taking into account employee preferences, availability, and regulatory requirements (see at least Page 18).
Jeong Ha et al. (WO 2021/094845 A1) – discloses mistakes can be made by the supervisor in either approving vacation times that causes the delivery team to miss needed labor or denying vacation times when resources would otherwise be sufficient to cover the vacationing delivery worker (see at least Paragraph 0004).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARJORIE PUJOLS-CRUZ whose telephone number is (571)272-4668. The examiner can normally be reached Mon-Thur 7:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia H Munson can be reached at (571)270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.P./
Examiner, Art Unit 3624
/PATRICIA H MUNSON/Supervisory Patent Examiner, Art Unit 3624