Prosecution Insights
Last updated: April 19, 2026
Application No. 18/158,669

SYSTEMS AND METHODS FOR SCHEDULE EXPERIMENTATION

Non-Final OA: §101, §103
Filed: Jan 24, 2023
Examiner: GUNN, JEREMY L
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: UKG Inc.
OA Round: 3 (Non-Final)

Grant Probability: 29% (At Risk)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 29% (43 granted / 149 resolved; -23.1% vs TC avg)
Interview Lift: +45.0% (across resolved cases with interview)
Avg Prosecution: 3y 1m (37 applications currently pending)
Total Applications: 186 (across all art units)

Statute-Specific Performance

§101: 44.0% (+4.0% vs TC avg)
§103: 37.3% (-2.7% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 149 resolved cases
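The figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the function names are ours, not from any PTO data source; the inputs are the counts shown above, and the +45.0% lift is consistent with the 74% with-interview figure against the 29% baseline):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with_interview: float, base_rate: float) -> float:
    """Percentage-point lift in grant probability when an interview is held."""
    return rate_with_interview - base_rate

career = allow_rate(43, 149)        # 28.86%, displayed rounded as 29%
lift = interview_lift(74.0, 29.0)   # 74% with interview vs 29% baseline
print(f"{career:.1f}% allow rate, +{lift:.1f}% interview lift")
# prints "28.9% allow rate, +45.0% interview lift"
```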

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1, 3-12, and 14-19 have been reviewed and are under consideration by this Office action.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/01/2025 has been entered.

Notice to Applicant

The following is a Non-Final Office action. Applicant amended claims and previously cancelled claims 2, 13, and 20. Claims 1, 3-12, and 14-19 are pending in this application and have been rejected below.

Response to Amendment

Applicant's amendments are received and acknowledged.

Response to Arguments - 35 USC § 101

Applicant's arguments with respect to the 35 USC 101 rejections have been fully considered, but they are not persuasive. Applicant contends that the claims recite elements not capable of being performed in the human mind. Applicant points to Alice Corp…, further asserting that generating… using a neural network, executing simulations, improving metrics, agglomerate networks, etc. are not mental processes. Examiner respectfully disagrees and notes that many additional elements were discussed. The generation of a plurality of schedules, simulating data, and improving metrics are all concepts capable of being performed in the human mind (i.e., via pen and paper; Examiner notes the circuits of subsequent claims executing simulations would be an additional element). Further, agglomerate networks, AI models, etc. are all additional elements (recited at a high level of generality).
The identified abstract idea is applied to a general purpose computing device (see MPEP 2106.05(f)) and/or amounts to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).

Applicant contends that the claims are not directed towards certain methods of organizing human activity and points to MPEP 2106.04(a)(2). Examiner respectfully disagrees. The claims are directed to "Certain methods of organizing human activity" — commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations) and/or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), as the claims are directed towards generating forecasts for employees (see Specification, [04, 05]).

Applicant contends at Step 2A, Prong Two that the claims recite neural networks, simulations, training an AI model, and agglomerate networks, which reflect a technological improvement and a practical application. Examiner respectfully disagrees. The additional elements are recited at a high level of generality, as discussed above. The additional elements do not recite an improvement to the technology/technological field, nor do they integrate the abstract idea into a practical application. For example, the use of AI to evaluate the effectiveness of experimental schedules recites the abstract idea of evaluating effectiveness while reciting the additional element of an AI model at a high level of generality.

Applicant further points to Desjardins…, asserting that the referenced elements, similar to Desjardins, reflect specific improvements. Examiner respectfully disagrees.
The claims do not match the fact pattern of Desjardins, as the additional elements are recited at a high level of generality, while the cited case includes such elements as training and retraining a machine learning model to "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task."

Applicant contends at Step 2B that the claims recite elements that amount to significantly more than the judicial exception. Applicant further points to the four points previously discussed. Examiner respectfully disagrees. The additional elements are analyzed both individually and in combination and are determined to be no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)), and/or amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)), as the additional elements such as neural networks, AI models, and training AI models are recited at a high level of generality. Further details can be seen below in the full §101 rejection. The §101 rejection is updated and maintained below.

Response to Arguments - 35 USC § 103

Applicant's arguments with respect to the 35 USC 103 rejections have been fully considered, but they are not persuasive. Applicant contends that the amended claims recite limitations not taught by the cited prior art, as the cited prior art does not teach executing a simulation…, evaluating based on the simulation, or improving… metrics. Examiner respectfully disagrees. While Aslam does not teach simulation, Johnson is relied upon to explicitly teach the use of simulations, as seen in Johnson, [47, 83, 110, 157] (full citations below). Further, Johnson is relied upon to teach evaluating the schedules in the same paragraphs cited.
Regarding the training aspect, Aslam does teach training a machine learning algorithm. The combination of Aslam/Johnson below is relied upon to teach the entirety of the limitation. Lastly, Aslam does teach improving metrics in at least Aslam, [05, 131, 156, 195], as Aslam teaches improving the compliance metrics by repeating iterations until compliance is reached (full citations below). The §103 rejection is updated and maintained below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-12, and 14-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step One - First, pursuant to step 1 in the January 2019 Guidance, 84 Fed. Reg. 53, claims 1, 3-12, and 14-19 are directed to statutory categories.

Step 2A, Prong One - The claims are found to recite limitations that set forth the abstract idea(s); namely, independent claims 1, 12, and 19 recite a series of steps for generating and evaluating experimental schedules.
Regarding Claim 1 (additional elements bolded): A method comprising:

receiving schedule data corresponding to a schedule;
receiving at least one schedule modification parameter, wherein the at least one schedule modification parameter includes a parameter indicating a user's level of risk tolerance for an unacceptable schedule;
determining, based at least in part on the schedule data, a schedule feature of the schedule;
identifying a set of incentives for one or more employees for the schedule feature; and
generating a plurality of experimental schedules using a neural network based on the schedule modification parameter, wherein each of the plurality of experimental schedules is configured to test effectiveness of different incentives of the set of incentives on the schedule feature;
executing a simulation of at least one of the generated plurality of experimental schedules;
evaluating the generated plurality of experimental schedules based on at least one outcome of the simulation, and training, based on the evaluating, one or more artificial intelligence ("AI") models that are each trained to output one or more biases in response to an input schedule; and
improving one or more performance metrics corresponding to the input schedule by optimizing, via an agglomerate network circuit, the input schedule, based on at least one of the generated plurality of experimental schedules, using one or more of the biases output by the one or more AI models, wherein at least one of input data to or output data from the agglomerate network circuit is adjusted based on the one or more biases, and wherein the optimized schedule is output with metadata listing or describing the one or more biases.
Regarding Claim 12: An apparatus comprising:

a historic schedule interpretation circuit structured to: interpret historical schedule data; and extract a difficult schedule feature from the historical schedule data;
an incentive determination circuit structured to identify a set of incentives compatible with the difficult schedule feature;
a schedule experimentation circuit structured to: receive one or more schedule modification parameters, wherein the one or more schedule modification parameters include a parameter indicating a user's level of risk tolerance for an unacceptable schedule; and generate, using a neural network, based at least in part on the one or more schedule modification parameters, a set of experimental schedules each with different incentives of the set of incentives;
a schedule evaluation circuit structured to: execute a simulation of at least one of the generated set of experimental schedules; evaluate the generated set of experimental schedules based on at least one outcome of the simulation; and train, based on the evaluating, one or more artificial intelligence ("AI") models to output one or more biases in response to an input schedule; and
a schedule optimization circuit structured to: improve one or more performance metrics corresponding to the input schedule by optimizing the input schedule, based on at least one of the generated set of experimental schedules, using the one or more biases output by the one or more AI models, wherein at least one of input data to or output data from the schedule optimization circuit is adjusted based on the one or more biases, and wherein the optimized schedule is output with metadata listing or describing the one or more biases.
Regarding Claim 19: An agglomerate network for generating experimental schedule data, the agglomerate network comprising:

a scheduler circuit structured to output schedule data, wherein the schedule data is output with metadata listing or describing one or more experimental biases applied during generation of the schedule data;
a connector circuit structured to adjust at least one of an input to the scheduler circuit or the schedule data outputted by the scheduler circuit based on the one or more experimental biases;
a schedule experimentation circuit structured to: receive one or more schedule modification parameters, wherein the one or more schedule modification parameters include a parameter indicating a user's level of risk tolerance for an unacceptable schedule; and generate the one or more experimental biases for the connector circuit, by generating, using a neural network and based at least in part on the one or more schedule modification parameters, a set of experimental schedules;
a schedule evaluation circuit structured to: execute a simulation of at least one of the generated set of experimental schedules; and evaluate the generated set of experimental schedules based on at least one outcome of the simulation, and train, based on the evaluating, one or more artificial intelligence ("AI") models to output the one or more experimental biases in response to the one or more schedule modification parameters;
the schedule experimentation circuit further structured to: transmit the one or more experimental biases to the connector circuit; and
the schedule evaluation circuit further structured to: evaluate the schedule data for performance, using one or more pre-trained AI models; and determine when the performance is below a threshold and, in response, modify the one or more schedule modification parameters to improve one or more performance metrics corresponding to the schedule data.
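Stripped of the circuit language, all three independent claims trace the same generate, simulate, evaluate, train, optimize cycle. A minimal control-flow sketch of that cycle (every name is hypothetical, and the stand-in scoring function replaces the claimed neural network, simulation, and AI models, none of which are disclosed here):

```python
import random

def simulate(candidate):
    """Placeholder outcome metric (e.g., simulated shift compliance)."""
    return random.random() + 0.1 * candidate["incentive"]

def experiment_loop(schedule, params, incentives=(0, 1, 2, 3), rounds=3):
    biases = {}
    for _ in range(rounds):
        # Generate experimental schedules, one per incentive, from the
        # schedule modification parameters (claimed: via a neural network).
        candidates = [{"schedule": schedule, "incentive": i,
                       "risk": params["risk_tolerance"]} for i in incentives]
        # Execute a simulation of each candidate and evaluate the outcomes.
        outcomes = [(c, simulate(c)) for c in candidates]
        best, _score = max(outcomes, key=lambda pair: pair[1])
        # "Train" on the evaluation: record a bias for future inputs
        # (claimed: train AI models that output biases).
        biases["preferred_incentive"] = best["incentive"]
        # Optimize the input schedule using the biases, and attach the
        # biases as metadata, as each independent claim requires.
        schedule = {"base": schedule, "applied_biases": dict(biases),
                    "metadata": sorted(biases)}
    return schedule
```

Each pass around the loop mirrors one trip through the claimed steps; the `metadata` field stands in for the claimed "metadata listing or describing the one or more biases."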
As drafted, this is, under its broadest reasonable interpretation, within the abstract idea grouping of "Mental processes—concepts performed in the human mind" (observation, evaluation, judgment, opinion), as the claims are directed towards: receiving schedule modification parameters, determining a schedule feature, and generating experimental schedules (Claim 1); extracting schedule features, receiving schedule modifications, and generating experimental schedules (Claim 12); and outputting schedule data, adjusting the schedule data output, receiving schedule modification parameters, generating experimental biases, evaluating schedule data, determining when performance is below a threshold, and modifying schedule parameters (Claim 19) — all of which are concepts capable of being performed in the human mind (i.e., via pen and paper).

Further, the claims are directed to "Certain methods of organizing human activity" — commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations) and/or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), as the claims are directed towards generating forecasts for employees (see Specification, [04, 05]).

Step 2A, Prong Two - This judicial exception is not integrated into a practical application.
The independent claims utilize at least: a neural network (recited at a high level of generality); training… one or more artificial intelligence ("AI") models (recited at a high level of generality); an agglomerate network circuit; an apparatus; a historic schedule interpretation circuit; an incentive determination circuit; a schedule experimentation circuit… to receive; a schedule evaluation circuit structured to execute a simulation; a schedule optimization circuit; an agglomerate network; a scheduler circuit… to output; a connector circuit… to adjust at least one of an input to the scheduler circuit; a schedule experimentation circuit… to receive; the schedule experimentation circuit… to transmit the set of experimental biases to the connector circuit; and one or more pre-trained AI models. The additional elements performing the steps would be no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)) and/or amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).

Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are just "apply it" on a computer. (See MPEP 2106.05(f) – Mere Instructions to Apply an Exception – "Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible." Alice Corp., 134 S. Ct. at 235). Further, the element of transmitting the set of experimental biases to the connector circuit is an activity that has been recognized by the courts as well-understood, routine, and conventional activity (see MPEP 2106.05(d), i.
Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362).

Regarding Claims 3-11 and 14-18, the claims further narrow the abstract idea or recite additional elements previously rejected in the independent claims. Accordingly, the claims fail to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4-5, 7-8, 10-12, 15-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Aslam et al. (US 20220198353 A1) in view of Johnson et al. (US 20160063192 A1), and Zehtabi et al. (US 20230112156 A1).

Regarding Claim 1, Aslam teaches: A method comprising: receiving schedule data corresponding to a schedule; (Aslam, [06]; Obtaining the corresponding time and attendance data may include receiving the corresponding time and attendance data as a csv file or via an api. Applying a machine learning algorithm to the training corpus may include applying a clustering algorithm to generate a plurality of clusters of problem shifts. The computer-implemented method may include computing a centroid value of each cluster of the plurality of clusters of problem shifts.
The computer-implemented method may include: receiving a shift schedule template that includes a newly created shift and Aslam, [07]; receiving shift bids from a set of users via one or more of the plurality of user devices; publishing a shift schedule based on the received shift bids; obtaining time and attendance data associated with the published shift schedule).

receiving at least one schedule modification parameter, (Aslam, [05]; a computer-implemented method to identify problem shifts using machine learning. The computer-implemented method also includes obtaining a plurality of published shift schedules, each published shift schedule associated with a respective shift of a respective employer of one or more employers, where each published shift schedule includes a location attribute, industry code attribute, and week indicator attribute; for each published shift schedule, obtaining a corresponding time and attendance record; programmatically analyzing the published shift schedule and the corresponding time and attendance record to determine one or more unscheduled shifts; and adding unscheduled shift data associated with the one or more unscheduled shifts to a training corpus, where the unscheduled shift data includes two or more of an employer identifier, a location identifier, a shift identifier, an industry identifier, an employee identifier, a job type identifier). Examiner interprets the unscheduled shifts as modification parameters.

determining, based at least in part on the schedule data, a schedule feature of the schedule; (Aslam, [95]; Two or more attributes in the shift schedule data, e.g., employer identifier, shift identifier, job identifier, employee identifier, location identifier, time (day, week of year, etc.) identifier may be utilized to determine a feature vector associated with each shift schedule).
identifying a set of incentives for the one or more employees for the schedule feature; (Aslam, [04]; Automated determination of problem shifts across large datasets of schedule data, and automated methods for incentivizing employees to attend can improve business productivity and Aslam, [09]; determining an incentive offer for each problem shift and Aslam, [190]; In some implementations, a lift value for a particular shift is based on a measured change in the logistical regression values with and without a randomized incentive offer. In some implementations, a logistic regression function is determined for each category of shifts, e.g., no incentive offer, timely pay shifts, bonus pay shifts, etc. A lift associated with the randomized incentive offer is based on a distance measured between respective points on the logistics regression curve (evaluated by the determined logistic regression function for each randomized incentive offer) corresponding to a particular shift.).

generating a plurality of experimental schedules using a neural network based on the at least one schedule modification parameter, (Aslam, [06]; Implementations may include one or more of the following features. The computer-implemented method where applying the machine learning algorithm to the training corpus may include determining a shift compliance metric for each problem shift. Applying the machine learning algorithm to the training corpus may include applying a logistic regression model to the training corpus. Applying a machine learning algorithm to the training corpus may include applying the machine learning algorithm to at least 400 published shift schedules.
Determining the plurality of problem shifts may include: identifying one or more decision boundaries based on logistic regression values; comparing a distance of a logistic regression value associated with each of a plurality of shifts to the one or more decision boundaries; and determining the plurality of problem shifts based on the comparison. Obtaining the corresponding time and attendance data may include receiving the corresponding time and attendance data as a csv file or via an api. Applying a machine learning algorithm to the training corpus may include applying a clustering algorithm to generate a plurality of clusters of problem shifts and Aslam, [11]; receiving a shift schedule template; identifying a plurality of problem shifts in the shift schedule template; determining an incentive offer for each problem shift; instantiating a graphical user interface (GUI) portion on a plurality of user devices; displaying the incentive offer for each of the identified problem shifts via the GUI; receiving shift bids from each of a set of users via one or more user devices of the plurality of user devices; and adjusting a shift schedule based on the received shift bids to generate a published shift schedule and Aslam, [129]; In some implementations, the ML model is a neural network). Examiner interprets the incentives paired with problem shifts as experimental shifts.

wherein each of the plurality of experimental schedules is configured to test effectiveness of different incentives of the set of incentives on the schedule feature. (Aslam, [146]; FIG. 7 is a flowchart illustrating an example method to perform a randomized testing of incentives for problem shifts, in accordance with some implementations. The method may be utilized, for example, to support evaluation of an incentive structure for resolving problem shifts and Aslam, [178]; the selection of incentive offers is based on a determination of clusters of types of problem shifts.
Selection of clusters for randomized offers is made such that similar groups (clusters) of shift slots (e.g., night shifts, long weekend shifts, weekday shift slots in a restaurant industry, etc.) are offered different types of incentive offer(s) to test the sensitivity of shift compliance to the incentive offers).

… and training,… one or more artificial intelligence ("AI") models to output one or more biases in response to an input schedule; and (Aslam, [99]; Application of the ML model is utilized to determine a shift compliance metric (odds of a shift being a problem shift), expressed as a number between 0 and 1, where 0 is indicative of a shift unlikely to be missed by an employee scheduled to work the shift, and where 1 represents a shift that is highly likely (e.g., almost definitely) to be missed and Aslam, [153]; In some implementations, the identification of the one or more problem shifts is performed by comparing the shift attributes in the received template against an already created record (table) of problem shifts and/or corresponding shift attributes generated from an ML model. The record may be created from previously generated shift compliance metric values and by applying a suitable decision boundary and Aslam, [156]; In some implementations, problem shift(s) in the shift templates are identified by determining a shift compliance metric based on the shift attributes in the shift template, and by comparing the determined shift compliance metric with predetermined thresholds and Aslam, [206]; In some implementations, the identification of the one or more problem shifts is performed by comparing the shift attributes in the received template against an already created record (table) of problem shifts and/or corresponding shift attributes generated from an ML model. The record may be created from previously generated shift compliance metric values and by applying a suitable decision boundary).
Examiner interprets the shift attributes and metrics as biases which are compared to a threshold. Examiner further notes that Aslam does not teach evaluation based on simulations, nor training based on the evaluations. The Johnson prior art below is explicitly relied upon to teach those aspects.

improving one or more performance metrics corresponding to the input schedule by optimizing, via an agglomerate network circuit, the input schedule, based on at least one of the generated plurality of experimental schedules, using one or more of the biases output by the one or more AI models, (Aslam, [05]; A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions and Aslam, [131]; FIG. 6A illustrates an example graph of logistic regression based odds of problem shifts generated as a function of shift attributes (shift feature vector) based on a trained machine learning (ML) model, in accordance with some implementations and Aslam, [156]; In some implementations, problem shift(s) in the shift templates are identified by determining a shift compliance metric based on the shift attributes in the shift template, and by comparing the determined shift compliance metric with predetermined thresholds and Aslam, [195]; Method 700, or portions thereof, may be repeated any number of times using additional inputs. In another example, blocks 710-750 may be repeated with additional incentive offers. Method 700 may be repeated until a threshold level of shift compliance is reached, or a threshold lift from a randomized incentive offer is measured.
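Aslam's cited mechanism, as summarized above, scores each shift with a compliance metric between 0 and 1 (the logistic-regression odds that the shift is a problem shift) and flags problem shifts by comparing that score against a decision boundary. A minimal sketch of that idea (the feature weights, bias, and boundary are invented for illustration; Aslam discloses no such values):

```python
import math

def compliance_metric(features, weights, bias):
    """Logistic-regression odds (0..1) that a shift is a problem shift,
    per the mechanism summarized from Aslam: higher = more likely missed."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def is_problem_shift(features, weights, bias, boundary=0.5):
    """Compare the compliance metric against a decision boundary."""
    return compliance_metric(features, weights, bias) >= boundary

# Hypothetical shift feature vector: (is_night_shift, is_weekend, hours)
weights, b = (1.2, 0.8, 0.05), -2.0
print(is_problem_shift((1, 1, 10), weights, b))  # night weekend 10h shift → True
```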
While Aslam teaches schedule modification parameters for users, Aslam does not appear to teach: wherein the at least one schedule modification parameter includes a parameter indicating a user's level of risk tolerance for an unacceptable schedule;

However, Aslam in view of the analogous art of Johnson (i.e., scheduling) does teach the entirety of the limitation: (Johnson, [190-191]; For example, factor(s) contributing to the schedule risk state change can be evaluated. A degree of schedule risk state change can be evaluated (e.g., slightly behind/ahead of schedule (e.g., within a tolerance or standard deviation), more than a threshold or tolerance behind/ahead of schedule, etc.). A circumstance of schedule risk state change can be evaluated. Based on the change in schedule risk state, a next action is determined. The next action can be based on the degree, circumstance, and/or other factor(s) associated with the schedule risk change…. At block 1714, schedule risk state can be adjusted. For example, if the degree of state change was slight (e.g., within a tolerance) and/or circumstances justify the change, then a definition of the schedule risk state (and/or adjacent/associated schedule risk state(s)) can be adjusted. For example, set point(s) defining the schedule risk states can be automatically moved by the system based on the processing of the schedule risk state change/transition and Johnson, [187]; At block 1706, one or more schedule risk states are defined based on the CPDF of completion/schedule risk for tasks in the schedule. Each schedule risk state may be associated with a duration triggering one or more setpoints defining a transition between states of schedule risk (e.g., from okay (low risk), to a warning (medium risk), to a mitigating action (high risk))). Examiner interprets the high risk as unacceptable as the system looks to mitigating action.
While Aslam teaches generating a plurality of experimental schedules, schedule modification parameters, and training of AI, Aslam does not appear to teach: executing a simulation of at least one of the generated plurality of experimental schedules.

However, Aslam in view of the analogous art of Johnson (i.e., scheduling) does teach the entirety of the limitation: (Johnson, [47]; the user interface to the schedule manager as the process is occurring is configured to indicate and display variation along with suggestions as to “do-what” and enable “what-if” and will be referred to herein as “Day View”. Day View may be thought of as a “radar” for the clinical process; it brings schedule with other location and clinical information so that the staff can know when schedule deviations are occurring, what the cause is, have a way to visualize process interdependencies, have the ability to play out or simulate alternative process decisions and ultimately get the process stakeholders constructively involved in proceeding forward in a manner that has their intellectual buy-in to the course and Johnson, [83]; certain examples enable a plurality of scheduling scenarios to be manually or dynamically entered or simulated automatically to explore an available solution space and ramifications on current and future activities. Certain examples provide suggested decisions calculated to help meet one or more static, dynamic or path-dependent configurable objectives and Johnson, [110]; Logic 803, 715 may be rule-based, example-based, evidential reasoning, fuzzy logic-based, case-based, and/or other artificial intelligence-based logic, for example and Johnson, [157]; Likewise, a day can be replayed for study or training of decision support algorithms. The replay can be historical and/or comparative to what was planned or even what the scheduling algorithm chose as a robust path forward but was not necessarily selected).
While Aslam teaches generating a plurality of experimental schedules, schedule modification parameters, and training of AI, Aslam does not appear to teach: evaluating the generated plurality of experimental schedules based on at least one outcome of the simulation, using and training, based on the evaluating (Johnson, [47]; the user interface to the schedule manager as the process is occurring is configured to indicate and display variation along with suggestions as to “do-what” and enable “what-if” and will be referred to herein as “Day View”. Day View may be thought of as a “radar” for the clinical process, it brings schedule with other location and clinical information so that the staff can know when schedule deviations are occurring, what the cause is, have a way to visualize process interdependencies, have the ability to play out or simulate alternative process decisions and ultimately get the process stakeholders constructively involved in proceeding forward in a manner that has their intellectual buy-in to the course and Johnson, [83]; certain examples enable a plurality of scheduling scenarios to be manually or dynamically entered or simulated automatically to explore an available solution space and ramifications on current and future activities. Certain examples provide suggested decisions calculated to help meet one or more static, dynamic or path-dependent configurable objectives and Johnson, [110]; Logic 803, 715 may be rule-based, example-based, evidential reasoning, fuzzy logic-based, case-based, and/or other artificial intelligence-based logic, for example and Johnson, [157]; Likewise, a day can be replayed for study or training of decision support algorithms. The replay can be historical and/or comparative to what was planned or even what the scheduling algorithm chose as a robust path forward but was not necessarily selected). 
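Johnson's "what-if" simulation and evaluation of alternative scheduling scenarios ([47], [83]) amounts to a simulate-then-rank loop over candidate schedules. A minimal sketch follows; the toy coverage model, trial count, and probability values are illustrative assumptions, not from Johnson:

```python
import random

# "What-if" sketch (per Johnson [47], [83]): each experimental schedule
# is simulated (here, a toy model where each shift has a probability of
# being covered) and scored; the evaluation outcome is what the claim
# would then use to train a decision-support model (Johnson [157]).

def simulate(schedule, trials=1000, seed=0):
    """Estimate expected shift coverage; schedule = per-shift show-up odds."""
    rng = random.Random(seed)  # fixed seed keeps the sketch deterministic
    total = 0
    for _ in range(trials):
        total += sum(1 for p in schedule if rng.random() < p)
    return total / trials  # mean shifts covered per simulated day

def evaluate(schedules):
    """Rank experimental schedules by simulated coverage outcome."""
    return sorted(schedules, key=simulate, reverse=True)

risky = [0.5, 0.6, 0.7]    # mostly problem shifts
safe = [0.9, 0.95, 0.85]   # well-covered shifts
assert evaluate([risky, safe])[0] == safe
```

The ranking step stands in for "evaluating the generated plurality of experimental schedules based on at least one outcome of the simulation"; the claimed training step would consume these scores.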
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam, including schedule modification parameters, with the teachings of Johnson, including risk tolerance parameters, in order to allow for an opportunity to mitigate risks or use corrective actions (Johnson, [192]; a reaction to the schedule risk state change is triggered. For example, in response to the transition from one schedule risk state to another, a corrective or mitigating action such as a warning (e.g., a displayed icon, text, and/or audio), an altered/corrective/mitigating workflow (e.g., to move a patient, reschedule a procedure, reallocate resources, extend overtime, etc.), etc., is automatically triggered. As shown in the example of FIG. 17, at block 1718, a warning can be generated to alert a user to the state change and/or associated risk. At block 1720, a mitigating action can be triggered to attempt to correct the state change. Such action may be based on one or more forecasts and/or what-if simulations, for example). It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam, including generating a plurality of experimental schedules, schedule modification parameters, and training of AI, with the teachings of Johnson, including executing simulations and training based on evaluations of simulations, in order to play out alternative process decisions and get stakeholders involved in proceeding forward (Johnson, [47]; a way to visualize process interdependencies, have the ability to play out or simulate alternative process decisions and ultimately get the process stakeholders constructively involved in proceeding forward in a manner that has their intellectual buy-in to the course).
While Aslam does teach generating experimental schedules and biases generated by an AI model, Aslam does not appear to teach: wherein at least one of input data to or output data from the agglomerate network circuit is adjusted based on the one or more biases, and (Zehtabi, [105]; fairness is reflected in instances where there exists a low number of available desks and a high minimum days-per-week in-office requirement. This scenario might be infeasible for a solution. In these cases, when prompted by the user, the algorithm decides which employee preferences should be relaxed. The tool ensures that the adjustment is spread across all resources being scheduled, and prevents the preferences of any one team members from being favored over the preferences of another). wherein the optimized schedule is output with metadata listing or describing the one or more biases. (Zehtabi, [80]; At step S414, the dynamic working scheduler module 302 outputs the schedules, metrics, and explanations to a user interface for display thereon. In an exemplary embodiment, the user interface may be displayed on computer screens of various employees and/or members of a particular group or team in order to provide a notification as to the upcoming work schedules and also to show the team members that the schedule is intended to maximize fairness and flexibility in scheduling within the known constraints). It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam, including generating experimental schedules and biases generated using an AI model, with the teachings of Zehtabi, including adjusting based on biases, in order to maximize fairness and flexibility in scheduling (Zehtabi, [80]; the dynamic working scheduler module 302 outputs the schedules, metrics, and explanations to a user interface for display thereon.
In an exemplary embodiment, the user interface may be displayed on computer screens of various employees and/or members of a particular group or team in order to provide a notification as to the upcoming work schedules and also to show the team members that the schedule is intended to maximize fairness and flexibility in scheduling within the known constraints). Regarding Claim(s) 4 and 15, Aslam/Johnson/Zehtabi teaches: The method of claim 1, wherein the schedule feature is an undesirable feature of the schedule for the one or more employees. (Aslam, [153]; the identification of the one or more problem shifts is performed by comparing the shift attributes in the received template against an already created record (table) of problem shifts and/or corresponding shift attributes generated from an ML model and Aslam, [178]; In some implementations, the selection of incentive offers is based on a determination of clusters of types of problem shifts. Selection of clusters for randomized offers is made such that similar groups (clusters) of shift slots (e.g. night shifts, long weekend shifts, weekday shift slots in a restaurant industry, etc.) and Aslam, [208]; In some implementations, a table of problem shifts may include shifts determined to be likely problem shifts based on expected future events, e.g., weather events, special events such as sports games, festivals, etc.). Regarding Claim(s) 5 and 16, Aslam/Johnson/Zehtabi teaches: The method of claim 1, wherein at least one of the set of incentives is monetary. (Aslam, [213]; In some implementations, the incentive offer may be a bonus pay offer, which is an offer to an employee on an amount that is greater than a customary pay). Regarding Claim(s) 7, Aslam/Johnson/Zehtabi teaches: The method of claim 1, wherein the one or more employees are incentivized to participate in an experimental schedule.
(Aslam, [09]; identifying a plurality of problem shifts in the shift schedule template, determining an incentive offer for each problem shift, instantiating a graphical user interface (GUI) portion on a plurality of user devices, displaying the incentive offer for each of the identified problem shifts via the GUI, receiving shift bids from each of a set of users via one of more user devices of the plurality of user devices, and adjusting a shift schedule based on the received shift bids to generate a published shift schedule). Regarding Claim(s) 8, Aslam/Johnson/Zehtabi teaches: The method of claim 1, wherein the one or more employees opt in for an experimental schedule. (Aslam, [10]; The incentive offer is a bonus pay offer, and where an amount associated with the bonus pay offer is based on a logistic regression value associated with the problem shift. The GUI enables a user to submit a shift bid. The shift bid is an indication of acceptance of the incentive offer by the user). Regarding Claim(s) 10, Aslam/Johnson/Zehtabi teaches: The method of claim 1, wherein the schedule feature is identified as a difficult feature of the schedule. (Aslam, [09]; One general aspect includes a computer-implemented method to improve shift coverage. The computer-implemented method also includes receiving a shift schedule template; identifying a plurality of problem shifts in the shift schedule template and Aslam, [178]; In some implementations, the selection of incentive offers is based on a determination of clusters of types of problem shifts. Selection of clusters for randomized offers is made such that similar groups (clusters) of shift slots (e.g. night shifts, long weekend shifts, weekday shift slots in a restaurant industry, etc.)). Regarding Claim(s) 11 and 18, Aslam/Johnson/Zehtabi teaches: The method of claim 10, wherein the difficult feature is at least one of consecutive time slots, late shifts, or busy shift times.
(Aslam, [178]; In some implementations, the selection of incentive offers is based on a determination of clusters of types of problem shifts. Selection of clusters for randomized offers is made such that similar groups (clusters) of shift slots (e.g. night shifts, long weekend shifts, weekday shift slots in a restaurant industry, etc.)). Regarding Claim(s) 12, Aslam/Johnson/Zehtabi teaches: An apparatus comprising: a historic schedule interpretation circuit structured to: interpret historical schedule data; and (Aslam, [05]; A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions and Aslam, [07]; obtaining time and attendance data associated with the published shift schedule; obtaining historical problem shift data; and statistically analyzing the time and attendance data, the historical problem shift data, and the published shift schedule to determine a statistical shift compliance metric associated with the randomized incentive). Examiner interprets the software, firmware, hardware, or a combination as a circuit. extract a difficult schedule feature from the historical schedule data: (Aslam, [05]; One general aspect includes a computer-implemented method to identify problem shifts using machine learning. The computer-implemented method also includes obtaining a plurality of published shift schedules, each published shift schedule associated with a respective shift of a respective employer of one or more employers…where the unscheduled shift data includes two or more of an employer identifier, a location identifier, a shift identifier, an industry identifier, an employee identifier, a job type identifier.
The method also includes applying a machine learning algorithm to the training corpus to determine a plurality of problem shifts and Aslam, [178]; In some implementations, the selection of incentive offers is based on a determination of clusters of types of problem shifts. Selection of clusters for randomized offers is made such that similar groups (clusters) of shift slots (e.g. night shifts, long weekend shifts, weekday shift slots in a restaurant industry, etc.)). an incentive determination circuit structured to identify a set of incentives compatible with the difficult schedule feature; and (Aslam, [05]; A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions and Aslam, [09]; determining an incentive offer for each problem shift). Examiner interprets the software, firmware, hardware, or a combination as a circuit. a schedule experimentation circuit structured to: receive one or more schedule modification parameters, (Aslam, [05]; a computer-implemented method to identify problem shifts using machine learning. 
The computer-implemented method also includes obtaining a plurality of published shift schedules, each published shift schedule associated with a respective shift of a respective employer of one or more employers, where each published shift schedule includes a location attribute, industry code attribute, and week indicator attribute; for each published shift schedule, obtaining a corresponding time and attendance record; programmatically analyzing the published shift schedule and the corresponding time and attendance record to determine one or more unscheduled shifts; and adding unscheduled shift data associated with the one or more unscheduled shifts to a training corpus, where the unscheduled shift data includes two or more of an employer identifier, a location identifier, a shift identifier, an industry identifier, an employee identifier, a job type identifier). Examiner interprets the unscheduled shifts as modification parameters. generate, using a neural network, based at least in part on the one or more schedule modification parameters, a set of experimental schedules each with different incentives of the set of incentives. (Aslam, [06]; Implementations may include one or more of the following features. The computer-implemented method where applying the machine learning algorithm to the training corpus may include determining a shift compliance metric for each problem shift. Applying the machine learning algorithm to the training corpus may include applying a logistic regression model to the training corpus. Applying a machine learning algorithm to the training corpus may include applying the machine learning algorithm to at least 400 published shift schedules. 
Determining the plurality of problem shifts may include: identifying one or more decision boundaries based on logistic regression values; comparing a distance of a logistic regression value associated with each of a plurality of shifts to the one or more decision boundaries; and determining the plurality of problem shifts based on the comparison. Obtaining the corresponding time and attendance data may include receiving the corresponding time and attendance data as a csv file or via an api. Applying a machine learning algorithm to the training corpus may include applying a clustering algorithm to generate a plurality of clusters of problem shifts and Aslam, [11]; receiving a shift schedule template; identifying a plurality of problem shifts in the shift schedule template; determining an incentive offer for each problem shift; instantiating a graphical user interface (GUI) portion on a plurality of user devices; displaying the incentive offer for each of the identified problem shifts via the GUI; receiving shift bids from each of a set of users via one of more user devices of the plurality of user devices; and adjusting a shift schedule based on the received shift bids to generate a published shift schedule and Aslam, [129]; In some implementations, the ML model is a neural network.). Examiner interprets the incentives paired with problem shifts as experimental shifts. 
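The logistic-regression pipeline Aslam describes, mapping shift attributes to a compliance metric between 0 and 1 ([99]) and flagging problem shifts by comparison against a decision boundary ([06]), can be sketched as follows; the feature encoding, weights, and boundary value are illustrative assumptions, not from Aslam:

```python
import math

# Sketch of Aslam's shift compliance metric: logistic regression maps a
# shift's attribute vector to the odds of the shift being a problem shift,
# where 0 = unlikely to be missed and 1 = almost surely missed (Aslam [99]).
# Shifts are flagged by comparing the value to a decision boundary.

def compliance_metric(features, weights, bias):
    """Logistic regression over a shift feature vector -> value in (0, 1)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def is_problem_shift(features, weights, bias, boundary=0.5):
    """Flag a shift whose compliance metric crosses the decision boundary."""
    return compliance_metric(features, weights, bias) >= boundary

# Features: [is_night_shift, is_weekend, shift_hours] (hypothetical encoding)
weights, bias = [2.0, 1.5, 0.1], -3.0
night_weekend = [1, 1, 8]   # sigmoid(1.3) ~ 0.79 -> problem shift
weekday_day = [0, 0, 8]     # sigmoid(-2.2) ~ 0.10 -> not a problem shift
assert is_problem_shift(night_weekend, weights, bias)
assert not is_problem_shift(weekday_day, weights, bias)
```

Aslam [06]'s comparison of "a distance of a logistic regression value... to the one or more decision boundaries" corresponds to how far `compliance_metric` lands from `boundary`.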
… and training,… one or more artificial intelligence ("AI") models to output one or more biases in response to an input schedule; and (Aslam, [99]; Application of the ML model is utilized to determine a shift compliance metric (odds of a shift being a problem shift), expressed as a number between 0 and 1, where 0 is indicative of a shift unlikely to be missed by an employee scheduled to work the shift, and where 1 represents a shift that is highly likely (e.g., almost definitely) to be missed and Aslam, [153]; In some implementations, the identification of the one or more problem shifts is performed by comparing the shift attributes in the received template against an already created record (table) of problem shifts and/or corresponding shift attributes generated from an ML model. The record may be created from previously generated shift compliance metric values and by applying a suitable decision boundary and Aslam, [156]; In some implementations, problem shift(s) in the shift templates are identified by determining a shift compliance metric based on the shift attributes in the shift template, and by comparing the determined shift compliance metric with predetermined thresholds and Aslam, [206]; In some implementations, the identification of the one or more problem shifts is performed by comparing the shift attributes in the received template against an already created record (table) of problem shifts and/or corresponding shift attributes generated from an ML model. The record may be created from previously generated shift compliance metric values and by applying a suitable decision boundary). Examiner interprets the shift attributes and metrics as biases, which are compared to a threshold. Examiner further notes that Aslam does not teach evaluation based on simulations, nor training based on the evaluations. The Johnson prior art below is explicitly relied upon to teach those aspects.
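The claimed bias flow, an AI model outputting one or more biases for an input schedule, the scheduler's input or output being adjusted based on them, and the optimized schedule carrying metadata describing the biases applied, can be sketched as follows; the field names, the stand-in model, and the adjustment rule are all hypothetical:

```python
# Sketch of the claimed bias flow: a stand-in "AI model" emits biases for
# low-coverage shifts, the schedule is adjusted based on those biases, and
# the output carries metadata listing the biases that were applied.

def bias_model(schedule):
    """Stand-in model: emit a pay-boost bias for each low-coverage shift."""
    return [{"shift": i, "kind": "incentive_boost", "amount": 5.0}
            for i, s in enumerate(schedule) if s["coverage_odds"] < 0.5]

def apply_biases(schedule, biases):
    """Adjust the schedule per bias and attach metadata describing them."""
    adjusted = [dict(s) for s in schedule]  # leave the input untouched
    for b in biases:
        adjusted[b["shift"]]["pay"] += b["amount"]
    return {"schedule": adjusted, "metadata": {"biases": biases}}

schedule = [{"pay": 20.0, "coverage_odds": 0.9},
            {"pay": 20.0, "coverage_odds": 0.3}]
out = apply_biases(schedule, bias_model(schedule))
assert out["schedule"][1]["pay"] == 25.0
assert out["metadata"]["biases"][0]["shift"] == 1
```

The `metadata` dictionary is the analogue of the "metadata listing or describing the one or more biases" limitation; in Aslam's framing, the bias would instead be the incentive paired with a problem shift.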
improve one or more performance metrics corresponding to the input schedule by optimizing the input schedule, based on at least one of the generated set of experimental schedules, using the one or more of the biases output by the one or more AI models, wherein at least one of input data to or output data from the schedule optimization circuit is adjusted based on the one or more biases, and (Aslam, [05]; A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions and Aslam, [131]; FIG. 6A illustrates an example graph of logistic regression based odds of problem shifts generated as a function of shift attributes (shift feature vector) based on a trained machine learning (ML) model, in accordance with some implementations and Aslam, [156]; In some implementations, problem shift(s) in the shift templates are identified by determining a shift compliance metric based on the shift attributes in the shift template, and by comparing the determined shift compliance metric with predetermined thresholds and Aslam, [195]; Method 700, or portions thereof, may be repeated any number of times using additional inputs. In another example, block 710-750 may be repeated with additional incentive offers. Method 700 may be repeated until a threshold level of shift compliance is reached, or a threshold lift from a randomized incentive offer is measured). While Aslam teaches schedule modification parameters for users, Aslam does not appear to teach: wherein the at least one schedule modification parameter includes a parameter indicating a user's level of risk tolerance for an unacceptable schedule; However, Aslam in view of the analogous art of Johnson (i.e. 
scheduling) does teach the entirety of the limitation: (Johnson, [190-191]; For example, factor(s) contributing to the schedule risk state change can be evaluated. A degree of schedule risk state change can be evaluated (e.g., slightly behind/ahead of schedule (e.g., within a tolerance or standard deviation), more than a threshold or tolerance behind/ahead of schedule, etc.). A circumstance of schedule risk state change can be evaluated. Based on the change in schedule risk state, a next action is determined. The next action can be based on the degree, circumstance, and/or other factor(s) associated with the schedule risk change…. At block 1714, schedule risk state can be adjusted. For example, if the degree of state change was slight (e.g., within a tolerance) and/or circumstances justify the change, then a definition of the schedule risk state (and/or adjacent/associated schedule risk state(s)) can be adjusted. For example, set point(s) defining the schedule risk states can be automatically moved by the system based on the processing of the schedule risk state change/transition and Johnson, [187]; At block 1706, one or more schedule risk states are defined based on the CPDF of completion/schedule risk for tasks in the schedule. Each schedule risk state may be associated with a duration triggering one or more setpoints defining a transition between states of schedule risk (e.g., from okay (low risk), to a warning (medium risk), to a mitigating action (high risk)))). Examiner interprets the high risk state as unacceptable, as the system looks to mitigating action. While Aslam teaches generating a plurality of experimental schedules, schedule modification parameters, and training of AI, Aslam does not appear to teach: a scheduling circuit structured to: executing a simulation of at least one of the generated plurality of experimental schedules. However, Aslam in view of the analogous art of Johnson (i.e. 
scheduling) does teach the entirety of the limitation: (Johnson, [47]; the user interface to the schedule manager as the process is occurring is configured to indicate and display variation along with suggestions as to “do-what” and enable “what-if” and will be referred to herein as “Day View”. Day View may be thought of as a “radar” for the clinical process, it brings schedule with other location and clinical information so that the staff can know when schedule deviations are occurring, what the cause is, have a way to visualize process interdependencies, have the ability to play out or simulate alternative process decisions and ultimately get the process stakeholders constructively involved in proceeding forward in a manner that has their intellectual buy-in to the course and Johnson, [83]; certain examples enable a plurality of scheduling scenarios to be manually or dynamically entered or simulated automatically to explore an available solution space and ramifications on current and future activities. Certain examples provide suggested decisions calculated to help meet one or more static, dynamic or path-dependent configurable objectives and Johnson, [110]; Logic 803, 715 may be rule-based, example-based, evidential reasoning, fuzzy logic-based, case-based, and/or other artificial intelligence-based logic, for example and Johnson, [157]; Likewise, a day can be replayed for study or training of decision support algorithms. The replay can be historical and/or comparative to what was planned or even what the scheduling algorithm chose as a robust path forward but was not necessarily selected). 
While Aslam teaches generating a plurality of experimental schedules, schedule modification parameters, and training of AI, Aslam does not appear to teach: evaluate the generated plurality of experimental schedules based on at least one outcome of the simulation, using and training, based on the evaluating (Johnson, [47]; the user interface to the schedule manager as the process is occurring is configured to indicate and display variation along with suggestions as to “do-what” and enable “what-if” and will be referred to herein as “Day View”. Day View may be thought of as a “radar” for the clinical process, it brings schedule with other location and clinical information so that the staff can know when schedule deviations are occurring, what the cause is, have a way to visualize process interdependencies, have the ability to play out or simulate alternative process decisions and ultimately get the process stakeholders constructively involved in proceeding forward in a manner that has their intellectual buy-in to the course and Johnson, [83]; certain examples enable a plurality of scheduling scenarios to be manually or dynamically entered or simulated automatically to explore an available solution space and ramifications on current and future activities. Certain examples provide suggested decisions calculated to help meet one or more static, dynamic or path-dependent configurable objectives and Johnson, [110]; Logic 803, 715 may be rule-based, example-based, evidential reasoning, fuzzy logic-based, case-based, and/or other artificial intelligence-based logic, for example and Johnson, [157]; Likewise, a day can be replayed for study or training of decision support algorithms. The replay can be historical and/or comparative to what was planned or even what the scheduling algorithm chose as a robust path forward but was not necessarily selected). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam, including schedule modification parameters, with the teachings of Johnson, including risk tolerance parameters, in order to allow for an opportunity to mitigate risks or use corrective actions (Johnson, [192]; a reaction to the schedule risk state change is triggered. For example, in response to the transition from one schedule risk state to another, a corrective or mitigating action such as a warning (e.g., a displayed icon, text, and/or audio), an altered/corrective/mitigating workflow (e.g., to move a patient, reschedule a procedure, reallocate resources, extend overtime, etc.), etc., is automatically triggered. As shown in the example of FIG. 17, at block 1718, a warning can be generated to alert a user to the state change and/or associated risk. At block 1720, a mitigating action can be triggered to attempt to correct the state change. Such action may be based on one or more forecasts and/or what-if simulations, for example). It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam, including generating a plurality of experimental schedules, schedule modification parameters, and training of AI, with the teachings of Johnson, including executing simulations and training based on evaluations of simulations, in order to play out alternative process decisions and get stakeholders involved in proceeding forward (Johnson, [47]; a way to visualize process interdependencies, have the ability to play out or simulate alternative process decisions and ultimately get the process stakeholders constructively involved in proceeding forward in a manner that has their intellectual buy-in to the course).
While Aslam does teach generating experimental schedules and biases generated by an AI model, Aslam does not appear to teach: wherein the schedule data is output with metadata listing or describing one or more experimental biases applied during generation of the schedule data; However, Aslam in view of the analogous art of Zehtabi (i.e. scheduling) does teach the entirety of the limitation: (Zehtabi, [14]; receive, via the communication interface, a first user input that relates to at least one employee preference; receive, via the communication interface, a second user input that relates to at least one manager preference; receive, via the communication interface, a third user input that relates to at least one business constraint; generate, based on the received first, second, and third user inputs, a respective schedule for each corresponding one of the plurality of persons; and output, via the communication interface, each respective schedule to a user interface for display thereon.) wherein the optimized schedule is output with metadata listing or describing the one or more biases. (Zehtabi, [80]; At step S414, the dynamic working scheduler module 302 outputs the schedules, metrics, and explanations to a user interface for display thereon. In an exemplary embodiment, the user interface may be displayed on computer screens of various employees and/or members of a particular group or team in order to provide a notification as to the upcoming work schedules and also to show the team members that the schedule is intended to maximize fairness and flexibility in scheduling within the known constraints).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam, including generating experimental schedules and biases generated using an AI model, with the teachings of Zehtabi, including generating optimized schedules using biases and further displaying biases, in order to maximize fairness and flexibility in scheduling (Zehtabi, [80]; the dynamic working scheduler module 302 outputs the schedules, metrics, and explanations to a user interface for display thereon. In an exemplary embodiment, the user interface may be displayed on computer screens of various employees and/or members of a particular group or team in order to provide a notification as to the upcoming work schedules and also to show the team members that the schedule is intended to maximize fairness and flexibility in scheduling within the known constraints). Regarding Claim(s) 19, Aslam teaches: An agglomerate network for generating experimental schedule data, the agglomerate network comprising: a scheduler circuit structured to output schedule data, (Aslam, [05]; A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions and Aslam, [07]; displaying a randomized incentive offer for each of the identified problem shifts via the GUI; receiving shift bids from a set of users via one or more of the plurality of user devices; publishing a shift schedule based on the received shift bids; obtaining time and attendance data associated with the published shift schedule; obtaining historical problem shift data; and statistically analyzing the time and attendance data, the historical problem shift data, and the published shift schedule to determine a statistical shift compliance metric associated with the
randomized incentive). Examiner interprets the software, firmware, hardware, or a combination as a circuit. a connector circuit structured to adjust at least one of an input to the scheduler circuit or the schedule data outputted by the scheduler circuit based on the one or more experimental biases; (Aslam, [05]; A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions and Aslam, [09]; determining an incentive offer for each problem shift and Aslam, [11]; receiving a shift schedule template; identifying a plurality of problem shifts in the shift schedule template; determining an incentive offer for each problem shift; instantiating a graphical user interface (GUI) portion on a plurality of user devices; displaying the incentive offer for each of the identified problem shifts via the GUI; receiving shift bids from each of a set of users via one of more user devices of the plurality of user devices; and adjusting a shift schedule based on the received shift bids to generate a published shift schedule). Examiner interprets the incentives paired with problem shifts as experimental biases. a schedule experimentation circuit structured to: receive schedule modification parameters; (Aslam, [05]; a computer-implemented method to identify problem shifts using machine learning. 
The computer-implemented method also includes obtaining a plurality of published shift schedules, each published shift schedule associated with a respective shift of a respective employer of one or more employers, where each published shift schedule includes a location attribute, industry code attribute, and week indicator attribute; for each published shift schedule, obtaining a corresponding time and attendance record; programmatically analyzing the published shift schedule and the corresponding time and attendance record to determine one or more unscheduled shifts; and adding unscheduled shift data associated with the one or more unscheduled shifts to a training corpus, where the unscheduled shift data includes two or more of an employer identifier, a location identifier, a shift identifier, an industry identifier, an employee identifier, a job type identifier). Examiner interprets the unscheduled shifts as modification parameters. generate the one or more experimental biases for the connector circuit, by generating, using a neural network and based at least in part on the one or more schedule modification parameters, a set of experimental schedules; (Aslam, [06]; Implementations may include one or more of the following features. The computer-implemented method where applying the machine learning algorithm to the training corpus may include determining a shift compliance metric for each problem shift. Applying the machine learning algorithm to the training corpus may include applying a logistic regression model to the training corpus. Applying a machine learning algorithm to the training corpus may include applying the machine learning algorithm to at least 400 published shift schedules.
Determining the plurality of problem shifts may include: identifying one or more decision boundaries based on logistic regression values; comparing a distance of a logistic regression value associated with each of a plurality of shifts to the one or more decision boundaries; and determining the plurality of problem shifts based on the comparison. Obtaining the corresponding time and attendance data may include receiving the corresponding time and attendance data as a csv file or via an api. Applying a machine learning algorithm to the training corpus may include applying a clustering algorithm to generate a plurality of clusters of problem shifts and Aslam, [11]; receiving a shift schedule template; identifying a plurality of problem shifts in the shift schedule template; determining an incentive offer for each problem shift; instantiating a graphical user interface (GUI) portion on a plurality of user devices; displaying the incentive offer for each of the identified problem shifts via the GUI; receiving shift bids from each of a set of users via one of more user devices of the plurality of user devices; and adjusting a shift schedule based on the received shift bids to generate a published shift schedule and Aslam, [129]; In some implementations, the ML model is a neural network). Examiner interprets the incentives paired with problem shifts as experimental shifts. 
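The logistic-regression workflow quoted from Aslam [06] (score each shift, then flag shifts whose logistic value crosses a decision boundary, keeping the distance to the boundary) can be sketched as below. The feature names, weights, and the 0.5 boundary are illustrative assumptions, not values taken from the reference.

```python
import math

def compliance_metric(features, weights, bias):
    """Logistic score in [0, 1]: odds that a shift is a 'problem shift'
    (1 = highly likely to be missed, 0 = unlikely to be missed)."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def classify_shifts(shifts, weights, bias, boundary=0.5):
    """Flag shifts whose logistic value crosses the decision boundary,
    recording the distance to the boundary for later comparison."""
    problems = []
    for shift_id, features in shifts.items():
        p = compliance_metric(features, weights, bias)
        if p >= boundary:
            problems.append((shift_id, p, p - boundary))
    return problems

# Hypothetical features: (is_night_shift, is_weekend, hours_over_8)
shifts = {
    "mon_day":   (0, 0, 0),
    "sat_night": (1, 1, 2),
}
weights, bias = (1.5, 1.0, 0.4), -1.2
flagged = classify_shifts(shifts, weights, bias)
```

In a fuller version the weights would come from fitting the regression on the training corpus of published schedules and attendance records, rather than being hand-set.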
… and training,… one or more artificial intelligence ("AI") models to output one or more experimental biases in response to the one or more schedule modification parameters; and (Aslam, [99]; Application of the ML model is utilized to determine a shift compliance metric (odds of a shift being a problem shift), expressed as a number between 0 and 1, where 0 is indicative of a shift unlikely to be missed by an employee scheduled to work the shift, and where 1 represents a shift that is highly likely (e.g., almost definitely) to be missed and Aslam, [153]; In some implementations, the identification of the one or more problem shifts is performed by comparing the shift attributes in the received template against an already created record (table) of problem shifts and/or corresponding shift attributes generated from an ML model. The record may be created from previously generated shift compliance metric values and by applying a suitable decision boundary and Aslam, [156]; In some implementations, problem shift(s) in the shift templates are identified by determining a shift compliance metric based on the shift attributes in the shift template, and by comparing the determined shift compliance metric with predetermined thresholds and Aslam, [206]; In some implementations, the identification of the one or more problem shifts is performed by comparing the shift attributes in the received template against an already created record (table) of problem shifts and/or corresponding shift attributes generated from an ML model. The record may be created from previously generated shift compliance metric values and by applying a suitable decision boundary). Examiner interprets the shift attributes and metrics as biases which are compared to a threshold. Examiner further notes that Aslam does not teach evaluation based on simulations, nor training based on the evaluations. The Johnson prior art below is explicitly relied upon to teach those aspects.
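Aslam [153] and [206] describe caching previously scored problem shifts in a record (table) keyed by shift attributes, so a new template can be checked by lookup rather than re-scoring. A minimal sketch, with attribute names and the 0.7 threshold as illustrative assumptions:

```python
# Table of previously identified problem shifts: attribute tuple ->
# previously computed shift compliance metric (from the ML model).
problem_shift_record = {
    ("store_12", "cashier", "sat_night"): 0.91,
    ("store_12", "stocker", "sun_early"): 0.78,
}

def identify_problem_shifts(template, record, threshold=0.7):
    """Return shifts in the template whose attributes match a recorded
    problem shift with a metric at or above the threshold."""
    hits = []
    for shift in template:
        key = (shift["location"], shift["job_type"], shift["slot"])
        metric = record.get(key)
        if metric is not None and metric >= threshold:
            hits.append((key, metric))
    return hits

template = [
    {"location": "store_12", "job_type": "cashier", "slot": "sat_night"},
    {"location": "store_12", "job_type": "cashier", "slot": "mon_day"},
]
hits = identify_problem_shifts(template, problem_shift_record)
```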
the schedule experimentation circuit further structured to: transmit the one or more experimental biases to the connector circuit; and (Aslam, [158]; a graphical user interface (GUI) is initiated and/or caused to be initiated on one or more user device(s) to highlight an incentive associated with each of the identified problem shifts. In some implementations, the GUI may be initiated soon after identification of a problem shift. In some implementations, a notification may be transmitted to one or more user devices, and the GUI activated when a user opens an App or window associated with an employer communication system. a schedule evaluation circuit further structured to: evaluate the schedule data for performance, using one or more pre-trained AI models; and (Aslam, [99]; Application of the ML model is utilized to determine a shift compliance metric (odds of a shift being a problem shift), expressed as a number between 0 and 1, where 0 is indicative of a shift unlikely to be missed by an employee scheduled to work the shift, and where 1 represents a shift that is highly likely (e.g., almost definitely) to be missed and Aslam, [146]; FIG. 7 is a flowchart illustrating an example method to perform a randomized testing of incentives for problem shifts, in accordance with some implementations. The method may be utilized, for example, to support evaluation of an incentive structure for resolving problem shifts and Aslam, [178]; the selection of incentive offers is based on a determination of clusters of types of problem shifts. Selection of clusters for randomized offers is made such that similar groups (clusters) of shift slots (e.g. night shifts, long weekend shifts, weekday shift slots in a restaurant industry, etc.) are offered different types of incentive offer(s) to test the sensitivity of shift compliance to the incentive offers). 
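The randomized incentive testing in Aslam [146] and [178] assigns different incentive types to similar clusters of problem shifts to test the sensitivity of shift compliance to each offer. A sketch under stated assumptions (cluster and incentive names are hypothetical; a fixed seed stands in for the randomization source):

```python
import random

def assign_randomized_incentives(clusters, incentive_types, seed=0):
    """Randomly pair each cluster of similar problem shifts (e.g. night
    shifts, long weekend shifts) with a distinct incentive type so the
    lift in shift compliance can be compared across offers."""
    rng = random.Random(seed)
    offers = rng.sample(incentive_types, len(clusters))
    return dict(zip(clusters, offers))

clusters = ["night_shifts", "long_weekend_shifts", "restaurant_weekday"]
incentives = ["bonus_pay", "paid_time_off", "preferred_future_shift"]
offers = assign_randomized_incentives(clusters, incentives)
```

Measuring the compliance lift per incentive type against a non-incentivized control group would complete the randomized test the reference describes.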
determine when the performance is below a threshold and, in response, modify the one or more schedule modification parameters to improve one or more performance metrics corresponding to the schedule data (Aslam, [156]; In some implementations, problem shift(s) in the shift templates are identified by determining a shift compliance metric based on the shift attributes in the shift template, and by comparing the determined shift compliance metric with predetermined thresholds and Aslam, [195]; Method 700, or portions thereof, may be repeated any number of times using additional inputs. In another example, block 710-750 may be repeated with additional incentive offers. Method 700 may be repeated until a threshold level of shift compliance is reached, or a threshold lift from a randomized incentive offer is measured. While Aslam teaches schedule modification parameters for users, Aslam does not appear to teach: wherein the at least one schedule modification parameter includes a parameter indicating a user's level of risk tolerance for an unacceptable schedule; However, Aslam in view of the analogous art of Johnson (i.e. scheduling) does teach the entirety of the limitation: (Johnson, [190-191]; For example, factor(s) contributing to the schedule risk state change can be evaluated. A degree of schedule risk state change can be evaluated (e.g., slightly behind/ahead of schedule (e.g., within a tolerance or standard deviation), more than a threshold or tolerance behind/ahead of schedule, etc.). A circumstance of schedule risk state change can be evaluated. Based on the change in schedule risk state, a next action is determined. The next action can be based on the degree, circumstance, and/or other factor(s) associated with the schedule risk change…. At block 1714, schedule risk state can be adjusted. 
For example, if the degree of state change was slight (e.g., within a tolerance) and/or circumstances justify the change, then a definition of the schedule risk state (and/or adjacent/associated schedule risk state(s)) can be adjusted. For example, set point(s) defining the schedule risk states can be automatically moved by the system based on the processing of the schedule risk state change/transition and Johnson, [187]; At block 1706, one or more schedule risk states are defined based on the CPDF of completion/schedule risk for tasks in the schedule. Each schedule risk state may be associated with a duration triggering one or more setpoints defining a transition between states of schedule risk (e.g., from okay (low risk), to a warning (medium risk), to a mitigating action (high risk)). Examiner interprets the high risk as unacceptable as the system looks to mitigating action. While Aslam teaches generating a plurality of experimental schedules, schedule modification parameters, and training of AI, Aslam does not appear to teach: a schedule evaluation circuit structured to: executing a simulation of at least one of the generated plurality of experimental schedules. However, Aslam in view of the analogous art of Johnson (i.e. scheduling) does teach the entirety of the limitation: (Johnson, [47]; the user interface to the schedule manager as the process is occurring is configured to indicate and display variation along with suggestions as to “do-what” and enable “what-if” and will be referred to herein as “Day View”.
Day View may be thought of as a “radar” for the clinical process, it brings schedule with other location and clinical information so that the staff can know when schedule deviations are occurring, what the cause is, have a way to visualize process interdependencies, have the ability to play out or simulate alternative process decisions and ultimately get the process stakeholders constructively involved in proceeding forward in a manner that has their intellectual buy-in to the course and Johnson, [83]; certain examples enable a plurality of scheduling scenarios to be manually or dynamically entered or simulated automatically to explore an available solution space and ramifications on current and future activities. Certain examples provide suggested decisions calculated to help meet one or more static, dynamic or path-dependent configurable objectives and Johnson, [110]; Logic 803, 715 may be rule-based, example-based, evidential reasoning, fuzzy logic-based, case-based, and/or other artificial intelligence-based logic, for example and Johnson, [157]; Likewise, a day can be replayed for study or training of decision support algorithms. The replay can be historical and/or comparative to what was planned or even what the scheduling algorithm chose as a robust path forward but was not necessarily selected). While Aslam teaches generating a plurality of experimental schedules, schedule modification parameters, and training of AI, Aslam does not appear to teach: evaluate the generated set of experimental schedules based on at least one outcome of the simulation, using and training, based on the evaluating (Johnson, [47]; the user interface to the schedule manager as the process is occurring is configured to indicate and display variation along with suggestions as to “do-what” and enable “what-if” and will be referred to herein as “Day View”. 
Day View may be thought of as a “radar” for the clinical process, it brings schedule with other location and clinical information so that the staff can know when schedule deviations are occurring, what the cause is, have a way to visualize process interdependencies, have the ability to play out or simulate alternative process decisions and ultimately get the process stakeholders constructively involved in proceeding forward in a manner that has their intellectual buy-in to the course and Johnson, [83]; certain examples enable a plurality of scheduling scenarios to be manually or dynamically entered or simulated automatically to explore an available solution space and ramifications on current and future activities. Certain examples provide suggested decisions calculated to help meet one or more static, dynamic or path-dependent configurable objectives and Johnson, [110]; Logic 803, 715 may be rule-based, example-based, evidential reasoning, fuzzy logic-based, case-based, and/or other artificial intelligence-based logic, for example and Johnson, [157]; Likewise, a day can be replayed for study or training of decision support algorithms. The replay can be historical and/or comparative to what was planned or even what the scheduling algorithm chose as a robust path forward but was not necessarily selected). It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam including schedule modification parameters with the teachings of Johnson including risk tolerance parameters in order to allow for an opportunity to mitigate risks or use corrective actions (Johnson, [192]; a reaction to the schedule risk state change is triggered. 
For example, in response to the transition from one schedule risk state to another, a corrective or mitigating action such as a warning (e.g., a displayed icon, text, and/or audio), an altered/corrective/mitigating workflow (e.g., to move a patient, reschedule a procedure, reallocate resources, extend overtime, etc.), etc., is automatically triggered. As shown in the example of FIG. 17, at block 1718, a warning can be generated to alert a user to the state change and/or associated risk. At block 1720, a mitigating action can be triggered to attempt to correct the state change. Such action may be based on one or more forecasts and/or what-if simulations, for example).
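The combined limitation reads onto a closed loop: simulate the experimental schedules, evaluate the outcome, and when performance is below a threshold, modify the schedule modification parameters and repeat (compare Aslam [195]'s repeat-until-threshold with Johnson's what-if simulation). A hedged sketch; `generate`, `simulate`, and the parameter update rule are placeholders, not the claimed implementation:

```python
def run_schedule_experiment(params, generate, simulate, threshold, max_rounds=10):
    """Generate experimental schedules from modification parameters,
    simulate each, and keep adjusting the parameters until a simulated
    performance threshold is met (or rounds run out)."""
    best = None
    for _ in range(max_rounds):
        schedules = generate(params)
        scored = [(simulate(s), s) for s in schedules]
        best = max(scored)  # highest simulated performance this round
        if best[0] >= threshold:
            break
        # Placeholder update: nudge the bias strength upward each round.
        params = {**params, "bias_strength": params["bias_strength"] + 0.1}
    return best

# Toy stand-ins: performance improves as bias_strength grows.
gen = lambda p: [p["bias_strength"] + d for d in (0.0, 0.05)]
sim = lambda s: min(s, 1.0)
result = run_schedule_experiment({"bias_strength": 0.5}, gen, sim, threshold=0.9)
```

In practice the evaluation step would be the pre-trained AI models scoring simulated outcomes, and the update would retrain or re-bias the generator based on that evaluation.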
While Aslam does teach generating experimental schedules and biases generated by an AI model, Aslam does not appear to teach: wherein the schedule data is output with metadata listing or describing one or more experimental biases applied during generation of the schedule data; However, Aslam in view of the analogous art of Zehtabi (i.e. scheduling) does teach the entirety of the limitation: (Zehtabi, [80]; At step S414, the dynamic working scheduler module 302 outputs the schedules, metrics, and explanations to a user interface for display thereon. In an exemplary embodiment, the user interface may be displayed on computer screens of various employees and/or members of a particular group or team in order to provide a notification as to the upcoming work schedules and also to show the team members that the schedule is intended to maximize fairness and flexibility in scheduling within the known constraints and Zehtabi, [45]; Furthermore, the computer system 102 may include any additional devices, components, parts, peripherals, hardware, software or any combination thereof which are commonly known and understood as being included with or within a computer system, such as, but not limited to, a network interface 114 and an output device 116). It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam including generating experimental schedules and biases generated using an AI model with the teachings of Zehtabi including metadata listing one or more experimental biases in order to notify team members of upcoming schedules and show that the schedules maximize fairness and flexibility within known constraints (Zehtabi, [80]; the dynamic working scheduler module 302 outputs the schedules, metrics, and explanations to a user interface for display thereon.
In an exemplary embodiment, the user interface may be displayed on computer screens of various employees and/or members of a particular group or team in order to provide a notification as to the upcoming work schedules and also to show the team members that the schedule is intended to maximize fairness and flexibility in scheduling within the known constraints). Claim(s) 3 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aslam et al. (US 20220198353 A1) in view of Johnson et al. (US 20160063192 A1), and Zehtabi et al. (US 20230112156 A1), and Dvorscak et al. (US 20200394594 A1). Regarding Claim(s) 3 and 14, While Aslam teaches schedule features and staffing coverages, Aslam does not appear to teach a feature relating to historically low employee coverage. However, Aslam in view of the analogous art of Dvorscak (i.e. scheduling) does teach: The method of claim 1, wherein the schedule feature is a feature with historically low employee coverage. (Dvorscak, [21]; Currently, there are no systems or methods to distribute an incentive budget over time intervals in a work schedule, based on agents' past behavior i.e., historical schedule changes or based on analyzed historical demand of agents to the time intervals. Incentivized time intervals today are based solely on current net staffing variance, thus a portion of the incentives-budget or even the whole incentives-budget might be wasted on time intervals that may have otherwise been accepted by agents without the incentive. Therefore, the concern would be that time intervals with high demand are unnecessarily incentivized over those with low demand.
In another case, incentive budget might be wasted when time intervals are incentivized based on net staffing value only, because when one or more time intervals have the same forecasted understaffed net staffing value, i.e., these one or more time intervals are of low yet different agents demand, then time intervals with lower agents demand might be incentivized instead of time intervals with higher agents demand). It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam including schedule features and staffing coverages with the teachings of Dvorscak including historically low coverage in order to better budget for times when business is understaffed or demand is higher (Dvorscak, [21]; the concern would be that time intervals with high demand are unnecessarily incentivized over those with low demand. In another case, incentive budget might be wasted when time intervals are incentivized based on net staffing value only, because when one or more time intervals have the same forecasted understaffed net staffing value, i.e., these one or more time intervals are of low yet different agents demand, then time intervals with lower agents demand might be incentivized instead of time intervals with higher agents demand. That is, incentivizing time intervals with lower agents demand would be a waste, compared to incentivizing time intervals with higher demand, as these would be more easily staffed by offering the incentive). Claim(s) 6, 9, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aslam et al. (US 20220198353 A1) in view of Johnson et al. (US 20160063192 A1), and Zehtabi et al. (US 20230112156 A1), and Mimassi (US 20210406815 A1). Regarding Claim(s) 6 and 17, While Aslam teaches a set of incentives, Aslam does not appear to teach the incentive being paid time off. However, Aslam in view of the analogous art of Mimassi (i.e.
scheduling) does teach: The method of claim 1, wherein at least one of the set of incentives is paid time off. (Mimassi, [71]; In some aspects, real-time staffing management microservice 113, through staff mobile device 130, may also provide information to the staff of schedule modifications or upcoming staffing needs which the staff may accept or decline. If the restaurant has entered information such as incentive pay, real-time staffing management microservice 113 may use that information to offer the restaurant staff additional monetary or other incentives (e.g. future vacation day with pay) to accept a shift schedule that is sorely needed to be filled. Such incentives may be adjusted for busy periods at the restaurant (typically around lunch and dinner) either automatically based on the restaurant's history as stored in a database(s) 120, or by retrieving information stored in a database(s)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam including incentives for shifts with the teachings of Mimassi including an incentive being paid time off in order to aid filling shifts that sorely need to be filled (Mimassi, [71]; If the restaurant has entered information such as incentive pay, real-time staffing management microservice 113 may use that information to offer the restaurant staff additional monetary or other incentives (e.g. future vacation day with pay) to accept a shift schedule that is sorely needed to be filled. Such incentives may be adjusted for busy periods at the restaurant (typically around lunch and dinner) either automatically based on the restaurant's history as stored in a database(s) 120, or by retrieving information stored in a database). Regarding Claim(s) 9, While Aslam teaches employees having the ability to opt-in to an experiment, Aslam does not appear to teach the ability to opt out.
However, Aslam in view of the analogous art of Mimassi (i.e. scheduling) does teach: The method of claim 1, wherein the employees opt out of an experimental schedule. (Mimassi, [71]; In some aspects, real-time staffing management microservice 113, through staff mobile device 130, may also provide information to the staff of schedule modifications or upcoming staffing needs which the staff may accept or decline. If the restaurant has entered information such as incentive pay, real-time staffing management microservice 113 may use that information to offer the restaurant staff additional monetary or other incentives (e.g. future vacation day with pay) to accept a shift schedule that is sorely needed to be filled). It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Aslam including the ability to opt-in to an experiment with the teachings of Mimassi including the ability for an employee to opt out of an experiment in order to keep up with real-time staffing needs and further allow the business to find employees to accept the shifts needed by possibly adjusting incentives (Mimassi, [71]; real-time staffing management microservice 113, through staff mobile device 130, may also provide information to the staff of schedule modifications or upcoming staffing needs which the staff may accept or decline. If the restaurant has entered information such as incentive pay, real-time staffing management microservice 113 may use that information to offer the restaurant staff additional monetary or other incentives (e.g. future vacation day with pay) to accept a shift schedule that is sorely needed to be filled.
Such incentives may be adjusted for busy periods at the restaurant (typically around lunch and dinner) either automatically based on the restaurant's history as stored in a database(s) 120, or by retrieving information stored in a database(s) 120 that has been manually entered by the restaurant through website/webapp 150 or restaurant mobile device 130). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L GUNN whose telephone number is (571)270-1728. The examiner can normally be reached Monday - Friday 6:30-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor can be reached on (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JEREMY L GUNN/ Examiner, Art Unit 3624

Prosecution Timeline

Jan 24, 2023
Application Filed
Jan 08, 2025
Non-Final Rejection — §101, §103
Jul 09, 2025
Response Filed
Sep 03, 2025
Final Rejection — §101, §103
Dec 01, 2025
Request for Continued Examination
Dec 11, 2025
Response after Non-Final Action
Mar 04, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572859
TAGGING OF ASSETS FOR CONTENT DISTRIBUTION IN AN ENTERPRISE MANAGEMENT SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12541728
SYSTEMS AND METHODS FOR AN INTERACTIVE CUSTOMER INTERFACE UTILIZING CUSTOMER DEVICE CONTEXT
2y 5m to grant Granted Feb 03, 2026
Patent 12524717
USE OF IDENTITY AND ACCESS MANAGEMENT FOR SERVICE PROVISIONING
2y 5m to grant Granted Jan 13, 2026
Patent 12481952
LOGISTICS MANAGEMENT METHOD, DEVICE, APPARATUS AND READABLE STORAGE MEDIUM BASED ON INTERNET OF THINGS
2y 5m to grant Granted Nov 25, 2025
Patent 12417436
Automated Parameterized Modeling And Scoring Intelligence System
2y 5m to grant Granted Sep 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
29%
Grant Probability
74%
With Interview (+45.0%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 149 resolved cases by this examiner. Grant probability derived from career allow rate.
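The "With Interview" figure follows from the career allow rate plus the examiner's interview lift, assuming the lift is additive in percentage points and capped at 100% (an assumption about how this panel derives its number):

```python
def with_interview(base_rate_pct, lift_pct_points):
    """Projected grant probability after an examiner interview,
    assuming an additive lift in percentage points, capped at 100%."""
    return min(base_rate_pct + lift_pct_points, 100.0)

# 29% career allow rate + 45.0-point interview lift -> the 74% shown above.
projected = with_interview(29.0, 45.0)
```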
