Prosecution Insights
Last updated: April 19, 2026
Application No. 17/481,120

SYSTEMS, METHODS, AND APPARATUSES FOR EVALUATING WAIT TIMES AND QUEUE LENGTHS AT MULTI-STATION AND MULTI-STAGE SCREENING ZONES VIA A DETERMINISTIC DECISION SUPPORT ALGORITHM

Status: Non-Final OA (§101, §103, §112)
Filed: Sep 21, 2021
Examiner: BOLEN, NICHOLAS D
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Arizona Board of Regents
OA Round: 3 (Non-Final)

Grant Probability: 10% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 4y 3m
Grant Probability with Interview: 20%

Examiner Intelligence

Career Allow Rate: 10% (12 granted / 122 resolved; -42.2% vs TC avg)
Interview Lift: +10.5% for resolved cases with interview (a moderate lift)
Typical Timeline: 4y 3m average prosecution; 29 applications currently pending
Career History: 151 total applications across all art units

Statute-Specific Performance

§101: 36.5% (-3.5% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Comparisons are against the Tech Center average estimate, based on career data from 122 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

Claims 1-15, 17 and 19-21 are presently amended. Claims 1-21 are pending.

Response to Amendment

Applicant's amendments are acknowledged.

Response to Arguments

Applicant's arguments filed 3/10/2026 have been fully considered in view of further consideration of statutory law, Office policy, precedential common law, and the cited prior art as necessitated by the amendments to the claims, and are persuasive in part for the reasons set forth below.

Claim Interpretation

First, Applicant argues that "…the Office Action indicates that the feature 'an adjustment module' in claims 15, 20, and 21 was interpreted under 35 U.S.C. § 112(f). Applicant has amended claims 15, 20, and 21 to remove the term 'module' and instead introduce specific structural execution features… Applicant respectfully submits that the claims, as amended, include sufficient structure to perform the functions. Accordingly, Applicant respectfully submits that interpretation under 35 U.S.C. § 112(f) is no longer applicable, rendering the issue moot" [Arguments, page 12].

In response, Applicant's arguments are considered and are persuasive. Examiner observes that the presently amended claims no longer invoke a 35 U.S.C. § 112(f) interpretation.

35 USC § 101 Rejections

First, Applicant argues that "…The Office Action asserts that the claims are directed to the abstract ideas of 'organizing human activity' and 'mental processes.' Applicant respectfully disagrees. Applicant respectfully directs the Examiner's attention to the USPTO's recently issued SMED Examiner Memo regarding subject matter eligibility ('USPTO SMED' hereinafter).
Because this new guidance was issued and publicized essentially concurrently with the mailing of the current Office Action, Applicant recognizes that the Examiner may not have had sufficient notice or opportunity to consider this updated framework prior to issuing the rejection. The recent USPTO SMED guidance clarifies how the mental process grouping should be applied, specifically noting that an applicant may successfully refute a 'mental process' allegation by demonstrating that the claimed operations 'cannot practically be performed in the human mind.' To further clarify the nature of the claimed operations, Applicant has amended claim 1 to include the features: 'executing a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths' and 'generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors.' As demonstrated by the quoted features above, amended claim 1 now includes features that align directly with the operations protected by the new guidance. Specifically, the feature of 'executing a machine learning model configured to estimate adjusting factors by minimizing error' involves operations that 'cannot practically be performed in the human mind.' Accordingly, Applicant respectfully submits that the features of claim 1 do not constitute mental steps under Step 2A, Prong One. For at least these reasons, claims 1-21 recite patentable subject matter, and Applicant respectfully requests that the rejection under 35 U.S.C. § 101 be withdrawn" [Arguments, pages 12-13].

In response, Applicant's arguments are considered but are not persuasive. Examiner respectfully maintains that the present claims recite an abstract idea without significantly more.
In response to the assertion that "the feature of 'executing a machine learning model configured to estimate adjusting factors by minimizing error' involves operations that 'cannot practically be performed in the human mind,'" Examiner respectfully observes that the claims of the present invention have not been rejected under the abstract idea grouping of "mental processes". Instead, the claims have been rejected under the abstract idea grouping of "certain methods of organizing human activity". Examiner further respectfully maintains that the presently amended claims remain directed to certain methods of organizing human activity. In particular, the presently amended limitations describe steps for managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. Specifically, predicting passenger wait times and allocating officers based on passenger volumes is considered to describe steps for managing personal behavior as well as interactions between people. Thus, claims 1, 13, 14, 15, 20 and 21 recite concepts identified as abstract ideas. As such, Examiner remains unpersuaded.

35 USC § 103 Rejections

First, Applicant argues that "Applicant has amended claim 1 to now include the features: 'executing a mechanistic model to generate an initial passenger arrival prediction based on a business fundamentals data set' and 'executing a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths,' and further to include the feature 'generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors.' The applied references do not disclose or suggest this combination of features. For at least these reasons, amended independent claim 1 is patentable" [Arguments, pages 13-14].
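For readers following the dispute, the amended claim-1 flow argued above (a mechanistic baseline prediction corrected by adjusting factors fitted against observations to form a hybrid prediction) can be sketched roughly as follows. This is an illustrative reconstruction, not code from the application; every function name, parameter, and number is hypothetical.

```python
# Illustrative sketch: a mechanistic model produces an initial arrival
# prediction from business fundamentals, and an adjusting factor fitted
# by minimizing squared error yields a hybrid prediction.

def mechanistic_prediction(scheduled_seats, expected_load_factor):
    """Initial passenger arrival prediction per period, e.g. scheduled
    seats multiplied by an expected load factor (hypothetical model)."""
    return [seats * expected_load_factor for seats in scheduled_seats]

def fit_adjusting_factor(initial, observed):
    """Estimate a multiplicative adjusting factor by minimizing
    sum_t (f * initial[t] - observed[t])^2; the closed-form
    least-squares solution is computed below."""
    numerator = sum(p * y for p, y in zip(initial, observed))
    denominator = sum(p * p for p in initial)
    return numerator / denominator

def hybrid_prediction(initial, factor):
    """Adjust the initial prediction using the fitted factor."""
    return [p * factor for p in initial]

scheduled_seats = [180, 220, 150]   # seats per period (hypothetical)
initial = mechanistic_prediction(scheduled_seats, 0.85)
observed = [160.0, 170.0, 140.0]    # observed arrivals (hypothetical proxy)
factor = fit_adjusting_factor(initial, observed)
hybrid = hybrid_prediction(initial, factor)
```

The claim recites a machine learning model estimating adjusting factors; a single least-squares scalar stands in here only to make the "minimizing error" step concrete.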
In response, Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Examiner respectfully maintains that the art of record renders the above-argued claim limitations obvious for the reasons set forth in the rejection below. As such, Examiner remains unpersuaded.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Regarding claim 3, the phrase "…or a variation thereof" renders the claim(s) indefinite because the claim(s) include(s) elements not actually disclosed (those encompassed by "…or a variation thereof"), thereby rendering the scope of the claim(s) unascertainable. See MPEP § 2173.05(d).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1: Claims 1-21 are directed to statutory categories, namely processes (claims 1-12 and claim 20), machines (claim 13 and claims 15-19), and articles of manufacture (claim 14 and claim 21).

Step 2A, Prong 1: Claims 1, 13, 14, 15, 20 and 21, in part, recite the following abstract ideas:

A method of dynamically allocating resources in multi-stage screening zones using a hybrid analytical model of… the method comprising: receiving, by… observed wait times and queue lengths at multi-station and multi-stage screening zones; receiving… user specified configuration selections for processing the wait times and queue lengths; executing a mechanistic model to generate an initial passenger arrival prediction based on a business fundamentals data set defining one or more of flight departure schedules, airplane capacities, and expected number of passengers; executing… configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors; applying… using the hybrid arrival prediction, an algorithm to yield future predicted wait times and queue lengths at the multi-station and multi-stage screening zones based at least in part on the observed wait times and queue lengths and the user specified configuration selections; periodically retraining the… based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; accepting, by… the observed wait times and queue lengths as initial starting conditions; incrementally updating, by… queue lengths at each stage to the start of a next period by adding any arrivals during a previous period and subtracting throughput for the respective stage based on a number of customers served; sequentially processing, by… each of the stages of the multi-station and multi-stage screening zones to compute a number served at each stage during the time interval as the minimum of a service capacity based on (i) a number of service stations open and based further on (ii) a service rate per station provided by the user specified configuration selections, (iii) a number of initial customers in queue plus those arriving, and (iv) a service rate of a subsequent workstation when the subsequent station buffer space is full; and computing and outputting, by… a predicted wait time for any passenger by progressing the passenger in a first-come first-served manner through a network of service queues affiliated with each of the multi-station and multi-stage screening zones (Claim 1),

…dynamically allocating resources in multi-stage screening zones using a hybrid predictive model of…; receive… observed wait times and queue lengths at multi-station and multi-stage screening zones; receive…, user specified configuration selections for processing the wait times and queue lengths; execute a mechanistic model to generate an initial passenger arrival prediction based on a business fundamentals data set defining one or more of flight departure schedules, airplane capacities, and expected number of passengers; execute… configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generate a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors; apply… using the hybrid arrival prediction, an algorithm to yield future predicted wait times and queue lengths at the multi-station and multi-stage screening zones based at least in part on the observed wait times and queue lengths and the user specified configuration selections; periodically retrain… based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; wherein… is configured to: accept the observed wait times and queue lengths as initial starting conditions and incrementally update queue lengths at each stage to the start of a next period by adding any arrivals during a previous period and subtracting throughput for the respective stage based on a number of customers served; sequentially process each of the stages of the multi-station and multi-stage screening zones to compute a number served at each stage during a time interval as the minimum of a service capacity based on (i) a number of service stations open and based further on (ii) a service rate per station provided by the user specified configuration selections, (iii) a number of initial customers in queue plus those arriving, and (iv) a service rate of the subsequent workstation when a subsequent station buffer space is full; and compute and output a predicted wait time for any passenger by progressing that passenger in a first-come first-served manner through a network of service queues affiliated with each of the multi-station and multi-stage screening zones (Claim 13),

…to perform operations including: receiving…, observed wait times and queue lengths at multi-station and multi-stage screening zones; receiving…, user specified configuration selections for processing the wait times and queue lengths; executing a mechanistic model to generate an initial passenger arrival prediction based on a business fundamentals data set defining one or more of flight departure schedules, airplane capacities, and expected number of passengers; executing… configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors; applying… using a hybrid arrival prediction, an algorithm to yield future predicted wait times and queue lengths at the multi-station and multi-stage screening zones based at least in part on the observed wait times and queue lengths and the user specified configuration selections; periodically retraining… based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; accepting, by…, the observed wait times and queue lengths as initial starting conditions and incrementally updating queue lengths at each stage to the start of the next period by adding any arrivals during a previous period and subtracting throughput for the respective stage based on a number of customers served; sequentially processing, by… each of the stages of the multi-station and multi-stage screening zones to compute a number served at each stage during a time interval as the minimum of a service capacity based on (i) a number of service stations open and based further on (ii) a service rate per station provided by the user specified configuration selections, (iii) a number of initial customers in queue plus those arriving, and (iv) a service rate of a subsequent workstation when the subsequent station buffer space is full; and computing and outputting, by… the predicted wait time for any passenger by progressing that passenger in a first-come first-served manner through a network of service queues affiliated with each of the multi-station and multi-stage screening zones (Claim 14),

A system for predicting passenger arrivals and allocating Transportation Security Officers (TSOs) within a multi-station and multi-stage security screening area having a plurality of Security Screening Checkpoints (SSCPs), wherein the system comprises: …; retrieve a business fundamentals data set comprising at least one of: flight departure schedules, airplane capacities, and expected number of passengers; execute a mechanistic prediction model configured to generate an initial passenger arrival prediction based at least in part on the business fundamentals data set; retrieve observed screening data comprising a number of passengers processed at each SSCP, the observed screening data serving as a proxy for actual passenger arrivals at each SSCP; execute… configured to estimate adjustment factors for the proxy by minimizing error between the mechanistic prediction and the observed screening data to refine passenger arrival estimations; adjust the mechanistic prediction based on historical passenger data, wherein… is periodically retrained based on accuracy thresholds relative to operational performance metrics; and allocate TSOs to one or more of the SSCPs based at least in part on refined passenger arrival predictions, adjusting allocations in response to real-time changes in predicted passenger volumes (Claim 15),

A method performed by… therein predicting passenger arrivals and allocating Transportation Security Officers (TSOs) within a multi-station and multi-stage security screening area having a plurality of Security Screening Checkpoints (SSCPs), wherein the method comprises: retrieving… a business fundamentals data set comprising at least one of: flight departure schedules, airplane capacities, and expected number of passengers; executing… a mechanistic prediction model configured to generate an initial passenger arrival prediction based at least in part on the business fundamentals data set; retrieving… observed screening data comprising a number of passengers processed at each SSCP, the observed screening data serving as a proxy of actual passenger arrivals at each SSCP; executing… configured to estimate adjustment factors for the proxy by minimizing error between the mechanistic prediction and the observed screening data to refine passenger arrival estimations; predicting passenger volumes based on historical passenger data, wherein… is periodically retrained based on accuracy thresholds relative to operational performance metrics; and allocating… TSOs to one or more of the SSCPs based at least in part on refined passenger arrival predictions, adjusting allocations in response to real-time changes in predicted passenger volumes (Claim 20),

…for predicting passenger arrivals and allocating Transportation Security Officers (TSOs) within a multi-station and multi-stage security screening area having a plurality of Security Screening Checkpoints (SSCPs), wherein the instructions, when executed, cause the system to perform operations including: retrieving a business fundamentals data set comprising at least one of: flight departure schedules, airplane capacities, and expected number of passengers; executing a mechanistic prediction model configured to generate an initial passenger arrival prediction based at least in part on the business fundamentals data set; retrieving observed screening data comprising a number of passengers processed at each SSCP, the observed screening data serving as a proxy of actual passenger arrivals at each SSCP; executing… configured to estimate adjustment factors for the proxy by minimizing error between the mechanistic prediction and the observed screening data to refine passenger arrival estimations; predicting passenger volumes based on historical data, wherein… is periodically retrained based on accuracy thresholds relative to operational performance metrics; and allocating TSOs to one or more of the SSCPs based at least in part on refined passenger arrival predictions, adjusting allocations in response to real-time changes in predicted passenger volumes (Claim 21).

These concepts are not meaningfully different from the following concepts identified by the MPEP: concepts relating to certain methods of organizing human activity. The aforementioned limitations describe steps for managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. Specifically, predicting passenger wait times and allocating officers based on passenger volumes is considered to describe steps for managing personal behavior as well as interactions between people. As such, claims 1, 13, 14, 15, 20 and 21 recite concepts identified as abstract ideas.

The dependent claims recite limitations relative to the independent claims, including, for example: …exploring a hypothetical "what if" scenario created by an end user by: receiving… manually adjusted input parameters overriding the observed wait times and queue lengths at the multi-station and multi-stage screening zones; processing the manually adjusted input parameters via the algorithm of… to output new predicted wait times and queue lengths at the multi-station and multi-stage screening zones; and displaying the new predicted wait times and queue lengths in fulfillment of the hypothetical "what if" scenario created by the end user [Claim 2], …applying, with… a first-come first-served queue discipline, or a variation thereof [Claim 3], …generating, with… the predicted wait times and queue lengths by converting a dynamic stream of customer arrivals and planned staffing levels for a multistage, parallel processor, finite queue, serial flow network into estimates of queue lengths and throughput times at each processing stage at each point in time [Claim 4]. The limitations of these dependent claims are merely narrowing the abstract idea identified in the independent claims, and thus, the dependent claims also recite abstract ideas.
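The sequential queue-update recited in claims 1, 13 and 14 (the number served at each stage computed as the minimum of station capacity, customers waiting plus arrivals, and downstream availability) can be sketched as a one-period update over a serial flow network. This is one illustrative reading of the claim language, not the application's implementation; every name and value below is hypothetical.

```python
# One-period update for a serial, multi-stage, finite-buffer queue
# network, following the claim language: served at each stage is the
# minimum of (i)-(ii) service capacity (open stations times service
# rate per station), (iii) customers initially in queue plus those
# arriving, and (iv) what the next stage can absorb when its buffer
# constrains flow. All parameters are hypothetical.

def step(queues, arrivals, open_stations, rate_per_station, buffer_cap):
    """Advance every stage by one time interval, processing stages
    sequentially; customers served at one stage flow into the next."""
    n = len(queues)
    new_queues = list(queues)
    served = [0] * n
    inflow = arrivals  # customers arriving to stage 0 this period
    for i in range(n):
        capacity = open_stations[i] * rate_per_station[i]
        waiting = new_queues[i] + inflow
        if i + 1 < n:
            # downstream availability: free buffer space plus what the
            # next stage can itself serve this period
            downstream = (buffer_cap[i + 1] - new_queues[i + 1]
                          + open_stations[i + 1] * rate_per_station[i + 1])
        else:
            downstream = waiting  # last stage: customers exit the system
        served[i] = max(0, min(capacity, waiting, downstream))
        new_queues[i] = waiting - served[i]
        inflow = served[i]
    return new_queues, served

# Two stages (e.g. document check, then scanning), one period:
queues, served = step(queues=[5, 2], arrivals=3,
                      open_stations=[2, 1], rate_per_station=[2, 3],
                      buffer_cap=[10, 4])
```

Iterating this update over successive intervals and tracking a passenger's position first-come first-served through each stage would yield the kind of predicted wait time the claims describe.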
Step 2A, Prong 2: This judicial exception is not integrated into a practical application. In particular, claims 1, 13, 14, 15, 20 and 21 only recite the following additional elements – A Visual Analytics and Decision Support System platform (VADSS platform)… …a processor configured to execute instructions stored in a memory…; …by the processor…; …a machine learning model…; …by the processor…; …the machine learning model…; …the machine learning model…; …the machine learning model…; …the machine learning model… (Claim 1), A system… of a Visual Analytics and Decision Support System platform (VADSS platform) wherein the system comprises: a memory; and one or more processors configured to execute instructions stored in the memory, the one or more processors configured to: … by the one or more processors…; …a machine learning model…; …by the one or more processors…; …by the one or more processors…; …the machine learning model…; …the machine learning model…; …the machine learning model…; …the machine learning model… (Claim 13), Non-transitory computer readable storage media having instructions stored thereupon that, when executed by a Visual Analytics and Decision Support System platform (VADSS platform) having at least a processor and a memory therein, the instructions cause the VADSS platform to… by a processor configured to execute instructions stored in a memory…; …by the processor…; …a machine learning model…; …by the processor…; …the machine learning model…; …the machine learning model…; …the machine learning model…; …the machine learning model… (Claim 14), …a memory to store instructions; …one or more processors configured to execute instructions stored in the memory, the one or more processors configured to…; …a machine learning time series model…; …the machine learning time series model… (Claim 15), …a system having at least a processor and a memory…; …by the processor…; …by the processor…; …by the processor…; …by the processor, a machine learning time series model…; …the machine learning time series model…; …by the processor… (Claim 20), Non-transitory computer readable storage media having instructions stored thereupon that, when executed by a system having at least a processor and a memory therein…; …a machine learning time series model…; …the machine learning time series model… (Claim 21). The dependent claims recite the following new additional elements – …a set of tabular and graphical interface displays… (Claim 8), …an Advanced Imaging Technology (AIT) full body scanner or a Walk Through Metal Detector (WTMD)… (Claim 18).

The apparatus, processor, memory, interfaces and executable instructions are recited at a high level of generality (see MPEP § 2106.05(a)), like the following MPEP example: iii. Gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48. Furthermore, the computer-implemented element is considered to amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)), like the following MPEP example: i. A commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).

Accordingly, these additional elements do not integrate the abstract idea into a practical application. The remaining dependent claims do not recite any new additional elements, and thus do not integrate the abstract idea into a practical application.
Step 2B: Claims 1, 13, 14, 15, 20 and 21 and their underlying limitations, steps, features and terms, considered both individually and as a whole, do not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the following reasons:

The independent claims only recite the following additional elements – A Visual Analytics and Decision Support System platform (VADSS platform)… …a processor configured to execute instructions stored in a memory…; …by the processor…; …a machine learning model…; …by the processor…; …the machine learning model…; …the machine learning model…; …the machine learning model…; …the machine learning model… (Claim 1), A system… of a Visual Analytics and Decision Support System platform (VADSS platform) wherein the system comprises: a memory; and one or more processors configured to execute instructions stored in the memory, the one or more processors configured to: … by the one or more processors…; …a machine learning model…; …by the one or more processors…; …by the one or more processors…; …the machine learning model…; …the machine learning model…; …the machine learning model…; …the machine learning model… (Claim 13), Non-transitory computer readable storage media having instructions stored thereupon that, when executed by a Visual Analytics and Decision Support System platform (VADSS platform) having at least a processor and a memory therein, the instructions cause the VADSS platform to… by a processor configured to execute instructions stored in a memory…; …by the processor…; …a machine learning model…; …by the processor…; …the machine learning model…; …the machine learning model…; …the machine learning model…; …the machine learning model… (Claim 14), …a memory to store instructions; …one or more processors configured to execute instructions stored in the memory, the one or more processors configured to…; …a machine learning time series model…; …the machine learning time series model… (Claim 15), …a system having at least a processor and a memory…; …by the processor…; …by the processor…; …by the processor…; …by the processor, a machine learning time series model…; …the machine learning time series model…; …by the processor… (Claim 20), Non-transitory computer readable storage media having instructions stored thereupon that, when executed by a system having at least a processor and a memory therein…; …a machine learning time series model…; …the machine learning time series model… (Claim 21).

These elements do not amount to significantly more than the abstract idea for the reasons discussed in Step 2A, Prong 2 with regard to MPEP 2106.05(a) and MPEP 2106.05(f). By the failure of the elements to integrate the abstract idea into a practical application there, the additional elements likewise fail to amount to an inventive concept that is significantly more than an abstract idea here, in Step 2B. As such, both individually and in combination, these limitations do not add significantly more to the judicial exception.

The remaining dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the dependent claims do not recite any new additional elements other than those mentioned in the independent claims, which amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). As such, these claims are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Robertson et al., U.S. Publication No. 2004/0193473 [hereinafter Robertson] in view of Sahay et al., U.S. Publication No. 2016/0363450 [hereinafter Sahay] and in further view of Szeto et al., U.S. Publication No. 2017/0124487 [hereinafter Szeto].

Regarding Claim 1, Robertson discloses …A method of dynamically allocating resources in multi-stage screening zones using a hybrid predictive model of a Visual Analytics and Decision Support System platform (VADSS platform), the method comprising: receiving, … observed wait times and queue lengths at multi-station and multi-stage screening zones (Robertson, ¶ 98, Referring now to FIG. 9, another embodiment of the present invention provides an effective security scheduling system 900. As depicted in FIG. 9, the effective security scheduling system 900 generally includes separate modules that are interconnected to implement the steps in the effective security scheduling method 300.
Specifically, the effective security scheduling system 900 includes a demand forecasting module 910. The demand forecasting module 910 accepts input data related to the facility. For instance, security demand at an airport may be forecasted using flight schedules, flight capacity data, and predetermined demand distribution curves, as described above), (Id., ¶ 11, the present invention has specific application to staffing security checkpoints. In this embodiment, the number of needed open stations in security checkpoints is determined (discloses multi-station screening zones) by translating the variable demand for security at different times and using linear programming to optimize and determine a schedule as needed to staff the needed number of open stations), (Id., ¶ 25, Each of the stations is separately staffed with a number of employees as needed. For instance, a security station may use five employees, each manning a component of the security station (discloses multi-stage screening zones) (a walk-through metal detector, an x-ray machine, a hand-held metal detector, a station to manually search personal belongings, and an area to perform other security tests). Obviously, any number of people may be staffed to a station. A station may also be partially staffed, operating at a lower level of throughput as the security workers are required to perform more than one function. Furthermore, additional workers may be staffed to a security checkpoint to improve the throughput of that station. In this way, the capacity of security checkpoints generally corresponds to the number of security workers staffed at the security stations), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint.
The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies); receiving, …user specified configuration selections for processing the wait times and queue lengths; (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 100, Continuing with FIG. 9, a schedule-defining module 930 uses user-defined inputs (discloses interface for configuration selections) and the outputs from the demand forecasting module 910 and the checkpoint simulation module 920 to create a security work schedule. As described above in FIG. 8B and the associated text, the user-defined inputs generally include data related to the number of security workers and the conditions of work for these workers. This type of information includes the shift length, possible starting and end times, shift frequency, breaks, etc. associated with each of the workers.
Furthermore, the user-defined inputs may include constraints limiting potential staffing configurations, such as limiting the staffing of certain positions to workers with sufficient experience); executing a mechanistic model to generate an initial passenger arrival prediction based on a business fundamentals data set defining one or more of flight departure schedules, airplane capacities, and expected number of passengers (Id., ¶ 42, The black-box security checkpoint model 2 (discloses model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 34, FIG. 5 depicts an exemplary demand curve 500 representing the demand attributable to a single event at 6 PM, such as a flight or a public event.
In curve 500, increasing numbers of people arrive at the checkpoint before 6 PM, but the number of the people drops off rapidly thereafter (discloses passenger arrival predictions)), (Id., ¶ 37, It should be appreciated that the above-described method for estimating demand at the security checkpoint, while presented in the context of an airport or seaport, may be used in a variety of circumstances. For instance, the above-described method may be used to determine security screening demand at a large volume event, such as a concert or sports contest. The total number of people may then be estimated as the number of ticket-holders minus forecasted non-attendance. The instantaneous demand at the security checkpoint may then be determined using a demand curve for the event), (Id., ¶ 107, The changes in the needed number of workers over an extended period may be predicted through forecasting the needed number of security stations in step 400 and defining an effective schedule in step 800, both over the extended period of interest. For instance, the needed number of security stations at an airport may be forecasted over an extended period to form the extended needed worker graph 1000 by examining the number of flights departing from the airport (discloses flight departure schedules), the load factors for these flights (discloses airplane capacities), etc., as described above in FIG. 4B and the associated text), (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers.
This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport); applying, … using the hybrid arrival prediction, an algorithm to yield future predicted wait times and queue lengths at the multi-station and multi-stage screening zones based at least in part on the observed wait times and queue lengths and the user specified configuration selections (Id., ¶ 42, The black-box security checkpoint model 2 functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. 
The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 60, Returning to FIG. 8A, an effective schedule is formed in step 820 using the worker data from steps 811, 812, and 813. In the field of employee staffing and scheduling, several techniques are known to create an optimized schedule using the worker data, such as the information described above in steps 811, 812, and 813. For instance, an optimized schedule for a security checkpoint may be formed using linear programming, quadratic or mixed-integer programming, nonlinear optimization, global optimization, non-smooth optimization using genetic and evolutionary algorithms, and constraint programming methods from artificial intelligence); …sequentially processing, by …, each of the stages of the multi-station and multi-stage screening zones to compute a number served at each stage during a time interval as the minimum of a service capacity based on (i) a number of service stations open and based further on (ii) a service rate per station provided by the user specified configuration selections, (iii) a number of initial customers in queue plus those arriving, and (iv) a service rate of a subsequent workstation when the subsequent station buffer space is full (Id., ¶ 40, The security checkpoint may be modeled in step 420 using a certain number of open stations (discloses number of service stations open).
The security checkpoint is then modeled again using a different number of open stations. The results from the two models may be compared to choose a desirable number of open stations. Typically, reducing the number of stations is detrimental to service measures, such as waiting time, but reduces employment costs. In this way, the model may then be used to provide a fact-based forecast of the varying number of stations. It should be appreciated that the modeling of the security checkpoint does not schedule workers. Instead, the model provides an optimal number of open stations per time period as needed to meet various service measures (and thus, the optimal number of security workers for each of the time periods). The actual staffing of the security workers is described below), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 42, The black-box security checkpoint model 2 functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds (discloses specified service rates per station). Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc.
The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 36, The number of passengers arriving at the security checkpoint may be divided into fixed time periods (discloses number of passengers in queue including those arriving), such as 30-minute intervals. The average demand during each of the periods may then be displayed, as illustrated in total demand curve 600' in FIG. 6B, as the horizontal line in each of the boxes. The overall number of passengers during the time period will be the area of the box, or the average demand multiplied by the time period), (Id., ¶ 49, As described in U.S. application Ser. No. 10/293,469, the models 2 and 10 may also be used to calculate the effect of policy changes, such as estimating the impact of adding another security test or incorporating different security equipment. Specifically, the model supports data modeling and simulation by providing quantitative modeling support and analysis to develop fact-based recommendations for policy decisions. For example, the model 10 may be used to simulate checkpoint staffing requirements such as a required number of wanders, bag searchers, etc. for various checkpoint configurations. The model 10 may also be used to simulate checkpoint equipment requirements, such as a required number of X-ray machines for various station configurations. The model 10 may further be used to recommend checkpoint staffing for peak volume and non-peak operations.
Similarly, the model 10 may be used to assess (1) continuous (random) policy compliance levels for security devices; (2) the impact of alternative, gender-based scanning policies; (3) the impact of eliminating or adding various screening steps in the security checkpoint; (4) the impact of check-in counter wait time on security checkpoint demand; or (5) the impact of reduced station staffing on checkpoint operations), (Id., ¶ 50, The data modeling provides analytical support for security checkpoint operations focusing on resource requirements (equipment & staffing), process performance, customer experience and cost. For instance, the model 10 may be modified to provide analytical support for various resource requirement policy concerns such as: employee work rules (impact of number of breaks, lunch, training, etc.); reduced checkpoint staffing requirements (impacts of reduced staff on checkpoint operations); reduced airport staffing requirements (optimized scheduling of shared resources across airport); new staffing requirements based on process changes (i.e., checkpoint selectee screening); or annual labor planning based on seasonal demand (workforce management on annual basis). Specifically, the addition/subtraction of requirements in a checkpoint may be modeled through the addition/elimination of substeps in the model 10), (Id., ¶ 51, By varying the values in the model 10, the model 10 further provides analytic support for various checkpoint process change policy concerns such as: process changes or re-designs (i.e., new security directives which change process steps or time); new technology inserted into the existing or redesigned process (i.e., new type of x-ray); or emergency response planning (concourse dumps, checkpoint shutdowns, etc.).
Specifically, these process changes refer to modification of processes already included in a model 10); computing and outputting, by … the predicted wait time for any passenger by progressing the passenger in a first-come first-served manner through a network of service queues affiliated with each of the multi-station and multi-stage screening zones (Id., ¶ 42, The black-box security checkpoint model 2 (discloses analytical model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time (discloses output of predicted wait time) in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 44, In a preferred embodiment of the present invention, the security checkpoint is modeled as described in co-owned U.S. patent application Ser. No. 10/293,469 entitled SECURITY CHECKPOINT SIMULATION, the disclosure of which is hereby incorporated by reference in full. U.S. patent application Ser. No. 10/293,469 provides a security checkpoint model 10, as depicted in FIG. 
4D, having two or more processes, such as entering the security checkpoint in step 11, screening items in step 12, and screening people in step 13. This security checkpoint model is more similar to an actual security checkpoint. Each of the steps 11, 12, and 13 may be separately simulated to produce output values as described above. Thus, each of the steps 11, 12, and 13 may be separately modeled black-boxes. For instance, a user may define rules for simulating output values for each of the steps 11, 12, and 13. To model changes in the checkpoint, the values or distribution for steps 11, 12, or 13 may be adjusted. By adjusting values for separate steps, the passenger checkpoint model 10 more accurately approximates changes in a passenger checkpoint). While suggested in at least Fig. 9 and related text, Robertson does not explicitly disclose … by a processor configured to execute instructions stored in a memory…; …by the processor…; …by the processor…; executing a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors; periodically retraining the machine learning model based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; accepting, by the machine learning model, the observed wait times and queue lengths as initial starting conditions; incrementally updating, by the machine learning model, queue lengths at each stage to the start of the next period by adding any arrivals during a previous period and subtracting throughput for the respective stage based on a
number of customers served; …the machine learning model…; …the machine learning model… However, Sahay discloses …by a processor configured to execute instructions stored in a memory…; …by the processor…; …by the processor… (Sahay, ¶ 35, The wait time client 132 may be stored in any type of memory that may or may not be integrated with the computing device 135. In some embodiments, the wait time client 132 may be stored in a universal serial bus (USB) flash drive that is connected to a USB port of the computing device 135. The computing device 135 may be any type of device capable of executing application programs), (Id., ¶ 36, FIG. 1B is a more detailed illustration of the computing device 135 of FIG. 1A, according to various embodiments. As shown, computing device 135 includes, without limitation, a processing unit 190, input/output (I/O) devices 192, and a memory unit 194. Memory unit 194 includes the wait time client 132 and is configured to interact with a sensor database 196), (Id., ¶ 50, In the manual entry mode, both the wait start time (i.e., the time that the queue is entered) and the wait end time 244 (the time that the queue is exited) are entered via a user interface as part of the client-specific input 250 (discloses parameter input interface). Upon receiving a wait start time, the measured wait time calculator 240 stores the wait start time. Subsequently, upon receiving the corresponding wait end time 244, the measured wait time calculator 240 subtracts the stored wait start time from the wait end time 244 to determine the measured wait time 242. The measured wait time calculator 240 then transmits the measured wait data 245 to the wait time server 150. The measured wait data 245 includes, without limitation, the measured wait time 242, the wait end time 244, and the point-of-interest 246.
The measured wait time calculator 240 assigns the point-of-interest 246 in any technically feasible fashion, such as entry via the user interface, search engine results, or global positioning system (GPS) data); accepting, by … the observed wait times and queue lengths as initial starting conditions; incrementally updating, by …, queue lengths at each stage to the start of the next period by adding any arrivals during a previous period and subtracting throughput for the respective stage based on a number of customers served (Id., ¶ 36, FIG. 1B is a more detailed illustration of the computing device 135 of FIG. 1A, according to various embodiments. As shown, computing device 135 includes, without limitation, a processing unit 190, input/output (I/O) devices 192, and a memory unit 194. Memory unit 194 includes the wait time client 132 and is configured to interact with a sensor database 196), (Id., ¶ 30, Based on the input the wait time client 132 receives, the wait time client 132 may interpret a location corresponding to the computing device 135 that is executing the wait time client 132 as the point-of-query 180, the point-of-interest 110, both, or neither. More specifically, as the wait time client 132 generates measured wait data, the wait time client 132 relays the measured wait data and the current location—interpreted as the point-of-interest 110—to the wait time server 150 for incorporation into the crowdsourced wait data 160. To generate predicted wait times, the wait time server 150 processes the updated crowdsourced wait data 160 based on a wait time predictive model 165.
The wait time predictive model 165 may implement any technically feasible algorithm designed to aggregate the crowdsourced wait data 160 into discerning predicted wait data), (Id., ¶ 47, In a complementary fashion to the predicted time calculator 270, the measured wait time calculator 240 determines a wait end time 244 and a wait start time for a queue located at the point-of-interest 110. The wait time calculator 240 then subtracts the wait start time from the wait end time 244 to determine a measured wait time 242. After determining the measured wait time 242, the wait time calculator transmits the measured wait data 245 to the wait time server 150. As shown, the measured wait data 245 includes, without limitation, the measured wait time 242, the point-of-interest 110, and the wait end time 244. Upon receiving the measured wait data 245, the wait time server 150 updates the crowdsourced wait data 160 (discloses incrementing model with updated queue length and wait time data) and/or the wait time predictive model 165 to reflect the measured wait time data 245), (Id., ¶ 36, FIG. 1B is a more detailed illustration of the computing device 135 of FIG. 1A, according to various embodiments. As shown, computing device 135 includes, without limitation, a processing unit 190, input/output (I/O) devices 192, and a memory unit 194. Memory unit 194 includes the wait time client 132 and is configured to interact with a sensor database 196). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson to include the queue and wait time interface elements of Sahay in the analogous art of crowdsource-based wait time estimation.
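The stage-by-stage update recited in Claim 1 (the number served in each interval taken as the minimum of the open stations times the user-configured per-station service rate, the initial queue plus those arriving, and the downstream capacity when the subsequent stage's buffer is full) reduces to a short simulation loop. The sketch below is the editor's illustration of that recursion only; all function and parameter names are hypothetical and are drawn from neither reference nor the application.

```python
# Editor's sketch of the claimed stage-by-stage queue update (hypothetical
# names and parameters; not code from Robertson, Sahay, or the application).

def step_interval(queues, arrivals, open_stations, rate_per_station, buffer_cap):
    """Advance every stage by one time interval, first-come first-served.

    queues[i]           -- customers waiting at stage i at the interval start
    arrivals            -- customers arriving at stage 0 during the interval
    open_stations[i]    -- stations open at stage i
    rate_per_station[i] -- per-station throughput (user-configured service rate)
    buffer_cap[i]       -- queue space in front of stage i (None = unbounded)
    """
    n = len(queues)
    inflow = arrivals
    served_per_stage = []
    for i in range(n):
        available = queues[i] + inflow          # initial queue plus those arriving
        capacity = open_stations[i] * rate_per_station[i]
        # When the next stage's buffer is full, throughput is throttled to the
        # space that stage frees up this interval, i.e. its own service rate.
        if i + 1 < n and buffer_cap[i + 1] is not None:
            downstream_cap = open_stations[i + 1] * rate_per_station[i + 1]
            space = buffer_cap[i + 1] - queues[i + 1] + downstream_cap
            capacity = min(capacity, max(space, 0))
        served = min(capacity, available)       # the claimed "minimum of" rule
        queues[i] = available - served          # carry the remainder forward
        served_per_stage.append(served)
        inflow = served                         # this stage feeds the next one
    return queues, served_per_stage
```

Carrying each stage's output forward as the next stage's arrivals is what makes the sketch first-come first-served across the network of queues: a passenger's predicted wait is simply the number of intervals needed to progress through every stage.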
The motivation for doing so would have been to provide “more effective techniques for estimating wait times…”, wherein a “user may select a planning option that optimizes the predicted wait time or a planning option that optimizes an overall errand time (i.e., predicted travel time and predicted wait time)” (Sahay, ¶¶ 9, 12), wherein such improvements would benefit Robertson’s system, which seeks to “achieve numerous desired results, including lower total personnel costs; reduced numbers of full-time employees (FTE); greater diversity in the workforce (through the use of part-time or seasonal employees); improved cost effectiveness while at least maintaining the customer service level; the creation of consistency in staffing and scheduling; the development of rule-driven, repeatable schedules; maximizing employee morale; reducing costs associated with scheduling; reducing the costs of creating and maintaining schedules” [Sahay, ¶¶ 9, 12; Robertson, ¶ 119]. While suggested, the combination of Robertson and Sahay does not explicitly disclose … executing a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors; periodically retraining the machine learning model based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; …the machine learning model…; …the machine learning model…; …the machine learning model… However, through KSR Rationale D (See MPEP 2141(III)(D)), the combination of Robertson and Szeto discloses executing a machine learning model configured to estimate
adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors. First, Robertson discloses an initial passenger arrival prediction, as well as shift optimization techniques for minimizing error between passenger arrival predictions and observed wait times and queue lengths (Sahay, ¶ 30, Based on the input the wait time client 132 receives, the wait time client 132 may interpret a location corresponding to the computing device 135 that is executing the wait time client 132 as the point-of-query 180, the point-of-interest 110, both, or neither. More specifically, as the wait time client 132 generates measured wait data, the wait time client 132 relays the measured wait data and the current location—interpreted as the point-of-interest 110—to the wait time server 150 for incorporation into the crowdsourced wait data 160. To generate predicted wait times, the wait time server 150 processes the updated crowdsourced wait data 160 based on a wait time predictive model 165. The wait time predictive model 165 may implement any technically feasible algorithm designed to aggregate the crowdsourced wait data 160 into discerning predicted wait data), (Robertson, ¶ 34, FIG. 5 depicts an exemplary demand curve 500 representing the demand attributable to a single event at 6 PM, such as a flight or a public event. In curve 500, increasing numbers of people arrive at the checkpoint before 6 PM, but the number of the people drops off rapidly thereafter (discloses passenger arrival predictions)), (Id., ¶ 36, The number of passengers arriving at the security checkpoint may be divided into fixed time periods (discloses number of passengers in queue including those arriving), such as 30-minute intervals.
The average demand during each of the periods may then be displayed, as illustrated in total demand curve 600' in FIG. 6B, as the horizontal line in each of the boxes. The overall number of passengers during the time period will be the area of the box, or the average demand multiplied by the time period), (Id., ¶ 49, As described in U.S. application Ser. No. 10/293,469, the models 2 and 10 may also be used to calculate the effect of policy changes, such as estimating the impact of adding another security test or incorporating different security equipment. Specifically, the model supports data modeling and simulation by providing quantitative modeling support and analysis to develop fact-based recommendations for policy decisions. For example, the model 10 may be used to simulate checkpoint staffing requirements such as a required number of wanders, bag searchers, etc. for various checkpoint configurations. The model 10 may also be used to simulate checkpoint equipment requirements, such as a required number of X-ray machines for various station configurations. The model 10 may further be used to recommend checkpoint staffing for peak volume and non-peak operations (discloses staffing predictions to minimize error based on passenger arrival and wait times). Similarly, the model 10 may be used to assess (1) continuous (random) policy compliance levels for security devices; (2) the impact of alternative, gender-based scanning policies; (3) the impact of eliminating or adding various screening steps in the security checkpoint; (4) the impact of check-in counter wait time on security checkpoint demand; or (5) the impact of reduced station staffing on checkpoint operations). Further, Szeto discloses the use of a machine learning model to generate a hybrid predictive engine to generate and refine predictions (Szeto, ¶ 53, “Algorithm” refers to an algorithmic component of a predictive engine for generating predictions and decisions.
The Algorithm component includes machine learning algorithms, as well as settings of algorithm parameters that determine how a predictive model is constructed (discloses machine learning model for predictions). A predictive engine may include one or more algorithms, to be used independently or in combination. Parameters of a predictive engine specify which algorithms are used, the algorithm parameters used in each algorithm, and how the results of each algorithm are congregated or combined to arrive at a prediction engine result, also known as an output or prediction), (Id., ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors), (Id., ¶ 230, Further disclosed are methods and systems for monitoring and replaying queries, predicted results, subsequent end-user actions/behaviors, or actual results, and internal tracking information for determining factors that affect the performance of the machine learning system. For example, iterative replay of dynamic queries, corresponding predicted results, and subsequent actual user actions may provide to operators insights into the tuning of data sources, algorithms, algorithm parameters, as well as other system parameters that may affect the performance of the machine learning system. Prediction performances may be evaluated in terms of prediction scores and visualized through plots and diagrams. By segmenting available replay data, prediction performances of different engines or engine variants may be compared and studied conditionally for further engine parameter optimization), (Id., ¶ 259, Prediction result 445 and evaluation result 455 can be passed to other components within a PredictionIO or machine learning server.
As discussed previously, a PredictionIO or machine learning server is a predictive engine deployment platform that enables developers to customize engine components, evaluate predictive models, and tune predictive engine parameters to improve performance of prediction results. A PredictionIO or machine learning server may also maintain adjustment history (discloses adjustment factors) in addition to prediction and evaluation results for developers to further customize and improve each component of an engine for specific business needs). One of ordinary skill in the art would have recognized that applying the known machine learning techniques of Szeto would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the machine learning techniques of Szeto to the passenger arrival prediction elements of Robertson would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such passenger arrival data processing features into similar prediction systems. Further, applying iterative machine learning algorithms to Robertson with parameter data and adjustment factors considered accordingly, would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow more optimal staffing based on passenger arrival predictions. Thus, through KSR Rationale D (See MPEP 2141(III)(D)), the combination of Robertson and Szeto discloses executing a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors. 
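For illustration only, the "adjusting factors" limitation mapped above can be expressed as a short sketch (hypothetical names and data; this code appears in neither Robertson nor Szeto): a deterministic initial arrival prediction is scaled by a factor fitted by least squares against real-time observations, and the scaled series is the hybrid prediction.

```python
# Illustrative sketch only -- hypothetical names; not taken from Robertson or Szeto.
# A scalar adjusting factor `a` is chosen to minimize sum((a*p - o)**2) between
# the initial prediction p and the observed values o; applying `a` yields the
# hybrid arrival prediction.

def fit_adjusting_factor(initial, observed):
    """Closed-form least-squares factor: a = sum(p*o) / sum(p*p)."""
    num = sum(p * o for p, o in zip(initial, observed))
    den = sum(p * p for p in initial)
    return num / den if den else 1.0

def hybrid_prediction(initial, observed):
    """Adjust the initial prediction by the fitted factor."""
    a = fit_adjusting_factor(initial, observed)
    return [a * p for p in initial]
```

Here the "machine learning model" is reduced to a one-parameter least-squares fit purely to make the error-minimization step concrete; the cited references describe much richer predictive engines.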
Szeto further discloses …periodically retraining the machine learning model based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; …the machine learning model…; …the machine learning model…; …the machine learning model…; …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors), (Id., ¶ 209, Such trained models are updated regularly as new data comes in from actual user experiences and actual transaction data and it is desirable to train a new model periodically based on such data. (discloses adjusting retraining intervals based on performance metrics) For instance, perhaps 30 days worth of new data may be utilized to train or re-train a given prediction model. A week later, still more data is available and so the developers may seek to again train the model using the new week's worth of additional data, or train the model with a month and a week's worth of data, or simply train the model using only the last 1-week period worth of data. Alternatively, the developers may simply seek to train the model utilizing a different range of data), (Id., ¶ 210, Regardless of the reasons or period selected, a new model is created with different data from before and will therefore have different machine learning and therefore different predictive results. 
Certain model updates include simply updating the model as new data becomes available in real time whereas other update schemes involve batch updates, such as daily, or every 12 hours, and so forth). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism. The motivation for doing so would have been “to help developers understand particular behaviors of engine variants of interest, and to tailor and improve prediction engine design” (Szeto, ¶ 234), wherein such improvements to prediction modeling would benefit Sahay’s method which seeks to provide “more effective techniques for estimating wait times…”, wherein a “user may select a planning option that optimizes the predicted wait time or a planning option that optimizes an overall errand time (i.e., predicted travel time and predicted wait time)” (Sahay, ¶¶ 9, 12), and wherein such improvements would further benefit Robertson’s system which seeks to “achieve numerous desired results, including lower total personnel costs; reduced numbers of full-time employees (FTE); greater diversity in the workforce (through the use of part-time or seasonal employees); improved cost effectiveness while at least maintaining the customer service level; the creation of consistency in staffing and scheduling; the development of rule-driven, repeatable schedules; maximizing employee morale; reducing costs associated with scheduling; reducing the costs of creating and maintaining schedule” [Szeto, ¶ 234; Sahay, ¶¶ 9, 12; Robertson, ¶ 119]. 
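As a concrete, purely illustrative rendering of the retraining limitation discussed above, the trigger logic might look like the following sketch; the class name, thresholds, and the halve/relax policy are all hypothetical and are not drawn from Szeto:

```python
# Illustrative sketch only -- hypothetical names and policy; not from Szeto.
# Retraining fires when accumulated observations reach a threshold OR a
# configurable interval elapses; the interval itself adapts to a performance
# metric (shortened when error exceeds target, relaxed otherwise).

class RetrainScheduler:
    def __init__(self, sample_threshold=500, interval_hours=24.0,
                 target_error=5.0):
        self.sample_threshold = sample_threshold
        self.interval_hours = interval_hours
        self.target_error = target_error
        self.samples_since_retrain = 0
        self.hours_since_retrain = 0.0

    def record(self, n_samples, hours_elapsed):
        """Accumulate newly observed wait-time/queue-length samples."""
        self.samples_since_retrain += n_samples
        self.hours_since_retrain += hours_elapsed

    def should_retrain(self):
        return (self.samples_since_retrain >= self.sample_threshold
                or self.hours_since_retrain >= self.interval_hours)

    def after_retrain(self, mean_abs_error):
        """Adapt the interval to the post-retrain performance metric."""
        if mean_abs_error > self.target_error:
            self.interval_hours = max(1.0, self.interval_hours / 2)
        else:
            self.interval_hours = min(168.0, self.interval_hours * 1.5)
        self.samples_since_retrain = 0
        self.hours_since_retrain = 0.0
```

The sketch only fixes the trigger and adaptation logic; what "retraining" means (data window, algorithm, parameters) is left to the predictive engine, consistent with Szeto's ¶¶ 209-210 discussion of alternative update schemes.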
Regarding Claim 2, the combination of Robertson, Sahay and Szeto discloses …The method platform of claim 1… Robertson further discloses … further comprising: exploring a hypothetical "what if" scenario created by an end user, including: receiving, by the processor, manually adjusted input parameters overriding the observed wait times and queue lengths at the multi-station and multi-stage screening zones (Robertson, ¶ 40, The security checkpoint may be modeled in step 420 using a certain number of open stations. The security checkpoint is then modeled again using a different number of open stations. The results from the two models may be compared to choose a desirable number of open stations. Typically, reducing the number of stations is detrimental to service measures, such as waiting time, but reduced employment costs. In this way, the model may then be used to provide a fact-based forecast of the varying number of stations. It should be appreciated that the modeling of the security checkpoint does not schedule workers. Instead, the model provides an optimal number of open stations per time period as needed to meet various service measures (and thus, the optimal number of security workers for each of the time periods). The actual staffing of the security workers is described below), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 44, In a preferred embodiment of the present invention, the security checkpoint is modeled as described in co-owned U.S. patent application Ser. No. 
10/293,469 entitled SECURITY CHECKPOINT SIMULATION, the disclosure of which is hereby incorporated by reference in full. U.S. patent application Ser. No. 10/293,469 provides a security checkpoint model 10, as depicted in FIG. 4D, having two or more processes, such as entering the security checkpoint in step 11, screening items in step 12, and screening people in step 13. This security checkpoint model is more similar to an actual security checkpoint. Each of the steps 11, 12, and 13 may be separately simulated to produce output values as described above. Thus, each of the steps 11, 12, and 13 may be separately modeled black-boxes. For instance, a user may define rules for simulating output values for each of the steps 11, 12, and 13. (discloses overriding observed wait times and queue lengths) To model changes in the checkpoint, the values or distribution for steps 11, 12, or 13 may be adjusted. By adjusting values for separate steps, the passenger checkpoint model 10 more accurately approximates changes in a passenger checkpoint), (Id., ¶ 45, One or more of the steps 11, 12, and 13 may be further decomposed into one or more separate substeps. Then, each of the substeps of steps 11, 12, and 13 may be separately modeled processes having user-defined rules for simulating output values, which are aggregated to produce total output values for steps 11, 12, and 13.), (Id., ¶ 53, the model produced in step 420 may be used to determine the impact of changing the number of stations. Using this model, a decision maker may determine the number of stations needed at the security checkpoint at various different times, step 430. Likewise, the model may be used to allocate security machinery at the checkpoint. These decisions are typically made to achieve various performance measures of the security checkpoint, and the desired number of stations will be the smallest number needed to achieve the desired performance measure. 
For example, the security checkpoint may have a maximum desired wait time (such as 10 minutes) during peak periods on average or busy days, and the effective work schedule staffs the number of stations as needed to achieve this wait time during different time periods. In this way, this demand data is then used to determine the number of needed stations, step 430); processing the manually adjusted input parameters via the algorithm of… to output new predicted wait times and queue lengths at the multi-station and multi-stage screening zones (Id., ¶ 43, In this way, the black-box-style security checkpoint model 2 aggregates together the individual tasks and processes occurring in the security checkpoint to determine output values. While the black-box-style security checkpoint model 2 illustrated in FIG. 4C is able to simulate an existing security checkpoint, this type of model has a limited ability to predict the effects of changes in the individual tasks and processes occurring in the checkpoint. Specifically, the black-box model 2 does not match up resources to activities in the checkpoint. While someone may attempt to use the black-box model 2 to predict the effects of changes by varying the output value ranges or the distribution of the values, the predictive accuracy of the black-box model 2 is generally poor. In particular, the effects of changes in one or more of the individual tasks and processes occurring in the security checkpoint are not easily represented through the black-box model 2 because these the individual tasks and processes are not separately replicated), (Id., ¶ 45, One or more of the steps 11, 12, and 13 may be further decomposed into one or more separate substeps. 
Then, each of the substeps of steps 11, 12, and 13 may be separately modeled processes having user-defined rules for simulating output values, which are aggregated to produce total output values for steps 11, 12, and 13.); and displaying the new predicted wait times and queue lengths in fulfillment of the hypothetical "what if" scenario created by the end user (Id., ¶ 44, In a preferred embodiment of the present invention, the security checkpoint is modeled as described in co-owned U.S. patent application Ser. No. 10/293,469 entitled SECURITY CHECKPOINT SIMULATION, the disclosure of which is hereby incorporated by reference in full. U.S. patent application Ser. No. 10/293,469 provides a security checkpoint model 10, as depicted in FIG. 4D, having two or more processes, such as entering the security checkpoint in step 11, screening items in step 12, and screening people in step 13. This security checkpoint model is more similar to an actual security checkpoint. Each of the steps 11, 12, and 13 may be separately simulated to produce output values as described above. Thus, each of the steps 11, 12, and 13 may be separately modeled black-boxes), (Id., ¶ 45, One or more of the steps 11, 12, and 13 may be further decomposed into one or more separate substeps. Then, each of the substeps of steps 11, 12, and 13 may be separately modeled processes having user-defined rules for simulating output values, which are aggregated to produce total output values for steps 11, 12, and 13.). While suggested in at least Fig. 
2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …the machine learning model… However, Szeto discloses …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 4, the combination of Robertson, Sahay and Szeto discloses …The method platform of claim 1… Robertson further discloses … planned staffing levels for a multistage, parallel processor, finite queue, serial flow network… (Robertson, ¶ 28, In most checkpoints, security workers are typically staffed using block scheduling. FIG. 2B. schematically represents a block scheduling scheme in which a certain number of security workers are employed from 12AM to 12PM, and a second number of security workers are employed from 12PM to 12AM. It should be appreciated that most checkpoints are not staffed in twelve-hour blocks, and that this example is provided merely for illustration. In FIG. 2B, line 210' represents the number of required security employees (corresponding to FIG. 2A), and line 220' represents the number of security employees working in the twelve-hour block scheduling scheme. 
With block scheduling, the security workers are typically understaffed at times, and overstaffed at other times. As described above, overstaffing is inefficient and results in excessive labor costs, while understaffing results in excessive delays as the security workers are unable to meet demand for security screening), (Id., ¶ 50, The data modeling provides analytical support for security checkpoint operations focusing on resources requirements (equipment & staffing), process performance, customer experience and cost. For instance, the model 10 may be modified to provide analytical support for various resource requirement policy concerns such as: Employee work rules (impact of number of breaks, lunch, training etc.); reduced checkpoint staffing requirements (impacts of reduced staff on checkpoint operations); reduced airport staffing requirements (optimized scheduling of shared resources across airport); new staffing requirements based on process changes (i.e. checkpoint selectee screening); or annual labor planning based on seasonal demand (Workforce management on annual basis). Specifically, the addition/subtraction of requirements in a checkpoint may be modeled through the addition/elimination of substeps in the model 10). While suggested in at least Fig. 2B, Robertson does not explicitly disclose …further comprising: generating, with the machine learning model, the predicted wait times and queue lengths by converting a dynamic stream of customer arrivals and… into estimates of queue lengths and throughput times at each processing stage at each point in time. 
However, Sahay further discloses … further comprising: generating, with … the predicted wait times and queue lengths by converting a dynamic stream of customer arrivals and… into estimates of queue lengths and throughput times at each processing stage at each point in time (Sahay, ¶ 45, the predicted wait time calculation functionality described herein may be implemented using any type of algorithms as known in the art and distributed in any technically feasible, consistent fashion between the wait time server 150 and the wait time client 132. For example, and without limitation, in some embodiments, the wait time client 132 may not perform any predicted wait time calculations. In such embodiments, the wait time client 132 may omit the predicted wait time calculator 270, forgo downloading the predicted wait data 275, and relay client-specific input 250 directly to the wait time server 150 at run-time. Such client-specific input 250 may include, without limitation, predicted wait time queries and associated information such as the point-of-interest 110, the point-of-query 180, the wait start time, and the like. The wait time server 150 then calculates the predicted wait time 285 corresponding to the query and transmits the predicted wait time 285 to the wait time client 132), (Id., ¶ 51, In the proximity sensor mode, the wait start time is detected via a sensor that is located at the entry of the queue, and the wait end time 244 is detected via another sensor that is located at the exit of the queue. (discloses stream of customer arrivals and customer throughput for generating predicted wait times and queue lengths) The sensors may be implemented in any technically feasible fashion that enables communication with the mobile device 130 that executes the wait time client 132. 
In some embodiments, the mobile device 130 may be a Bluetooth enabled device, such as a smartphone, and the sensors may be Bluetooth Low Energy (BLE) sensors that are installed at the point-of-interest 110. The sensors may transmit information, such as the point-of-interest 110 and the type of sensor (entry sensor or exit sensor) to the mobile device 130 using any technique as known in the art. Such techniques may include a notification beacon, transmission of a “sensor-encountered” message, and so forth. For example, in some embodiments, the sensors may transmit a unique identification (ID), and the wait time server 150 is configured to associate each ID with either an entry to a queue or an exit from a queue. Further, in some embodiments, the transmission may include the time of encounter. In other embodiments, the mobile device 130 may store the time that the transmission is received as a timestamp), (Id., ¶ 25, The public cloud 102 provides access to encapsulated shared resources (e.g., software applications, data, etc.) over a public network, such as the Internet, to clients at the point-of-query 180 and the point-of-interest 110. In alternate embodiments, the public cloud 102 may provide access to any type of shared resource to any number of clients at any number of stationary and mobile locations in any combination. In other embodiments, the public cloud 102 may be replaced with any type of cloud computing environment, such as a private or a hybrid cloud. As shown, the public cloud 102 includes a telematics server 170 that communicates with a navigation system 182 located at the point-of-query 180. For example, and without limitation, a telematics server provided by a car manufacturer could provide communication and control capabilities (e.g., Global Positioning System (GPS) functionality, remote door unlocking services, etc.) to a car navigation system “on-board” a moving vehicle). 
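Sahay's proximity-sensor mode, as quoted above, amounts to pairing an entry-sensor timestamp with the matching exit-sensor timestamp for each device to obtain a measured wait. A minimal sketch (hypothetical event format; not Sahay's actual implementation):

```python
# Illustrative sketch only -- hypothetical event format; not from Sahay.
# Each event is (device_id, sensor_type, timestamp_seconds), where the entry
# sensor sits at the queue entry and the exit sensor at the queue exit.

def measured_waits(events):
    """Pair entry/exit timestamps per device; return {device_id: wait_seconds}."""
    entries, waits = {}, {}
    for device_id, sensor_type, ts in sorted(events, key=lambda e: e[2]):
        if sensor_type == "entry":
            entries[device_id] = ts
        elif sensor_type == "exit" and device_id in entries:
            waits[device_id] = ts - entries.pop(device_id)
    return waits
```

Events are sorted by timestamp first so that an exit transmission received out of order still pairs with the earlier entry, mirroring the quoted alternative of the mobile device storing the receipt time as a timestamp.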
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson to include the queue and wait time elements of Sahay in the analogous art of crowdsourced-based wait time estimates for the same reasons as stated for claim 1. While suggested in at least Fig. 2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …the machine learning model… However, Szeto discloses …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 5, the combination of Robertson, Sahay and Szeto discloses …The method platform of claim 1… Robertson further discloses …wherein … is configured to provide one or more point estimates based on a deterministic algorithm (Robertson, ¶ 49, As described in U.S. application Ser. No. 10/293,469, the models 2 and 10 may also be used to calculate the effect of policy changes such as estimating the impact of adding another security test or incorporating different security equipment. 
Specifically, the model supports data modeling and simulation by provided quantitative modeling support and analysis to develop fact-based recommendations for policy decisions. For example, the model 10 may be used to simulate checkpoint staffing requirements such as a required number of wanders, bag searchers, etc. for various checkpoint configurations. The model 10 may also be used to simulate checkpoint equipment requirements, such a required number of X-Rays machines for various station configurations. The model 10 may further be used to recommend checkpoint staffing for peak volume and non-peak operations. Similarly, the model 10 may be used to assess (1) continuous (random) policy compliance levels for security devices; (2) the impact of alternative, gender based scanning policies; (3) the impact of eliminating or adding various screening steps in the security checkpoint; (4) the impact of check-in counter wait time on security checkpoint demand; or (5) the impact of reduced station staffing on checkpoint operations), (Id., ¶ 50, The data modeling provides analytical support for security checkpoint operations focusing on resources requirements (equipment & staffing), process performance, customer experience and cost. For instance, the model 10 may be modified to provide analytical support for various resource requirement policy concerns such as: Employee work rules (impact of number of breaks, lunch, training etc.); reduced checkpoint staffing requirements (impacts of reduced staff on checkpoint operations); reduced airport staffing requirements (optimized scheduling of shared resources across airport); new staffing requirements based on process changes (i.e. checkpoint selectee screening); or annual labor planning based on seasonal demand (Workforce management on annual basis). 
Specifically, the addition/subtraction of requirements in a checkpoint may be modeled through the addition/elimination of substeps in the model 10), (Id., ¶ 58, As depicted above in FIG. 7A, the number of desired stations may vary greatly between peak and non-peak periods, (discloses model providing point estimates) so the number of employees should vary correspondingly. Turning now to FIG. 8B, a first step in determining the desired number of workers in step 810 is to determine the minimum number of work hours needed to staff the desired number of stations, step 811. The number of workers is generally represented in worker-hours, corresponding the number of workers divided by the duration of the time periods of interest. For instance, if 30 worker-hours are required for a 30-minute period, then 60 (or 30 ÷ ½) workers are actually required. Thus, the number of needed stations may be represented in worker hours, as depicted in needed worker hour curve 710 in FIG. 7B. Worker hour curve 710 corresponds to open station curve 700 in FIG. 7A. In particular, as described above in FIGS. 1A and 1B and the accompanying text, the number of workers has a linear relationship to the number of open stations. For instance, where there are five workers per open station, then the total number of workers needed at a particular time equals five times the number of open stations at that time. Obviously, step 811 may easily adjust for other relationships between the number of open stations and the number of needed workers. For instance, some security stations are configured such that problems identified in a first station is addressed at a second station. In that instance, the number of workers is then a function of two or more stations such as requiring nine workers for each pair of security stations). While suggested in at least Fig. 
2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …the machine learning model… However, Szeto discloses …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 6, the combination of Robertson, Sahay and Szeto discloses …The method platform of claim 1… Robertson further discloses …further comprising: applying, with … a defined methodology for converting a dynamic forecast of expected arrivals and staffing levels into forecasts of queue lengths that will occur at each stage of the multi-station and multi-stage screening zones (Robertson, ¶ 49, As described in U.S. application Ser. No. 10/293,469, the models 2 and 10 may also be used to calculate the effect of policy changes such as estimating the impact of adding another security test or incorporating different security equipment. Specifically, the model supports data modeling and simulation by provided quantitative modeling support and analysis to develop fact-based recommendations for policy decisions. For example, the model 10 may be used to simulate checkpoint staffing requirements such as a required number of wanders, bag searchers, etc. for various checkpoint configurations. 
The model 10 may also be used to simulate checkpoint equipment requirements, such a required number of X-Rays machines for various station configurations. The model 10 may further be used to recommend checkpoint staffing for peak volume and non-peak operations. Similarly, the model 10 may be used to assess (1) continuous (random) policy compliance levels for security devices; (2) the impact of alternative, gender based scanning policies; (3) the impact of eliminating or adding various screening steps in the security checkpoint; (4) the impact of check-in counter wait time on security checkpoint demand; or (5) the impact of reduced station staffing on checkpoint operations), (Id., ¶ 50, The data modeling provides analytical support for security checkpoint operations focusing on resources requirements (equipment & staffing), process performance, customer experience and cost. For instance, the model 10 may be modified to provide analytical support for various resource requirement policy concerns such as: Employee work rules (impact of number of breaks, lunch, training etc.); reduced checkpoint staffing requirements (impacts of reduced staff on checkpoint operations); reduced airport staffing requirements (optimized scheduling of shared resources across airport); new staffing requirements based on process changes (i.e. checkpoint selectee screening); or annual labor planning based on seasonal demand (Workforce management on annual basis). Specifically, the addition/subtraction of requirements in a checkpoint may be modeled through the addition/elimination of substeps in the model 10), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. 
The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies). While suggested in at least Fig. 2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …the machine learning model… However, Szeto discloses …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 7, the combination of Robertson, Sahay and Szeto discloses …The method platform of claim 1… Robertson further discloses …further comprising: applying, with … applies a defined means for converting the observed queue lengths and wait times into an estimated throughput for each stage of the multi-station and multi-stage screening zones (Robertson, ¶ 25, Each of the stations is separately staffed with a number of employees as needed. For instance, a security station may use five employees, each manning a component of the security station (a walk-through metal detector, an x-ray machine, a hand-held metal detector, a station to manually search personal belongings, and an area to perform other security tests). 
Obviously, any number of people may be staffed to a station. A station may also be partially staffed, operating a lower level of throughput as the security workers are required to perform more than one function. Furthermore, additional workers may be staffed to a security checkpoint to improve the throughput (discloses throughput estimation) of that station. In this way, the capacity of security checkpoints generally corresponds to the number of security workers staffed at the security stations), (Id., ¶ 50, The data modeling provides analytical support for security checkpoint operations focusing on resources requirements (equipment & staffing), process performance, customer experience and cost. For instance, the model 10 may be modified to provide analytical support for various resource requirement policy concerns such as: Employee work rules (impact of number of breaks, lunch, training etc.); reduced checkpoint staffing requirements (impacts of reduced staff on checkpoint operations); reduced airport staffing requirements (optimized scheduling of shared resources across airport); new staffing requirements based on process changes (i.e. checkpoint selectee screening); or annual labor planning based on seasonal demand (Workforce management on annual basis). Specifically, the addition/subtraction of requirements in a checkpoint may be modeled through the addition/elimination of substeps in the model 10), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies). While suggested in at least Fig. 
2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …the machine learning model… However, Szeto discloses …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 8, the combination of Robertson, Sahay and Szeto discloses …The method platform of claim 1… While suggested in at least Fig. 2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …further comprising: generating as output, with the machine learning model, a set of tabular and graphical interface displays with predicted future performance of an overall security screening system made up of the multi-station and multi-stage screening zones. However, Sahay discloses … further comprising: generating as output, with … a set of tabular and graphical interface displays with predicted future performance of an overall security screening system made up of the multi-station and multi-stage screening zones (Sahay, ¶ 32, In various embodiments, the wait time client 132 may provide a user interface that enables the user to specify the point-of-interest 110. The user interface may include any amount of planning functionality. 
For example, and without limitation, the user interface could include a search function for specific types of businesses, a time optimization function to select the best time to run an errand based on both travel time and wait time, and so on), (Id., ¶ 33, To maximize opportunities to grow the crowdsourced wait data 160 and, consequently, increase the accuracy of the predicted wait times, the wait time client 132 provides various types of wait data gathering functionality. Notably, the wait time client 132 tailors the wait gathering operations to the available infrastructure and provides, without limitation, a manual entry mode, a proximity sensor mode, and a point-of-sale mode. As part of determining the measured wait time, in the manual entry mode, the wait time client 132 relies primarily on user input. In the proximity sensor mode, the wait time client 132 automatically incorporates data from sensors, such as Bluetooth Low Energy (BLE) sensors, located at the point-of-interest 110 into the measured wait time calculations), (Id., ¶ 39, FIG. 2 is a more detailed illustration of the wait time client 132 of FIG. 1A, according to various embodiments. In general, the wait time client 182 is integrated as part of a mobile or stationary computer-based device, such as the mobile device 130 or the navigation system 182. The computer-based device, among other things, provides general input data (e.g., sensor data, text messages, etc.) and client-specific input 250, such as user interface preferences, to the wait time client 182. The computer-based device also consumes output data, such as predicted wait time 285, that is generated by the wait time client 182. As part of processing input data and generating the predicted wait time 285, the wait time client 182 communicates with the wait time server 150 transmitting measured wait data 245 and receiving predicted wait data 275). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson to include the queue and wait time interface elements of Sahay in the analogous art of crowdsourced-based wait time estimates for the same reasons as stated for claim 1. While suggested in at least Fig. 2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …the machine learning model… However, Szeto discloses …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. 
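As context for the deterministic conversion recited in claim 7 (turning observed queue lengths and wait times into a per-stage throughput estimate), the step can be pictured as a straightforward application of Little's Law (throughput ≈ queue length ÷ wait time). The sketch below is purely illustrative; the function and field names are assumptions and are not drawn from the application or the cited references:

```python
def estimate_stage_throughput(observed):
    """Convert observed queue lengths and wait times into an estimated
    throughput per stage (illustrative sketch only; names are assumed).

    `observed` maps a stage name to (queue_length, wait_time_minutes).
    By Little's Law, throughput = queue_length / wait_time.
    """
    throughput = {}
    for stage, (queue_len, wait_min) in observed.items():
        if wait_min > 0:
            # Passengers served per minute at this stage.
            throughput[stage] = queue_len / wait_min
        else:
            # No observed wait: the rate is indeterminate from this data.
            throughput[stage] = 0.0
    return throughput

# Example: a two-stage screening zone (document check, then x-ray).
observed = {"document_check": (30, 10.0), "xray": (24, 8.0)}
print(estimate_stage_throughput(observed))
# {'document_check': 3.0, 'xray': 3.0}
```

Under this illustrative reading, each stage's estimate depends only on its own observed queue and wait over a stationary observation window.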
Regarding Claim 9, the combination of Robertson, Sahay and Szeto discloses …The method platform of claim 1… Robertson further discloses …further comprising: adding, with … probabilistic visits to workstations including at least one of: secondary screening, manual inspection stations, or diagnostics stations, when configured by the user via the user specified configuration selections (Robertson, ¶ 46, In another embodiment, the security checkpoint model 10 may also consider the effects of passenger check-in 14 on the passenger demand for security screening, as further described in the above-cited U.S. patent application Ser. No. 10/293,469. In general, an extended check-in period serves to buffer the security demand. Specifically, the security checkpoint model 10 may be adapted to consider processes occurring in an airport before a passenger enters a security checkpoint. Typically, certain percentages of passengers check-in at various check-in locations, such as curb check-in, counter check-in, or self-serve check-in. These percentages are predetermined and may be selected as needed, and if one of the check-in locations is not present in an airport of interest, its associated usage percentage may be set to zero. Alternatively, passengers may also choose to not check-in and instead proceed directly to the security checkpoint), (Id., ¶ 47, During the check-in process in step 14, the passenger may also check-in baggage, and a certain percentage of the baggage may then be screened. For instance, baggage may be screened using an Explosive Detection System (EDS). The EDS tests baggage for explosives by scanning the internal contents of baggage placed in the EDS. The percentage of the bags searched during check-in step 14 is predetermined and may be defined as specified above. If there is no desire to simulate the EDS or other methods of screening checked-in baggage, the percentage of passengers affected by these processes may be set to zero. 
Similarly, if the airport safety rules change to require screening of all baggage, the percentage may be increased to unity, or 100%). While suggested in at least Fig. 2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …the machine learning model… However, Szeto discloses …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 10, the combination of Robertson, Sahay and Szeto discloses …The method platform of claim 1… Robertson further discloses …further comprising: reducing, with … a queue in front of any arriving customer at any stage (workstation) based on that stage's effective processing rate until the time interval in which the queue reaches zero (Robertson, ¶ 96, Returning to FIG. 3, after the schedule is defined in step 800, the schedule is implemented and studied in step 310. In particular, the performance measures, such as the average and maximum wait time for people passing through the checkpoint may be measured. The results of the scheduling can be studied and this data may be used to modify the demand data and to create an associated schedule, step 320. 
For instance, the assumptions used to forecast the required number of employees may be modified according to actual events), (Id., ¶ 25, Each of the stations is separately staffed with a number of employees as needed. For instance, a security station may use five employees, each manning a component of the security station (a walk-through metal detector, an x-ray machine, a hand-held metal detector, a station to manually search personal belongings, and an area to perform other security tests). Obviously, any number of people may be staffed to a station. A station may also be partially staffed, operating a lower level of throughput as the security workers are required to perform more than one function. Furthermore, additional workers may be staffed to a security checkpoint to improve the throughput of that station. (discloses reducing queue based on processing rate) In this way, the capacity of security checkpoints generally corresponds to the number of security workers staffed at the security stations). While suggested in at least Fig. 2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …the machine learning model… However, Szeto discloses …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). 
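The queue-reduction step recited in claim 10 (together with the last-period interpolation recited in claim 11) can be pictured as draining the queue ahead of an arriving customer at the stage's effective processing rate, interval by interval, until it reaches zero. The following sketch is hypothetical; the names and the 30-minute default interval are assumptions, not taken from the application or Robertson:

```python
def predicted_wait(queue_ahead, rate_per_interval, interval_minutes=30.0):
    """Drain the queue ahead of an arriving customer at a stage's
    effective processing rate, interpolating linearly within the final
    (partial) interval to yield a wait time in minutes.
    (Hypothetical sketch; not the application's actual algorithm.)
    """
    if rate_per_interval <= 0:
        raise ValueError("effective processing rate must be positive")
    full_intervals = int(queue_ahead // rate_per_interval)
    leftover = queue_ahead - full_intervals * rate_per_interval
    # Fraction of the last interval needed to serve the remaining queue.
    fraction = leftover / rate_per_interval
    return (full_intervals + fraction) * interval_minutes

# Example: 25 passengers ahead, stage clears 10 per 30-minute interval.
print(predicted_wait(25, 10))  # 75.0
```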
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 11, the combination of Robertson, Sahay and Szeto discloses …The method platform of claim 1… Robertson further discloses …further comprising: applying, with … interpolation within the last period to determine a final throughput time (Robertson, ¶ 36, The number of passengers arriving at the security checkpoint may be divided into fixed time periods, such as 30-minute intervals. The average demand during each of the periods may then be displayed, as illustrated in total demand curve 600' in FIG. 6B, as the horizontal line in each of the boxes. The overall number of passengers during the time period will be the area of the box, or the average demand multiplied by the time period), (Id., ¶ 25, Each of the stations is separately staffed with a number of employees as needed. For instance, a security station may use five employees, each manning a component of the security station (a walk-through metal detector, an x-ray machine, a hand-held metal detector, a station to manually search personal belongings, and an area to perform other security tests). Obviously, any number of people may be staffed to a station. A station may also be partially staffed, operating a lower level of throughput as the security workers are required to perform more than one function. Furthermore, additional workers may be staffed to a security checkpoint to improve the throughput of that station. 
(discloses throughput determination) In this way, the capacity of security checkpoints generally corresponds to the number of security workers staffed at the security stations). While suggested in at least Fig. 2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …the machine learning model… However, Szeto discloses …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 12, the combination of Robertson, Sahay and Szeto discloses …The method platform of claim 1… Robertson further discloses …further comprising: summing, with … the wait times for each stage of multi-station and multi-stage screening zones multiplied by a probability of a passenger visiting that station (Robertson, ¶ 46, In another embodiment, the security checkpoint model 10 may also consider the effects of passenger check-in 14 on the passenger demand for security screening, as further described in the above-cited U.S. patent application Ser. No. 10/293,469. In general, an extended check-in period serves to buffer the security demand. 
Specifically, the security checkpoint model 10 may be adapted to consider processes occurring in an airport before a passenger enters a security checkpoint. Typically, certain percentages of passengers check-in at various check-in locations, such as curb check-in, counter check-in, or self-serve check-in. These percentages are predetermined and may be selected as needed, and if one of the check-in locations is not present in an airport of interest, its associated usage percentage may be set to zero. Alternatively, passengers may also choose to not check-in and instead proceed directly to the security checkpoint), (Id., ¶ 36, The number of passengers arriving at the security checkpoint may be divided into fixed time periods, such as 30-minute intervals. The average demand during each of the periods may then be displayed, as illustrated in total demand curve 600' in FIG. 6B, as the horizontal line in each of the boxes. The overall number of passengers during the time period will be the area of the box, or the average demand multiplied by the time period), (Id., ¶ 74, The coefficient matrix A created in step 822 may be used to define various staffing conditions. For example, the coefficient matrix A may define tours for part-time workers. The coefficient matrix A may also define conditions of employment, such as mandatory breaks; for instance, an additional set of entries may be created for a "break station," and each employee may be required to spend a certain amount of time in the break station. Similarly, a maximum shift may be defined by subdividing shifts into intervals and preventing shifts that exceed a predefined sum of intervals). While suggested in at least Fig. 
2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose …the machine learning model… However, Szeto discloses …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 13, Robertson discloses …A system for dynamically allocating resource in multi-stage screening zones using a hybrid predictive model of a Visual Analytics and Decision Support System platform (VADSS platform), wherein the system comprises: … receiving… observed wait times and queue lengths at multi-station and multi-stage screening zones; (Robertson, ¶ 98, Referring now to FIG. 9, another embodiment of the present invention provides an effective security scheduling system 900. As depicted in FIG. 9, the effective security scheduling system 900 generally includes separate modules that are interconnected to implement the steps in the effective security scheduling method 300. Specifically, the effective security scheduling system 900 includes a demand forecasting module 910. The demand forecasting modeling module 910 accepts input data related to the facility. 
For instance, security demand at an airport may be forecasted using flight schedules, flight capacity data, and predetermined demand distribution curves, as described above), (Id., ¶ 11, the present invention has specific application to staffing security checkpoints. In this embodiment, the number of needed open stations in security checkpoints is determined (discloses multi-station screening zones) by translating the variable demand for security at different times and using linear programming to optimize and determine a schedule as needed to staff the needed number of open stations), (Id., ¶ 25, Each of the stations is separately staffed with a number of employees as needed. For instance, a security station may use five employees, each manning a component of the security station (discloses multi-stage screening zones) (a walk-through metal detector, an x-ray machine, a hand-held metal detector, a station to manually search personal belongings, and an area to perform other security tests). Obviously, any number of people may be staffed to a station. A station may also be partially staffed, operating a lower level of throughput as the security workers are required to perform more than one function. Furthermore, additional workers may be staffed to a security checkpoint to improve the throughput of that station. In this way, the capacity of security checkpoints generally corresponds to the number of security workers staffed at the security stations), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) 
based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies); receive… user specified configuration selections for processing the wait times and queue lengths (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 100, Continuing with FIG. 9, a schedule-defining module 930 uses user-defined inputs (discloses interface for configuration selections) and the outputs from the demand forecasting module 910 and the checkpoint simulation module 920 to create a security work schedule. As described above in FIG. 8B and the associated text, the user-defined inputs generally include data related to the number of security workers and the condition of work for these workers. This type of information includes the shift length, possible starting and end times, shift frequency, breaks, etc. associated with each of the workers. Furthermore, the user-defined inputs may include constraints limiting potential staffing configurations, such as limiting the staffing of certain positions to workers with sufficient employees); execute a mechanistic model to generate an initial passenger arrival prediction based on a business fundamentals data set defining one or more of flight departure schedules, airplane capacities, and expected number of passengers (Id., ¶ 42, The black-box security checkpoint model 2 (discloses model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. 
For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 34, FIG. 5 depicts an exemplary demand curve 500 representing the demand attributable to a single event at 6 PM, such as a flight or a public event. In curve 500, increasing numbers of people arrive at the checkpoint before 6 PM, but the number of the people drops off rapidly thereafter (discloses passenger arrival predictions)), (Id., ¶ 37, It should be appreciated that the above-described method for estimating demand at the security checkpoint, while presented in the context of an airport or seaport, may be used in a variety of circumstances. For instance, the above-described method may be used to determine security screening demand at a large volume event, such as a concert or sports contest. The total number of people may then be estimated as the number of ticket-holders minus forecasted non-attendance. 
The instantaneous demand at the security checkpoint may then be determined at using a demand curve for the event), (Id., ¶ 107, The changes in the needed number of workers over an extended period may be predicted through the forecasting the needed number of security stations in step 400 and defining an effective schedule in step 800, both over the extended period of interest. For instance, needed number of security stations at an airport may be forecasted over an extended period to form the extended needed worker graph 1000 by examining the number of flights departing from the airport, (discloses flight departure schedules) the load factors for these flights, (discloses airplane capacities) etc. as described above in FIG. 4B and the associated text), (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. 
Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport); apply… using the hybrid arrival prediction, an algorithm to yield future predicted wait times and queue lengths at the multi-station and multi-stage screening zones based at least in part on the observed wait times and queue lengths and the user specified configuration selections (Id., ¶ 42, The black-box security checkpoint model 2 (discloses analytical model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) 
based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 60, Returning to FIG. 8A, an effective schedule is formed in step 820 using the worker data from steps 811, 812, and 813. In the field of employee staffing and scheduling, several techniques are known to create an optimized schedule using the worker data, such as the information described above in steps 811, 812, and 813. For instance, an optimized schedule for a security checkpoint may be formed using linear programming, quadratic or mixed-integer programming, nonlinear optimization, global optimization, non-smooth optimization using genetic and evolutionary algorithms, and constraint programming methods from artificial intelligence); sequentially process each of the stages of the multi-station and multi- stage screening zones to compute a number served at each stage during a time interval as the minimum of a service capacity based on (i) a number of service stations open and based further on (ii) a service rate per station provided by the user specified configuration selections, (iii) a number of initial customers in queue plus those arriving, and (iv) a service rate of a subsequent workstation when the subsequent station buffer space is full (Id., ¶ 40, The security checkpoint may be modeled in step 420 using a certain number of open stations (discloses number of service stations open). The security checkpoint is then modeled again using a different number of open stations. The results from the two models may be compared to choose a desirable number of open stations. Typically, reducing the number of stations is detrimental to service measures, such as waiting time, but reduced employment costs. In this way, the model may then be used to provide a fact-based forecast of the varying number of stations. It should be appreciated that the modeling of the security checkpoint does not schedule workers. 
Instead, the model provides an optimal number of open stations per time period as needed to meet various service measures (and thus, the optimal number of security workers for each of the time periods). The actual staffing of the security workers is described below), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 42, The black-box security checkpoint model 2 functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. (discloses specified service rates per station) Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. 
The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 36, The number of passengers arriving at the security checkpoint may be divided into fixed time periods (discloses number of passengers in queue including those arriving), such as 30-minute intervals. The average demand during each of the periods may then be displayed, as illustrated in total demand curve 600' in FIG. 6B, as the horizontal line in each of the boxes. The overall number of passengers during the time period will be the area of the box, or the average demand multiplied by the time period), (Id., ¶ 49, As described in U.S. application Ser. No. 10/293,469, the models 2 and 10 may also be used to calculate the effect of policy changes such as estimating the impact of adding another security test or incorporating different security equipment. Specifically, the model supports data modeling and simulation by providing quantitative modeling support and analysis to develop fact-based recommendations for policy decisions. For example, the model 10 may be used to simulate checkpoint staffing requirements such as a required number of wanders, bag searchers, etc. for various checkpoint configurations. The model 10 may also be used to simulate checkpoint equipment requirements, such as a required number of X-Ray machines for various station configurations. The model 10 may further be used to recommend checkpoint staffing for peak volume and non-peak operations. 
Similarly, the model 10 may be used to assess (1) continuous (random) policy compliance levels for security devices; (2) the impact of alternative, gender based scanning policies; (3) the impact of eliminating or adding various screening steps in the security checkpoint; (4) the impact of check-in counter wait time on security checkpoint demand; or (5) the impact of reduced station staffing on checkpoint operations), (Id., ¶ 50, The data modeling provides analytical support for security checkpoint operations focusing on resource requirements (equipment & staffing), process performance, customer experience and cost. For instance, the model 10 may be modified to provide analytical support for various resource requirement policy concerns such as: Employee work rules (impact of number of breaks, lunch, training etc.); reduced checkpoint staffing requirements (impacts of reduced staff on checkpoint operations); reduced airport staffing requirements (optimized scheduling of shared resources across airport); new staffing requirements based on process changes (i.e. checkpoint selectee screening); or annual labor planning based on seasonal demand (Workforce management on annual basis). Specifically, the addition/subtraction of requirements in a checkpoint may be modeled through the addition/elimination of substeps in the model 10), (Id., ¶ 51, By varying the values in the model 10, the model 10 further provides analytic support for various checkpoint process change policy concerns such as: Process changes or re-designs (i.e. new security directives which change process steps or time); new technology inserted into the existing or redesigned process (i.e. new type of x-ray); or emergency response planning (concourse dumps, checkpoint shutdowns, etc.). 
Specifically, these process changes refer to modification of processes already included in a model 10); and compute and output a predicted wait time for any passenger by progressing that passenger on a first-come first-served manner through a network of service queues affiliated with each of the multi-station and multi-stage screening zones (Id., ¶ 42, The black-box security checkpoint model 2 (discloses analytical model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time (discloses output of predicted wait time) in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 44, In a preferred embodiment of the present invention, the security checkpoint is modeled as described in co-owned U.S. patent application Ser. No. 10/293,469 entitled SECURITY CHECKPOINT SIMULATION, the disclosure of which is hereby incorporated by reference in full. U.S. patent application Ser. No. 10/293,469 provides a security checkpoint model 10, as depicted in FIG. 
4D, having two or more processes, such as entering the security checkpoint in step 11, screening items in step 12, and screening people in step 13. This security checkpoint model is more similar to an actual security checkpoint. Each of the steps 11, 12, and 13 may be separately simulated to produce output values as described above. Thus, each of the steps 11, 12, and 13 may be separately modeled black-boxes. For instance, a user may define rules for simulating output values for each of the steps 11, 12, and 13. To model changes in the checkpoint, the values or distribution for steps 11, 12, or 13 may be adjusted. By adjusting values for separate steps, the passenger checkpoint model 10 more accurately approximates changes in a passenger checkpoint). While suggested in at least Fig. 9 and related text, Robertson does not explicitly disclose …a memory; and one or more processors configured to execute instructions stored in the memory, the one or more processors configured to…; …by the one or more processors…; …by the one or more processors…; …by the one or more processors…; execute a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generate a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors; periodically retrain the machine learning model based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; wherein the machine learning model is configured to: accept the observed wait times and queue lengths as initial starting conditions and incrementally update queue lengths at each stage to the start of a next period by adding 
any arrivals during a previous period and subtracting throughput for the respective stage based on a number of customers served; … the machine learning model…; … the machine learning model… However, Sahay discloses … a memory; and one or more processors configured to execute instructions stored in the memory, the one or more processors configured to…; …by the one or more processors…; …by the one or more processors… (Sahay, ¶ 35, The wait time client 132 may be stored in any type of memory that may or may not be integrated with the computing device 135. In some embodiments, the wait time client 132 may be stored in a universal serial bus (USB) flash drive that is connected to a USB port of the computing device 135. The computing device 135 may be any type of device capable of executing application programs), (Id., ¶ 36, FIG. 1B is a more detailed illustration of the computing device 135 of FIG. 1A, according to various embodiments. As shown, computing device 135 includes, without limitation, a processing unit 190, input/output (I/O) devices 192, and a memory unit 194. Memory unit 194 includes the wait time client 132 and is configured to interact with a sensor database 196), (Id., ¶ 50, In the manual entry mode, both the wait start time (i.e., the time that the queue is entered) and the wait end time 244 (the time that the queue is exited) are entered via a user interface as part of the client-specific input 250 (discloses parameter input interface). Upon receiving a wait start time, the measured wait time calculator 240 stores the wait start time. Subsequently, upon receiving the corresponding wait end time 244, the measured wait time calculator 240 subtracts the stored wait start time from the wait end time 244 to determine the measured wait time 242. The measured wait time calculator 240 then transmits the measured wait data 245 to the wait time server 150. 
The measured wait data 245 includes, without limitation, the measured wait time 242, the wait end time 244, and the point-of-interest 246. The measured wait time calculator 240 assigns the point-of-interest 246 in any technically feasible fashion, such as entry via the user interface, search engine results, or global positioning system (GPS) data); wherein … is configured to: accept the observed wait times and queue lengths as initial starting conditions and incrementally update queue lengths at each stage to the start of a next period by adding any arrivals during a previous period and subtracting throughput for the respective stage based on a number of customers served (Id., ¶ 36, FIG. 1B is a more detailed illustration of the computing device 135 of FIG. 1A, according to various embodiments. As shown, computing device 135 includes, without limitation, a processing unit 190, input/output (I/O) devices 192, and a memory unit 194. Memory unit 194 includes the wait time client 132 and is configured to interact with a sensor database 196), (Id., ¶ 30, Based on the input the wait time client 132 receives, the wait time client 132 may interpret a location corresponding to the computing device 135 that is executing the wait time client 132 as the point-of-query 180, the point-of-interest 110, both, or neither. More specifically, as the wait time client 132 generates measured wait data, the wait time unit 132 relays the measured wait data and the current location—interpreted as the point-of-interest 110—to the wait time server 150 for incorporation into the crowdsourced wait data 160. To generate predicted wait times, the wait time server 150 processes the updated crowdsourced wait data 160 based on a wait time predictive model 165. 
The wait time predictive model 165 may implement any technically feasible algorithm designed to aggregate the crowdsourced wait data 160 into discerning predicted wait data), (Id., ¶ 47, In a complementary fashion to the predicted time calculator 270, the measured wait time calculator 240 determines a wait end time 244 and a wait start time for a queue located at the point-of-interest 110. The wait time calculator 240 then subtracts the wait start time from the wait end time 244 to determine a measured wait time 242. After determining the measured wait time 242, the wait time calculator transmits the measured wait data 245 to the wait time server 150. As shown, the measured wait data 245 includes, without limitation, the measured wait time 242, the point-of-interest 110 and the wait end time 244. Upon receiving the measured wait data 245, the wait time server 150 updates the crowdsourced wait data 160 (discloses incrementing model with updated queue length and wait time data) and/or the wait time predictive model 165 to reflect the measured wait time data 245); It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson to include the queue and wait time interface elements of Sahay in the analogous art of crowdsource-based wait time estimates for the same reasons as disclosed for claim 1. 
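For illustration of the claimed limitation mapped above, the per-period update (seed each stage with an observed queue length, then add the prior period's arrivals and subtract the stage's throughput) can be sketched as follows. This is a minimal sketch, not the applicant's or any reference's implementation; the stage counts, capacities, and arrival figures are hypothetical.

```python
# Minimal sketch of the claimed incremental queue update: each stage's queue
# starts from an observed length; for every period, add arrivals from the
# previous period and subtract the number of customers served (throughput).
# All names and figures are illustrative, not taken from the cited references.

def roll_forward(initial_queues, arrivals_per_period, capacity_per_period):
    """initial_queues: observed queue length at each stage.
    arrivals_per_period: external arrivals to the first stage in each period.
    capacity_per_period: max customers each stage can serve per period.
    Returns queue lengths at each stage at the start of each period."""
    queues = list(initial_queues)
    history = [list(queues)]
    for external in arrivals_per_period:
        inflow = external  # arrivals to the first stage during the prior period
        for stage, cap in enumerate(capacity_per_period):
            served = min(queues[stage] + inflow, cap)  # cannot serve more than present
            queues[stage] = queues[stage] + inflow - served
            inflow = served  # customers served feed the next stage
        history.append(list(queues))
    return history

# Two stages seeded with observed queues of 5 and 2; two periods of arrivals.
history = roll_forward([5, 2], arrivals_per_period=[10, 4], capacity_per_period=[8, 6])
```

Here the served count is capped by stage capacity, so backlogs propagate downstream period by period, which is the behavior the claim language describes.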
While suggested, the combination of Robertson and Sahay does not explicitly disclose … execute a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generate a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors; periodically retrain the machine learning model based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; …the machine learning model…; …the machine learning model…; …the machine learning model… However, through KSR Rationale D (See MPEP 2141(III)(D)), the combination of Robertson and Szeto discloses execute a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generate a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors. First, Robertson discloses an initial passenger arrival prediction, as well as shift optimization techniques for minimizing error between passenger arrival predictions and observed wait times and queue lengths (Sahay, ¶ 30, Based on the input the wait time client 132 receives, the wait time client 132 may interpret a location corresponding to the computing device 135 that is executing the wait time client 132 as the point-of-query 180, the point-of-interest 110, both, or neither. 
More specifically, as the wait time client 132 generates measured wait data, the wait time unit 132 relays the measured wait data and the current location—interpreted as the point-of-interest 110—to the wait time server 150 for incorporation into the crowdsourced wait data 160. To generate predicted wait times, the wait time server 150 processes the updated crowdsourced wait data 160 based on a wait time predictive model 165. The wait time predictive model 165 may implement any technically feasible algorithm designed to aggregate the crowdsourced wait data 160 into discerning predicted wait data), (Robertson, ¶ 34, FIG. 5 depicts an exemplary demand curve 500 representing the demand attributable to a single event at 6 PM, such as a flight or a public event. In curve 500, increasing numbers of people arrive at the checkpoint before 6 PM, but the number of the people drops off rapidly thereafter (discloses passenger arrival predictions)), (Id., ¶ 36, The number of passengers arriving at the security checkpoint may be divided into fixed time periods (discloses number of passengers in queue including those arriving), such as 30-minute intervals. The average demand during each of the periods may then be displayed, as illustrated in total demand curve 600' in FIG. 6B, as the horizontal line in each of the boxes. The overall number of passengers during the time period will be the area of the box, or the average demand multiplied by the time period), (Id., ¶ 49, As described in U.S. application Ser. No. 10/293,469, the models 2 and 10 may also be used to calculate the effect of policy changes such as estimating the impact of adding another security test or incorporating different security equipment. Specifically, the model supports data modeling and simulation by providing quantitative modeling support and analysis to develop fact-based recommendations for policy decisions. 
For example, the model 10 may be used to simulate checkpoint staffing requirements such as a required number of wanders, bag searchers, etc. for various checkpoint configurations. The model 10 may also be used to simulate checkpoint equipment requirements, such as a required number of X-ray machines for various station configurations. The model 10 may further be used to recommend checkpoint staffing for peak volume and non-peak operations. (discloses staffing predictions to minimize error based on passenger arrival and wait times) Similarly, the model 10 may be used to assess (1) continuous (random) policy compliance levels for security devices; (2) the impact of alternative, gender based scanning policies; (3) the impact of eliminating or adding various screening steps in the security checkpoint; (4) the impact of check-in counter wait time on security checkpoint demand; or (5) the impact of reduced station staffing on checkpoint operations). Further, Szeto discloses the use of a machine learning model to generate a hybrid predictive engine to generate and refine predictions (Szeto, ¶ 53, “Algorithm” refers to an algorithmic component of a predictive engine for generating predictions and decisions. The Algorithm component includes machine learning algorithms, as well as settings of algorithm parameters that determine how a predictive model is constructed. (discloses machine learning model for predictions) A predictive engine may include one or more algorithms, to be used independently or in combination. 
Parameters of a predictive engine specify which algorithms are used, the algorithm parameters used in each algorithm, and how the results of each algorithm are congregated or combined to arrive at a prediction engine result, also known as an output or prediction), (Id., ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors), (Id., ¶ 230, Further disclosed are methods and systems for monitoring and replaying queries, predicted results, subsequent end-user actions/behaviors, or actual results, and internal tracking information for determining factors that affect the performance of the machine learning system. For example, iterative replay of dynamic queries, corresponding predicted results, and subsequent actual user actions may provide to operators insights into the tuning of data sources, algorithms, algorithm parameters, as well as other system parameters that may affect the performance of the machine learning system. Prediction performances may be evaluated in terms of prediction scores and visualized through plots and diagrams. By segmenting available replay data, prediction performances of different engines or engine variants may be compared and studied conditionally for further engine parameter optimization), (Id., ¶ 259, Prediction result 445 and evaluation result 455 can be passed to other components within a PredictionIO or machine learning server. As discussed previously, a PredictionIO or machine learning server is a predictive engine deployment platform that enables developers to customize engine components, evaluate predictive models, and tune predictive engine parameters to improve performance of prediction results. 
A PredictionIO or machine learning server may also maintain adjustment history (discloses adjustment factors) in addition to prediction and evaluation results for developers to further customize and improve each component of an engine for specific business needs). One of ordinary skill in the art would have recognized that applying the known machine learning techniques of Szeto would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the machine learning techniques of Szeto to the passenger arrival prediction elements of Robertson would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such passenger arrival data processing features into similar prediction systems. Further, applying iterative machine learning algorithms to Robertson, with parameter data and adjustment factors considered accordingly, would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow more optimal staffing based on passenger arrival predictions. Thus, through KSR Rationale D (See MPEP 2141(III)(D)), the combination of Robertson and Szeto discloses execute a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generate a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors. 
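For illustration of the "adjusting factors" limitation addressed above, one way such a factor could be estimated is by least squares: choose the factor that minimizes squared error between the initial prediction and observed values, then scale the prediction by it. This is a hypothetical sketch only; the single scale factor, the function names, and the numbers are illustrative assumptions, not the claimed algorithm or any reference's disclosure.

```python
# Sketch of a hybrid arrival prediction: estimate an adjusting factor that
# minimizes squared error between the initial (mechanistic) prediction and
# observed values, then scale the prediction by that factor. A single
# least-squares scale factor is used purely for illustration.

def adjusting_factor(predicted, observed):
    """Least-squares alpha minimizing sum((observed[i] - alpha*predicted[i])**2)."""
    num = sum(p * o for p, o in zip(predicted, observed))
    den = sum(p * p for p in predicted)
    return num / den

def hybrid_prediction(predicted, observed):
    alpha = adjusting_factor(predicted, observed)
    return [alpha * p for p in predicted]

initial = [100.0, 200.0, 150.0]   # initial passenger arrival prediction (hypothetical)
actual = [110.0, 220.0, 165.0]    # observed values (proxy via wait times / queue lengths)
hybrid = hybrid_prediction(initial, actual)
```

With these illustrative numbers the observations are uniformly 10% above the prediction, so the estimated factor is 1.1 and the hybrid prediction matches the observations; in practice the factor would be re-estimated as new observations accumulate.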
Szeto further discloses …periodically retrain the machine learning model based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; …the machine learning model…; …the machine learning model…; …the machine learning model…; …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors), (Id., ¶ 209, Such trained models are updated regularly as new data comes in from actual user experiences and actual transaction data and it is desirable to train a new model periodically based on such data. (discloses adjusting retraining intervals based on performance metrics) For instance, perhaps 30 days worth of new data may be utilized to train or re-train a given prediction model. A week later, still more data is available and so the developers may seek to again train the model using the new week's worth of additional data, or train the model with a month and a week's worth of data, or simply train the model using only the last 1-week period worth of data. Alternatively, the developers may simply seek to train the model utilizing a different range of data), (Id., ¶ 210, Regardless of the reasons or period selected, a new model is created with different data from before and will therefore have different machine learning and therefore different predictive results. 
Certain model updates include simply updating the model as new data becomes available in real time whereas other update schemes involve batch updates, such as daily, or every 12 hours, and so forth). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 14, Robertson discloses … a Visual Analytics and Decision Support System platform (VADSS platform)… the instructions cause the VADSS platform to perform operations including: receiving… observed wait times and queue lengths at multi-station and multi-stage screening zones (Robertson, ¶ 98, Referring now to FIG. 9, another embodiment of the present invention provides an effective security scheduling system 900. As depicted in FIG. 9, the effective security scheduling system 900 generally includes separate modules that are interconnected to implement the steps in the effective security scheduling method 300. Specifically, the effective security scheduling system 900 includes a demand forecasting module 910. The demand forecasting module 910 accepts input data related to the facility. For instance, security demand at an airport may be forecasted using flight schedules, flight capacity data, and predetermined demand distribution curves, as described above), (Id., ¶ 11, the present invention has specific application to staffing security checkpoints. 
In this embodiment, the number of needed open stations in security checkpoints is determined (discloses multi-station screening zones) by translating the variable demand for security at different times and using linear programming to optimize and determine a schedule as needed to staff the needed number of open stations), (Id., ¶ 25, Each of the stations is separately staffed with a number of employees as needed. For instance, a security station may use five employees, each manning a component of the security station (discloses multi-stage screening zones) (a walk-through metal detector, an x-ray machine, a hand-held metal detector, a station to manually search personal belongings, and an area to perform other security tests). Obviously, any number of people may be staffed to a station. A station may also be partially staffed, operating a lower level of throughput as the security workers are required to perform more than one function. Furthermore, additional workers may be staffed to a security checkpoint to improve the throughput of that station. In this way, the capacity of security checkpoints generally corresponds to the number of security workers staffed at the security stations), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies); receiving… user specified configuration selections for processing the wait times and queue lengths (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 
4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 100, Continuing with FIG. 9, a schedule-defining module 930 uses user-defined inputs (discloses interface for configuration selections) and the outputs from the demand forecasting module 910 and the checkpoint simulation module 920 to create a security work schedule. As described above in FIG. 8B and the associated text, the user-defined inputs generally include data related to the number of security workers and the condition of work for these workers. This type of information includes the shift length, possible starting and end times, shift frequency, breaks, etc. associated with each of the workers. Furthermore, the user-defined inputs may include constraints limiting potential staffing configurations, such as limiting the staffing of certain positions to workers with sufficient employees); executing a mechanistic model to generate an initial passenger arrival prediction based on a business fundamentals data set defining one or more of flight departure schedules, airplane capacities, and expected number of passengers (Id., ¶ 42, The black-box security checkpoint model 2 (discloses model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. 
Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 34, FIG. 5 depicts an exemplary demand curve 500 representing the demand attributable to a single event at 6 PM, such as a flight or a public event. In curve 500, increasing numbers of people arrive at the checkpoint before 6 PM, but the number of the people drops off rapidly thereafter (discloses passenger arrival predictions)), (Id., ¶ 37, It should be appreciated that the above-described method for estimating demand at the security checkpoint, while presented in the context of an airport or seaport, may be used in a variety of circumstances. For instance, the above-described method may be used to determine security screening demand at a large volume event, such as a concert or sports contest. The total number of people may then be estimated as the number of ticket-holders minus forecasted non-attendance. The instantaneous demand at the security checkpoint may then be determined using a demand curve for the event), (Id., ¶ 107, The changes in the needed number of workers over an extended period may be predicted through forecasting the needed number of security stations in step 400 and defining an effective schedule in step 800, both over the extended period of interest. 
For instance, needed number of security stations at an airport may be forecasted over an extended period to form the extended needed worker graph 1000 by examining the number of flights departing from the airport, (discloses flight departure schedules) the load factors for these flights, (discloses airplane capacities) etc. as described above in FIG. 4B and the associated text), (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. 
Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport); applying … using a hybrid arrival prediction, an algorithm to yield future predicted wait times and queue lengths at the multi-station and multi-stage screening zones based at least in part on the observed wait times and queue lengths and the user specified configuration selections (Id., ¶ 42, The black-box security checkpoint model 2 (discloses analytical model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) 
based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 60, Returning to FIG. 8A, an effective schedule is formed in step 820 using the worker data from steps 811, 812, and 813. In the field of employee staffing and scheduling, several techniques are known to create an optimized schedule using the worker data, such as the information described above in steps 811, 812, and 813. For instance, an optimized schedule for a security checkpoint may be formed using linear programming, quadratic or mixed-integer programming, nonlinear optimization, global optimization, non-smooth optimization using genetic and evolutionary algorithms, and constraint programming methods from artificial intelligence); sequentially processing, by … each of the stages of the multi-station and multi-stage screening zones to compute a number served at each stage during a time interval as the minimum of a service capacity based on (i) a number of service stations open and based further on (ii) a service rate per station provided by the user specified configuration selections, (iii) a number of initial customers in queue plus those arriving, and (iv) a service rate of a subsequent workstation when the subsequent station buffer space is full (Id., ¶ 40, The security checkpoint may be modeled in step 420 using a certain number of open stations (discloses number of service stations open). The security checkpoint is then modeled again using a different number of open stations. The results from the two models may be compared to choose a desirable number of open stations. Typically, reducing the number of stations is detrimental to service measures, such as waiting time, but reduces employment costs. In this way, the model may then be used to provide a fact-based forecast of the varying number of stations. It should be appreciated that the modeling of the security checkpoint does not schedule workers. 
Instead, the model provides an optimal number of open stations per time period as needed to meet various service measures (and thus, the optimal number of security workers for each of the time periods). The actual staffing of the security workers is described below), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 42, The black-box security checkpoint model 2 functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. (discloses specified service rates per station) Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. 
The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 36, The number of passengers arriving at the security checkpoint may be divided into fixed time periods (discloses number of passengers in queue including those arriving), such as 30-minute intervals. The average demand during each of the periods may then be displayed, as illustrated in total demand curve 600' in FIG. 6B, as the horizontal line in each of the boxes. The overall number of passengers during the time period will be the area of the box, or the average demand multiplied by the time period), (Id., ¶ 49, As described in U.S. application Ser. No. 10/293,469, the models 2 and 10 may also be used to calculate the effect of policy changes such as estimating the impact of adding another security test or incorporating different security equipment. Specifically, the model supports data modeling and simulation by provided quantitative modeling support and analysis to develop fact-based recommendations for policy decisions. For example, the model 10 may be used to simulate checkpoint staffing requirements such as a required number of wanders, bag searchers, etc. for various checkpoint configurations. The model 10 may also be used to simulate checkpoint equipment requirements, such a required number of X-Rays machines for various station configurations. The model 10 may further be used to recommend checkpoint staffing for peak volume and non-peak operations. 
Similarly, the model 10 may be used to assess (1) continuous (random) policy compliance levels for security devices; (2) the impact of alternative, gender based scanning policies; (3) the impact of eliminating or adding various screening steps in the security checkpoint; (4) the impact of check-in counter wait time on security checkpoint demand; or (5) the impact of reduced station staffing on checkpoint operations), (Id., ¶ 50, The data modeling provides analytical support for security checkpoint operations focusing on resources requirements (equipment & staffing), process performance, customer experience and cost. For instance, the model 10 may be modified to provide analytical support for various resource requirement policy concerns such as: Employee work rules (impact of number of breaks, lunch, training etc.); reduced checkpoint staffing requirements (impacts of reduced staff on checkpoint operations); reduced airport staffing requirements (optimized scheduling of shared resources across airport); new staffing requirements based on process changes (i.e. checkpoint selectee screening); or annual labor planning based on seasonal demand (Workforce management on annual basis). Specifically, the addition/subtraction of requirements in a checkpoint may be modeled through the addition/elimination of substeps in the model 10), (Id., ¶ 51, varying the values in the model 10, the model 10 further provides analytic support for various checkpoint process change policies concerns such as: Process changes or re-designs (i.e. new security directives which change process steps or time); new technology inserted into the existing or redesigned process (i.e. new type of x-ray); or emergency response planning (concourse dumps, checkpoint shutdowns, etc.). 
Specifically, these process changes refer to modification of processes already included in a model 10); and computing and outputting, by … a predicted wait time for any passenger by progressing that passenger on a first-come first-served manner through a network of service queues affiliated with each of the multi-station and multi-stage screening zones (Id., ¶ 42, The black-box security checkpoint model 2 (discloses analytical model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time (discloses output of predicted wait time) in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 44, In a preferred embodiment of the present invention, the security checkpoint is modeled as described in co-owned U.S. patent application Ser. No. 10/293,469 entitled SECURITY CHECKPOINT SIMULATION, the disclosure of which is hereby incorporated by reference in full. U.S. patent application Ser. No. 10/293,469 provides a security checkpoint model 10, as depicted in FIG. 
4D, having two or more processes, such as entering the security checkpoint in step 11, screening items in step 12, and screening people in step 13. This security checkpoint model is more similar to an actual security checkpoint. Each of the steps 11, 12, and 13 may be separately simulated to produce output values as described above. Thus, each of the steps 11, 12, and 13 may be separately modeled black-boxes. For instance, a user may define rules for simulating output values for each of the steps 11, 12, and 13. To model changes in the checkpoint, the values or distribution for steps 11, 12, or 13 may be adjusted. By adjusting values for separate steps, the passenger checkpoint model 10 more accurately approximates changes in a passenger checkpoint). While suggested in at least Fig. 9 and related text, Robertson does not explicitly disclose …Non-transitory computer readable storage media having instructions stored thereupon that, when executed by… having at least a processor and a memory therein… by a processor configured to execute instructions stored in a memory…; executing a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors; periodically retraining the machine learning model based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; accepting, by the machine learning model, the observed wait times and queue lengths as initial starting conditions and incrementally updating queue lengths at each stage to the start of a next period by adding any arrivals 
during a previous period and subtracting throughput for the respective stage based on a number of customers served. However, Sahay discloses …Non-transitory computer readable storage media having instructions stored thereupon that, when executed by… having at least a processor and a memory therein… by a processor configured to execute instructions stored in a memory… (Sahay, ¶ 11, Further embodiments provide, among other things, a system and a non-transitory computer-readable medium configured to implement the method set forth above), (Id., ¶ 35, The wait time client 132 may be stored in any type of memory that may or may not be integrated with the computing device 135. In some embodiments, the wait time client 132 may be stored in a universal serial bus (USB) flash drive that is connected to a USB port of the computing device 135. The computing device 135 may be any type of device capable of executing application programs), (Id., ¶ 36, FIG. 1B is a more detailed illustration of the computing device 135 of FIG. 1A, according to various embodiments. As shown, computing device 135 includes, without limitation, a processing unit 190, input/output (I/O) devices 192, and a memory unit 194. Memory unit 194 includes the wait time client 132 and is configured to interact with a sensor database 196), (Id., ¶ 50, In the manual entry mode, both the wait start time (i.e., the time that the queue is entered) and the wait end time 244 (the time that the queue is exited) are entered via a user interface as part of the client-specific input 250 (discloses parameter input interface). Upon receiving a wait start time, the measured wait time calculator 240 stores the wait start time. Subsequently, upon receiving the corresponding wait end time 244, the measured wait time calculator 240 subtracts the stored wait start time from the wait end time 244 to determine the measured wait time 242. 
The measured wait time calculator 240 then transmits the measured wait data 245 to the wait time server 150. The measured wait data 245 includes, without limitation the measured wait time 242, the wait end time 244, and the point-of-interest 246. The measured wait time calculator 240 assigns the point-of-interest 246 in any technically feasible fashion, such as entry via the user interface, search engine results, or global positioning system (GPS) data); accepting, by … the observed wait times and queue lengths as initial starting conditions and incrementally updating queue lengths at each stage to the start of a next period by adding any arrivals during a previous period and subtracting throughput for the respective stage based on a number of customers served (Id., ¶ 36, FIG. 1B is a more detailed illustration of the computing device 135 of FIG. 1A, according to various embodiments. As shown, computing device 135 includes, without limitation, a processing unit 190, input/output (I/O) devices 192, and a memory unit 194. Memory unit 194 includes the wait time client 132 and is configured to interact with a sensor database 196), (Id., ¶ 30, Based on the input the wait time client 132 receives, the wait time client 132 may interpret a location corresponding to the computing device 135 that is executing the wait time client 132 as the point-of-query 180, the point-of-interest 110, both, or neither. More specifically, as the wait time client 132 generates measured wait data, the wait time unit 132 relays the measured wait data and the current location—interpreted as the point-of-interest 110—to the wait time server 150 for incorporation into the crowdsourced wait data 160. To generate predicted wait times, the wait time server 150 processes the updated crowdsourced wait data 160 based on a wait time predictive model 165. 
The wait time predictive model 165 may implement any technically feasible algorithm designed to aggregate the crowdsourced wait data 160 into discerning predicted wait data), (Id., ¶ 47, In a complementary fashion to the predicted time calculator 270, the measured wait time calculator 240 determines a wait end time 244 and a wait start time for a queue located at the point-of-interest 110. The wait time calculator 240 then subtracts the wait start time from the wait end time 244 to determine a measured wait time 242. After determining the measured wait time 242, the wait time calculator transmits the measured wait data 245 to the wait time server 150. As shown, the measured wait data 245 includes, without limitation, the measured wait time 242, the point-of-interest 110 and the wait end time 244. Upon receiving the measured wait data 245, the wait time server 150 updates the crowdsourced wait data 160 (discloses incrementing model with updated queue length and wait time data) and/or the wait time predictive model 165 to reflect the measured wait time data 245); It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson to include the queue and wait time interface elements of Sahay in the analogous art of crowdsourced-based wait time estimates for the same reasons as stated for claim 1. 
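The claimed deterministic update recited above — seeding each stage with observed queue lengths, then, per time interval, computing the number served as the minimum of (i) open-station capacity, (ii) customers present (initial queue plus arrivals), and (iii) remaining buffer space at the subsequent stage — can be sketched as follows. This is a hypothetical illustration of the claim language only, not code from Robertson or Sahay; all function and parameter names are invented.

```python
def advance_period(queues, arrivals, stations_open, rate_per_station, buffers):
    """One deterministic period update over a chain of screening stages.

    Number served at stage i is the minimum of:
      (i)   capacity = open stations * per-station service rate,
      (ii)  customers present = initial queue + inflow this period, and
      (iii) remaining buffer space at the next stage (downstream blocking).
    Remaining customers carry over as the next period's starting queue.
    """
    served_per_stage = []
    inflow = arrivals  # external arrivals feed the first stage
    for i in range(len(queues)):
        capacity = stations_open[i] * rate_per_station[i]
        available = queues[i] + inflow
        if i + 1 < len(queues):  # service blocked when downstream buffer is full
            space = max(buffers[i + 1] - queues[i + 1], 0)
            served = min(capacity, available, space)
        else:
            served = min(capacity, available)
        queues[i] = available - served  # carry remainder into the next period
        served_per_stage.append(served)
        inflow = served  # this stage's throughput arrives at the next stage
    return queues, served_per_stage
```

For example, with two stages starting at queues [5, 2], 10 arrivals, 3 and 2 open stations serving 4 and 3 customers per station per period, and a downstream buffer of 8, stage 1 is limited by the downstream space (6) rather than its capacity (12).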
While suggested, the combination of Robertson and Sahay does not explicitly disclose … executing a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors; periodically retraining the machine learning model based on a threshold quantity of accumulated real- time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; …the machine learning model…; …the machine learning model…; …the machine learning model… However, through KSR Rationale D (See MPEP 2141(III)(D)), the combination of Robertson and Szeto discloses executing a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors. First, Robertson discloses an initial passenger arrival prediction, as well as shift optimization techniques for minimizing error between passenger arrival predictions and observed wait times and queue lengths (Robertson, ¶ 30, Based on the input the wait time client 132 receives, the wait time client 132 may interpret a location corresponding to the computing device 135 that is executing the wait time client 132 as the point-of-query 180, the point-of-interest 110, both, or neither. 
More specifically, as the wait time client 132 generates measured wait data, the wait time unit 132 relays the measured wait data and the current location—interpreted as the point-of-interest 110—to the wait time server 150 for incorporation into the crowdsourced wait data 160. To generate predicted wait times, the wait time server 150 processes the updated crowdsourced wait data 160 based on a wait time predictive model 165. The wait time predictive model 165 may implement any technically feasible algorithm designed to aggregate the crowdsourced wait data 160 into discerning predicted wait data), (Id., ¶ 34, FIG. 5 depicts an exemplary demand curve 500 representing the demand attributable to a single event at 6 PM, such as a flight or a public event. In curve 500, increasing numbers of people arrive at the checkpoint before 6 PM, but the number of the people drops off rapidly thereafter (discloses passenger arrival predictions)), (Id., ¶ 36, The number of passengers arriving at the security checkpoint may be divided into fixed time periods (discloses number of passengers in queue including those arriving), such as 30-minute intervals. The average demand during each of the periods may then be displayed, as illustrated in total demand curve 600' in FIG. 6B, as the horizontal line in each of the boxes. The overall number of passengers during the time period will be the area of the box, or the average demand multiplied by the time period), (Id., ¶ 49, As described in U.S. application Ser. No. 10/293,469, the models 2 and 10 may also be used to calculate the effect of policy changes such as estimating the impact of adding another security test or incorporating different security equipment. Specifically, the model supports data modeling and simulation by provided quantitative modeling support and analysis to develop fact-based recommendations for policy decisions. 
For example, the model 10 may be used to simulate checkpoint staffing requirements such as a required number of wanders, bag searchers, etc. for various checkpoint configurations. The model 10 may also be used to simulate checkpoint equipment requirements, such a required number of X-Rays machines for various station configurations. The model 10 may further be used to recommend checkpoint staffing for peak volume and non-peak operations. (discloses staffing predictions to minimize error based on passenger arrival and wait times) Similarly, the model 10 may be used to assess (1) continuous (random) policy compliance levels for security devices; (2) the impact of alternative, gender based scanning policies; (3) the impact of eliminating or adding various screening steps in the security checkpoint; (4) the impact of check-in counter wait time on security checkpoint demand; or (5) the impact of reduced station staffing on checkpoint operations). Further, Szeto discloses the use of a machine learning model to generate a hybrid predictive engine to generate and refine predictions (Szeto, ¶ 53, “Algorithm” refers to an algorithmic component of a predictive engine for generating predictions and decisions. The Algorithm component includes machine learning algorithms, as well as settings of algorithm parameters that determine how a predictive model is constructed. (discloses machine learning model for predictions) A predictive engine may include one or more algorithms, to be used independently or in combination. 
Parameters of a predictive engine specify which algorithms are used, the algorithm parameters used in each algorithm, and how the results of each algorithm are congregated or combined to arrive at a prediction engine result, also known as an output or prediction), (Id., ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors), (Id., ¶ 230, Further disclosed are methods and systems for monitoring and replaying queries, predicted results, subsequence end-user actions/behaviors, or actual results, and internal tracking information for determining factors that affect the performance of the machine learning system. For example, iterative replay of dynamic queries, corresponding predicted results, and subsequent actual user actions may provide to operators insights into the tuning of data sources, algorithms, algorithm parameters, as well as other system parameters that may affect the performance of the machine learning system. Prediction performances may be evaluated in terms of prediction scores and visualized through plots and diagrams. By segmenting available replay data, prediction performances of different engines or engine variants may be compared and studied conditionally for further engine parameter optimization), (Id., ¶ 259, Prediction result 445 and evaluation result 455 can be passed to other components within a PredictionIO or machine learning server. As discussed previously, a PredictionIO or machine learning server is a predictive engine deployment platform that enables developers to customize engine components, evaluate predictive models, and tune predictive engine parameters to improve performance of prediction results. 
A PredictionIO or machine learning server may also maintain adjustment history (discloses adjustment factors) in addition to prediction and evaluation results for developers to further customize and improve each component of an engine for specific business needs). One of ordinary skill in the art would have recognized that applying the known machine learning techniques of Szeto would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the machine learning techniques of Szeto to the passenger arrival prediction elements of Robertson would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such passenger arrival data processing features into similar prediction systems. Further, applying iterative machine learning algorithms to Robertson with parameter data and adjustment factors considered accordingly, would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow more optimal staffing based on passenger arrival predictions. Thus, through KSR Rationale D (See MPEP 2141(III)(D)), the combination of Robertson and Szeto discloses executing a machine learning model configured to estimate adjusting factors for the initial passenger arrival prediction by minimizing error between the initial passenger arrival prediction and the observed wait times and queue lengths; generating a hybrid arrival prediction by adjusting the initial passenger arrival prediction using the adjusting factors. 
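The "adjusting factors" limitation at issue can be illustrated with a minimal least-squares sketch: a scalar factor is chosen to minimize the squared error between the initial arrival prediction and the observed data, and the hybrid prediction is the initial prediction scaled by that factor. This is an invented illustration of the claim language under stated assumptions, not an implementation from Szeto; the function names and the single-factor simplification are hypothetical.

```python
def estimate_adjusting_factor(predicted, observed):
    """Closed-form least squares: the scalar a minimizing
    sum((a * p - o)**2) over all intervals is
    a = sum(p * o) / sum(p * p)."""
    num = sum(p * o for p, o in zip(predicted, observed))
    den = sum(p * p for p in predicted)
    return num / den

def hybrid_arrival_prediction(predicted, observed):
    """Hybrid prediction = initial prediction scaled by the fitted factor."""
    a = estimate_adjusting_factor(predicted, observed)
    return [a * p for p in predicted]
```

For instance, if the initial prediction for three intervals is [100, 150, 200] passengers and observations run uniformly 10% higher, the fitted factor is 1.1 and the hybrid prediction tracks the observations. A fuller implementation along the lines the claim recites could fit one factor per interval or a regression model, retrained as new observations accumulate.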
Szeto further discloses …periodically retraining the machine learning model based on a threshold quantity of accumulated real-time observed wait times and queue lengths or at configurable time intervals, adjusting retraining intervals based on operational performance metrics, to maintain or improve predictive accuracy relative to operational targets; …the machine learning model…; …the machine learning model…; …the machine learning model…; …the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors), (Id., ¶ 209, Such trained models are updated regularly as new data comes in from actual user experiences and actual transaction data and it is desirable to train a new model periodically based on such data. (discloses adjusting retraining intervals based on performance metrics) For instance, perhaps 30 days worth of new data may be utilized to train or re-train a given prediction model. A week later, still more data is available and so the developers may seek to again train the model using the new week's worth of additional data, or train the model with a month and a week's worth of data, or simply train the model using only the last 1-week period worth of data. Alternatively, the developers may simply seek to train the model utilizing a different range of data), (Id., ¶ 210, Regardless of the reasons or period selected, a new model is created with different data from before and will therefore have different machine learning and therefore different predictive results. 
Certain model updates include simply updating the model as new data becomes available in real time whereas other update schemes involve batch updates, such as daily, or every 12 hours, and so forth). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. Regarding Claim 19, the combination of Robertson, Sahay and Szeto discloses …The system of claim 15… Robertson further discloses … wherein to allocate the TSOs to one or more of the SSCPs based on the predicted passenger volumes the one or more processors are further configured to: combine the predicted passenger volumes with data specifying a number of available TSOs per time interval (Robertson, ¶ 37, It should be appreciated that the above-described method for estimating demand at the security checkpoint, while presented in the context of an airport or seaport, may be used in a variety of circumstances. For instance, the above-described method may be used to determine security screening demand at a large volume event, such as a concert or sports contest. The total number of people may then be estimated as the number of ticket-holders minus forecasted non-attendance. The instantaneous demand at the security checkpoint may then be determined at using a demand curve for the event), (Id., ¶ 93, the coefficient matrix A and the demand matrix b may be programmed in to a spreadsheet or mathematical calculation program that can automatically solve the linear program. The tour assignment matrix x is optimally determined. 
Specifically, the matrix x may be determined using shift optimization to minimize the total number of hours worked in a week and to create the optimal number of shifts required to operate stations (using a defined mix of full and part-time employees). It should be appreciated that the tour assignment x may not be unique in that several possible tour assignments may provide desirable results. In this way, a particular tour assignment x may not be the best but, rather, provides a feasible staffing schedule that meets the forecasted demand levels. Furthermore, there may be no possible solution to the tour assignment x, indicating the need to make changes to the checkpoint or the workforce (e.g., hiring additional workers)); and allocate TSO teams to open Travel Document Check (TDC) and Baggage Screening lanes in Pre-Check and Standard lines of the multi-station and multi-stage security screening area to minimize passenger queue lengths and wait times (Id., ¶ 47, During the check-in process in step 14, the passenger may also check-in baggage, and a certain percentage of the baggage may then be screened. (discloses baggage screen) For instance, baggage may be screened using an Explosive Detection System (EDS). The EDS tests baggage for explosives by scanning the internal contents of baggage placed in the EDS. The percentage of the bags searched during check-in step 14 is predetermined and may be defined as specified above. If there is no desire to simulate the EDS or other methods of screening checked-in baggage, the percentage of passengers affected by these processes may be set to zero. Similarly, if the airport safety rules change to require screening of all baggage, the percentage may be increased to unity, or 100%), (Id., ¶ 48, The sub-steps in the baggage screening during check-in step 14 may also be separately modeled. For example, the baggage is typically loaded into the baggage screening device, and the baggage screening device checks the baggage. 
The next sub-step depends on whether the baggage screening device sounds an alarm. If the baggage screening device or personnel manning the device sounds an alarm, the alarm is resolved before the baggage is cleared for transport, such as a search by hand), (Id., ¶ 89, Incorporate "Fast-Pass" queuing), (Id., ¶ 75, the rows of the matrix indicate 8 hours of work by a person assigned to the tour. Each column represents a tour. A "1" placed in the appropriate row indicates that a worker is assigned to a particular associated 8 hour shift. If mandatory breaks or lunches must be included in the model then the definition of the row can be changed from 8 hour to 1 hour, 30 minutes or 15 minutes intervals. Again, as in the simple example, a value of "1" in a particular location represents that a worker associated with that location in the matrix is working during the interval associated with that location. In contrast, a "0" represents that the worker is not working during that period. To capture periods when workers are not working, such as lunches (30 minutes) and breaks (15 minutes), rows must be defined as 15 minutes. Correspondingly, additional rows are added to represent the increased number of periods. In the simple example, there are 21 rows (3 shifts per day times 7 days). By redefining the minimum time interval in the model to 15 minutes the new matrix would require 672 rows instead of 21 (3 shifts × 8 hours × 4 time intervals per hour × 7 days). In the new matrix that includes lunches and breaks in addition to days off, values of "1" represent when workers are working and values of "0" now represent periods when workers are not working, either because the worker is on lunch, break or not scheduled. Typically, all combinations of tour possibilities are built into the matrix (more columns would be added to the simple example). The optimization calculation would then choose the set of tours that minimize the objective function). 
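The tour-assignment formulation quoted above — a 0/1 coverage matrix A (rows are time periods, columns are tours), a demand vector b, and an objective minimizing total hours worked — can be illustrated at toy scale with an exhaustive search standing in for the linear program. This is an invented sketch under stated assumptions: the matrix, demand values, and function name are hypothetical, and a real deployment would use an LP/MIP solver as the reference describes rather than brute force.

```python
from itertools import product

def best_tour_assignment(A, b, hours):
    """Exhaustive search for the cheapest tour-count vector x
    (0..2 workers per tour) satisfying coverage A @ x >= b elementwise,
    where A[t][j] = 1 if tour j works during period t.
    Minimizes total hours = sum(hours[j] * x[j]). Toy-scale only."""
    n = len(A[0])
    best, best_cost = None, None
    for x in product(range(3), repeat=n):
        covered = all(
            sum(A[t][j] * x[j] for j in range(n)) >= b[t]
            for t in range(len(A))
        )
        if covered:
            cost = sum(hours[j] * x[j] for j in range(n))
            if best_cost is None or cost < best_cost:
                best, best_cost = x, cost
    return best, best_cost
```

With three periods, two 8-hour tours covering periods {1, 2} and {2, 3}, and demand [1, 2, 1] workers, one worker on each tour meets demand at a minimum of 16 total hours, mirroring how the quoted matrix formulation selects the set of tours that minimizes the objective function.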
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Robertson in view of Sahay and Szeto and in further view of Pendergraft et al. U.S. Publication No. 2004/0098237 [hereinafter Pendergraft]. Regarding Claim 3, the combination of Robertson, Sahay and Szeto discloses …The method of claim 1… While suggested in at least Fig. 2B of Robertson, the combination of Robertson and Sahay does not explicitly disclose … further comprising: applying, with the machine learning model, a first-come, first-service queue discipline, or a variation thereof. However, Szeto discloses …with the machine learning model… (Szeto, ¶ 207, According to described embodiments, various models are built to analyze data, process data and produce or generate what are referred to as predictive models, predictive engines, prediction engines, or trained machine learning recommendation models which are then utilized to output predictions about possible future outcomes and behaviors). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay to include the machine learning elements of Szeto in the analogous art of implementing machine learning model training and deployment with a rollback mechanism for the same reasons as stated for claim 1. While suggested in at least Fig. 2B of Robertson, the combination of Robertson, Sahay and Szeto does not explicitly disclose … further comprising: applying, … a first-come, first-service queue discipline, or a variation thereof. However, Pendergraft discloses … further comprising: applying, … a first-come, first-service queue discipline. (Pendergraft, ¶ 40, if two people simultaneously demand a security task, then the task is performed on one person while the second waits for the task to finish with the first person. 
Generally, the checkpoint model considers the number of resources required to perform tasks and the impact those requirements have on the waiting time of people in the queue when calculating the total delay time for the security checkpoint). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson and the queue and wait time interface elements of Sahay and the re-training elements of Szeto to include the first-come, first-serve elements of Pendergraft in the analogous art of security checkpoint simulations. The motivation for doing so would have been to provide an improved “model that evaluates the time costs of security measures related to the screening of people and their belongings. The model is robust to allow changes in security configurations, schemes, devices, and personnel. In a preferred implementation, the model of the present invention further links interrelated steps in the screening of people and their personal effects.” (Pendergraft, ¶ 4), wherein such improvements would benefit Szeto’s system which seeks “to help developers understand particular behaviors of engine variants of interest, and to tailor and improve prediction engine design” (Szeto, ¶ 234), and wherein such improvements would further benefit Sahay’s system which seeks to provide “more effective techniques for estimating wait times…”, wherein a “user may select a planning option that optimizes the predicted wait time or a planning option that optimizes an overall errand time (i.e., predicted travel time and predicted wait time)” (Sahay, ¶¶ 9, 12), wherein such improvements would further still benefit Robertson’s system which seeks to “achieve numerous desired results, including lower total personnel costs; reduced numbers of full-time employees (FTE); greater diversity in the workforce (through the use of part-time or seasonal employees); 
improved cost effectiveness while at least maintaining the customer service level; the creation of consistency in staffing and scheduling; the development of rule-driven, repeatable schedules; maximizing employee morale; reducing costs associated with scheduling; reducing the costs of creating and maintaining schedule” [Pendergraft, ¶ 4; Szeto, ¶ 234; Sahay, ¶¶ 9, 12; Robertson, ¶ 119]. Claims 15-18 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Robertson in view of Benjamin et al. U.S. Publication No. 2020/0334615 [hereinafter Benjamin]. Regarding Claim 15, Robertson discloses …A system for predicting passenger arrivals and allocating Transportation Security Officers (TSOs) within a multi-station and multi-stage security screening area having a plurality of Security Screening Checkpoints (SSCPs), wherein the system comprises... (Robertson, ¶ 98, Referring now to FIG. 9, another embodiment of the present invention provides an effective security scheduling system 900. As depicted in FIG. 9, the effective security scheduling system 900 generally includes separate modules that are interconnected to implement the steps in the effective security scheduling method 300. Specifically, the effective security scheduling system 900 includes a demand forecasting module 910. The demand forecasting modeling module 910 accepts input data related to the facility. For instance, security demand at an airport may be forecasted using flight schedules, flight capacity data, and predetermined demand distribution curves, as described above), (Id., ¶ 11, the present invention has specific application to staffing security checkpoints. 
In this embodiment, the number of needed open stations in security checkpoints is determined (discloses multi-station screening zones) by translating the variable demand for security at different times and using linear programming to optimize and determine a schedule as needed to staff the needed number of open stations), (Id., ¶ 25, Each of the stations is separately staffed with a number of employees as needed. For instance, a security station may use five employees, each manning a component of the security station (discloses multi-stage screening zones) (a walk-through metal detector, an x-ray machine, a hand-held metal detector, a station to manually search personal belongings, and an area to perform other security tests). Obviously, any number of people may be staffed to a station. A station may also be partially staffed, operating at a lower level of throughput as the security workers are required to perform more than one function. Furthermore, additional workers may be staffed to a security checkpoint to improve the throughput of that station. In this way, the capacity of security checkpoints generally corresponds to the number of security workers staffed at the security stations), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) 
based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies); retrieve a business fundamentals data set comprising at least one of: flight departure schedules, airplane capacities, and expected number of passengers (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules (discloses flight schedules) can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers (discloses airline capacities) may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers (discloses expected passengers) actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport); execute a mechanistic prediction model configured to generate an initial passenger arrival prediction based at least in part on the business fundamentals data set (Id., ¶ 42, The black-box security checkpoint model 2 (discloses mechanistic model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. 
For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. 
Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport (discloses passenger arrival prediction); retrieve observed screening data comprising a number of passengers processed at each SSCP, the observed screening data serving as a proxy for actual passenger arrivals at each SSCP (Id., ¶ 44, In a preferred embodiment of the present invention, the security checkpoint is modeled as described in co-owned U.S. patent application Ser. No. 10/293,469 entitled SECURITY CHECKPOINT SIMULATION, the disclosure of which is hereby incorporated by reference in full. U.S. patent application Ser. No. 10/293,469 provides a security checkpoint model 10, as depicted in FIG. 4D, having two or more processes, such as entering the security checkpoint in step 11, screening items in step 12, and screening people in step 13 (discloses screening passengers). This security checkpoint model is more similar to an actual security checkpoint. Each of the steps 11, 12, and 13 may be separately simulated to produce output values as described above. Thus, each of the steps 11, 12, and 13 may be separately modeled black-boxes. For instance, a user may define rules for simulating output values for each of the steps 11, 12, and 13. To model changes in the checkpoint, the values or distribution for steps 11, 12, or 13 may be adjusted. By adjusting values for separate steps, the passenger checkpoint model 10 more accurately approximates changes in a passenger checkpoint), (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules can be analyzed, step 411, to determine a total potential number of passengers. 
This capacity of passengers may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport); …estimate adjustment factors for the proxy by minimizing error between the mechanistic prediction and the observed screening data to refine passenger arrival estimations (Id., ¶ 104, Returning to FIG. 9, the adjustment module 950 collects data regarding the operation of the security checkpoint and uses this data to adjust the operations of the other modules in the effective security scheduling system 900. For instance, the adjustment module 950 may alter the assumptions used by the security demand modeling module if proper implementation of the schedule has undesirable effects, such as excessive wait times. Similarly, the adjustment module 950 may suggest changes in the operation of the schedule defining module, such as the hiring of additional workers or additional types of workers, as needed to produce more effective schedules in view of the security demand model. In the same way, changes may be made to the schedule implementation module 940 where workers are not complying with the schedule created by the scheduling module 930), (Id., ¶ 105, The adjustment module 950 may also be used by management and employees to adjust the schedule as needed. 
For example, the adjustment module 950 may accept feedback from workers to adjust the schedule, such as requests for vacation days or requests for schedule changes. Similarly, management may add additional requirements, such as additional administrative time for the employees. For instance, the workers may be required to attend training or administrative meetings. The effective scheduling device 900 may then schedule these administrative tasks during periods of excess labor capacity, when the checkpoint can spare the loss of some workers without adverse effect on the performance measures), (Id., ¶ 107, The changes in the needed number of workers over an extended period may be predicted through forecasting the needed number of security stations in step 400 and defining an effective schedule in step 800, both over the extended period of interest. For instance, the needed number of security stations at an airport may be forecasted over an extended period to form the extended needed worker graph 1000 by examining the number of flights departing from the airport, the load factors for these flights, etc. as described above in FIG. 4B and the associated text), (Id., ¶ 111, When forecasting and staffing over an extended period, such as the extended needed worker graph 1000, various planning assumptions and factors may be used. For instance, the forecast may include data related to industry growth trends, individual checkpoints, and other factors. Likewise, staffing over an extended period may consider historic demand patterns, historic staffing requirements, and individual checkpoint characteristics. 
The extended forecasts and staffing may further implement policies that may not affect short-term staffing, including security directives, staffing rules, and policy changes); and allocate TSOs to one or more of the SSCPs based at least in part on refined passenger arrival predictions, adjusting allocations in response to real-time changes in predicted passenger volumes (Id., ¶ 61, In accordance with a preferred embodiment of the present invention, as described below, the effective schedule may be formed in step 820 using linear programming to optimize a chosen value (such as minimizing labor costs or the number of work hours) according to a series of equations representing the number of employees, the conditions of work for these employees, and the desired schedule of employees needed, as depicted in FIG. 7B. Linear programming is a proven optimization technique. To optimally match employees working with employees needed over the course of a week, all feasible work tours are explicitly enumerated, and then employees are assigned to these tours. A tour is defined as a set of shifts that an employee works in a single week. The formulation of the scheduling problem is therefore a linear programming problem of the form), (Id., ¶ 93, the coefficient matrix A and the demand matrix b may be programmed into a spreadsheet or mathematical calculation program that can automatically solve the linear program. The tour assignment matrix x is optimally determined. Specifically, the matrix x may be determined using shift optimization to minimize the total number of hours worked in a week and to create the optimal number of shifts required to operate stations (using a defined mix of full and part-time employees). It should be appreciated that the tour assignment x may not be unique in that several possible tour assignments may provide desirable results. 
In this way, a particular tour assignment x may not be the best but, rather, provides a feasible staffing schedule that meets the forecasted demand levels. Furthermore, there may be no possible solution to the tour assignment x, indicating the need to make changes to the checkpoint or the workforce (e.g., hiring additional workers)). While suggested in at least Fig. 9 and related text, Robertson does not explicitly disclose …a memory to store instructions…; one or more processors configured to execute instructions stored in the memory, the one or more processors configured to…; execute a machine learning time series model configured to…; adjust the mechanistic prediction based on historical passenger data, wherein the machine learning time series model is periodically retrained based on accuracy thresholds relative to operation performance metrics. However, Benjamin discloses …a memory to store instructions…; one or more processors configured to execute instructions stored in the memory, the one or more processors configured to… (Benjamin, ¶ 52, The computing device 900 includes a processor 910, memory 920, a storage device 930, a high-speed interface/controller 940 connecting to the memory 920 and high-speed expansion ports 950, and a low speed interface/controller 960 connecting to a low speed bus 970 and a storage device 930. Each of the components 910, 920, 930, 940, 950, and 960, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 910 can process instructions for execution within the computing device 900, including instructions stored in the memory 920 or on the storage device 930 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 980 coupled to high speed interface 940. 
In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 900 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system)); execute a machine learning time series model configured to…; adjust the mechanistic prediction based on historical passenger data, wherein the machine learning time series model is periodically retrained based on accuracy thresholds relative to operation performance metrics (Id., ¶ 40, Referring now to FIG. 5, the wait time predictor 260 is configured to receive a plurality of high-level features 202 associated with a support request 120. Some of the features may be numerical features 202 (e.g., the active agent count 202a, the available agent count 202b, the queue depth 202c, and the actual wait time 202f) and some features may be string features 202 (e.g., the business ID 202d and the queue ID 202e). Using one or more of these features 202, the model 270 predicts the estimated wait time 130. When the actual wait time 202f is obtained upon a support request 120 being answered (i.e., a duration of time the support request 120 spent in the queue 400 before an agent 230 answers the support request 120), the predictor 260 may determine a loss 520 between the estimated wait time 130 and the actual wait time 202f. That is, the wait time predictor 260 may use a loss function 510 (e.g., a mean squared error loss function) to determine a loss 520 of the estimated wait time 130, where the loss 520 is a measure of how accurate the predicted wait time estimate 130 is relative to the actual wait time 202f. 
The predictor 260, in some implementations, uses the loss 520 to further train or tune the model 270), (Id., ¶ 41, the predictor 260 (or support request manager 200 or any other systems executing on the data processing hardware 144) tunes the model 270 with the loss 520 and/or any associated high-level features 202 immediately after the predictor 260 receives the actual wait time 202f of a recently answered support request 120. In other examples, the predictor 260 trains the model 270 at a configurable frequency. For example, the predictor 260 may train the model 270 once per day and the training data 202T may include all of the support requests 120 and associated features 202 that occurred that day (i.e., historical support requests 120.sub.H of FIG. 2). It is understood that the configurable frequency is not limited to once per day and may include any other period of time (e.g., once per hour, once per week, etc.). For example, the predictor 260 may train the model 270 automatically once per day (or some other predetermined period of time) to tune the model based on the prior day's data. In some implementations, the loss 520 of the tuned or retrained model 270 is compared against the loss of a previous model 270 (e.g., the model 270 trained from the previous day), and if the loss 520 of the new model 270 satisfies a threshold relative to the loss 520 of the previous model 270 (e.g., the loss 520 of the model 270 trained today versus the loss 520 of the model 270 trained yesterday), the wait time predictor 260 may revert to the previously trained model 270 (i.e., discard the newly tuned or retrained model 270). 
Put another way, if the model 270 is further trained on new training data 202T (e.g., collected from that day), but the loss 520 indicates that the accuracy of the model 270 has declined, the model 270 may revert to the previous, more accurate model 270), (Id., ¶ 34, the wait time predictor model 270 is trained on training data 202T obtained from a historical support request data store 250. The historical support request data store 250 may reside on the storage resources 146 of the distributed system 140, or may reside at some other remote location in communication with the system 140. The training data 202T includes a corpus of historical support requests 120.sub.H (also referred to as ‘training support requests 120.sub.H’), wherein each historical support request 120.sub.H includes a corresponding plurality of high-level features 202a-n and a corresponding actual wait time 203. For example, each historical support request 120.sub.H includes one or more of a number of active agents 202a, a number of available agents 202b, a queue depth 202c, a business ID 202d, a queue ID 202e, or an actual wait time 202f associated with the corresponding historical support request 120.sub.H. Here, the actual wait time 202f associated with the corresponding historical support request 120.sub.H is known since the support request 120.sub.H is “historical”, and thus, already processed by the manager 200. Thus, the actual wait time 203 associated with the corresponding historical support request 120.sub.H indicates an actual duration of time the historical support request 120.sub.H was pending before being answered. Moreover, the historical support request 120.sub.H may further include a previous actual wait time 202f associated with one or more past support requests 120 that were answered before the corresponding historical support request 120.sub.H. Actual wait times 203 are described in greater detail below with reference to FIGS. 4 and 5. 
In the example shown, the training data 202T passes to a wait time trainer 204 for training the wait time predictor model 270. Based on the training data 202T, the wait time trainer 204 is able to model support request parameters 206 to train the wait time predictor model 270. Once trained, the wait time predictor model (e.g., trained model) 270 is used by the wait time predictor 260 during inference for predicting estimated wait times 130 for corresponding pending support requests 120. Thus, using training data 202T associated with a corpus of historical support requests 120.sub.H each including a corresponding plurality of high-level features 202 and/or a known corresponding actual wait time 202f, the wait time predictor model 270 is trained to predict estimated wait times 130), (Id., ¶ 35, The wait time predictor model 270 may include a neural network. For instance, the wait time trainer 204 may map the training data 202T to output data to generate the neural network model 270. Generally, the wait time trainer 204 generates hidden nodes, weights of connections between the hidden nodes and input nodes that correspond to the training data 202T, weights of connections between the hidden nodes and output nodes, and weights of connections between layers of the hidden nodes themselves. Thereafter, the fully trained neural network model 270 may be employed against input data (e.g., pending support request 120) to generate unknown output data (e.g., the estimated wait time 130). In some examples, the neural network model 270 is a deep neural network (e.g., a regressor deep neural network) that has a first hidden layer and a second hidden layer. For example, the first hidden layer may have sixteen nodes and the second hidden layer may have eight nodes. The wait time trainer 204 typically trains the model 270 in batches. That is, a model 270 is typically trained on a group of input parameters (i.e., high-level features 202 and actual wait times 203) at a time. 
In some implementations, the trained model 270 is trained with a batch size of ten. The implementations of the wait time predictor model described herein uses pre-existing historical data, with minimal preprocessing, thereby increasing the efficacy of the deep neural network approach). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson to include the queue and wait time regression elements of Benjamin in the analogous art of predicting business-agnostic contact center expected wait times. The motivation for doing so would have been to provide an “estimated wait time to the customer to indicate an estimated duration of time until the pending support request is answered, i.e., until the customer is connected to a customer support agent. The wait time predictor model provides a business-agnostic solution that may be deployed across any business/enterprise, while easily scaling to the size of the business/enterprise in a diverse, high-volume environment with only minimal, if any, instrumentation of software support services” (Benjamin, ¶ 29), wherein such improvements would benefit Robertson’s system which seeks to “achieve numerous desired results, including lower total personnel costs; reduced numbers of full-time employees (FTE); greater diversity in the workforce (through the use of part-time or seasonal employees); improved cost effectiveness while at least maintaining the customer service level; the creation of consistency in staffing and scheduling; the development of rule-driven, repeatable schedules; maximizing employee morale; reducing costs associated with scheduling; reducing the costs of creating and maintaining schedule” [Benjamin, ¶ 29, 12; Robertson, ¶ 119]. 
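The Benjamin passages the rejection relies on (¶¶ 40-41) describe a concrete loop: compute a loss (e.g., mean squared error) between estimated and actual wait times, retrain at a configurable frequency, and revert to the previous model when the retrained model's loss indicates declining accuracy. A minimal sketch of that loop follows, using a hypothetical mean-wait predictor as a stand-in for Benjamin's neural network model 270; the data and function names are illustrative, not Benjamin's.

```python
def mse(model, requests):
    """Mean squared error of predicted vs. actual wait times (the role of loss 520)."""
    return sum((model(r["features"]) - r["actual_wait"]) ** 2
               for r in requests) / len(requests)

def train(requests):
    """Hypothetical stand-in for training model 270: predict the mean observed wait."""
    mean_wait = sum(r["actual_wait"] for r in requests) / len(requests)
    return lambda features: mean_wait

def periodic_retrain(current_model, todays_requests):
    """Retrain on the day's data; revert when the loss worsened (cf. ¶ 41)."""
    candidate = train(todays_requests)
    if mse(candidate, todays_requests) <= mse(current_model, todays_requests):
        return candidate      # keep the retrained, more accurate model
    return current_model      # discard the candidate and revert

# Illustrative data: day-1 waits cluster near 60s; day 2 shifts to ~120s.
day1 = [{"features": None, "actual_wait": w} for w in (55, 60, 65)]
day2 = [{"features": None, "actual_wait": w} for w in (115, 120, 125)]

model = train(day1)
model = periodic_retrain(model, day2)  # day-2 retrain wins, so it is kept
```

The rollback condition here compares both models on the same day's data, one reading of the "threshold relative to the loss of the previous model" language; Benjamin's actual comparison across training days may differ in detail.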
Regarding Claim 16, the combination of Robertson and Benjamin discloses …The system of claim 15… Robertson further discloses … and historical passenger data, including day of week, week of year, and time of day (Robertson, ¶ 35, The demand curve for each flight or event, such as demand curve 500 depicted in FIG. 5, is then totaled, step 415, to calculate the total number of people passing through a checkpoint at any particular time or time period. FIG. 6A depicts an exemplary total demand curve 600 having peaks around 6 AM and 6 PM. Locations, such as airports, typically have one or more peak periods during the day corresponding with periods of high traffic. In the same way, demand at a security checkpoint generally varies over longer periods, with resulting peak days, peak weeks, etc.). While suggested in at least Fig. 9 and related text, Robertson does not explicitly disclose … wherein the mechanistic prediction model comprises a combination of: a business fundamentals data set defining one or more of flight departure schedules, airplane capacities, and expected number of passengers. However, Benjamin discloses …wherein the mechanistic prediction model comprises a combination of: a business fundamentals data set defining one or more of flight departure schedules, airplane capacities, and expected number of passengers (Benjamin, ¶ 34, the wait time predictor model 270 is trained on training data 202T obtained from a historical support request data store 250. The historical support request data store 250 may reside on the storage resources 146 of the distributed system 140, or may reside at some other remote location in communication with the system 140. The training data 202T includes a corpus of historical support requests 120.sub.H (also referred to as ‘training support requests 120.sub.H’), wherein each historical support request 120.sub.H includes a corresponding plurality of high-level features 202a-n and a corresponding actual wait time 203. 
For example, each historical support request 120.sub.H includes one or more of a number of active agents 202a, a number of available agents 202b, a queue depth 202c, a business ID 202d, a queue ID 202e, or an actual wait time 202f associated with the corresponding historical support request 120.sub.H. Here, the actual wait time 202f associated with the corresponding historical support request 120.sub.H is known since the support request 120.sub.H is “historical”, and thus, already processed by the manager 200. Thus, the actual wait time 203 associated with the corresponding historical support request 120.sub.H indicates an actual duration of time the historical support request 120.sub.H was pending before being answered. Moreover, the historical support request 120.sub.H may further include a previous actual wait time 202f associated with one or more past support requests 120 that were answered before the corresponding historical support request 120.sub.H. Actual wait times 203 are described in greater detail below with reference to FIGS. 4 and 5. In the example shown, the training data 202T passes to a wait time trainer 204 for training the wait time predictor model 270. Based on the training data 202T, the wait time trainer 204 is able to model support request parameters 206 to train the wait time predictor model 270. Once trained, the wait time predictor model (e.g., trained model) 270 is used by the wait time predictor 260 during inference for predicting estimated wait times 130 for corresponding pending support requests 120. Thus, using training data 202T associated with a corpus of historical support requests 120.sub.H each including a corresponding plurality of high-level features 202 and/or a known corresponding actual wait time 202f, the wait time predictor model 270 is trained to predict estimated wait times 130), (Id., ¶ 35, The wait time predictor model 270 may include a neural network. 
For instance, the wait time trainer 204 may map the training data 202T to output data to generate the neural network model 270. Generally, the wait time trainer 204 generates hidden nodes, weights of connections between the hidden nodes and input nodes that correspond to the training data 202T, weights of connections between the hidden nodes and output nodes, and weights of connections between layers of the hidden nodes themselves. Thereafter, the fully trained neural network model 270 may be employed against input data (e.g., pending support request 120) to generate unknown output data (e.g., the estimated wait time 130). In some examples, the neural network model 270 is a deep neural network (e.g., a regressor deep neural network) that has a first hidden layer and a second hidden layer. For example, the first hidden layer may have sixteen nodes and the second hidden layer may have eight nodes. The wait time trainer 204 typically trains the model 270 in batches. That is, a model 270 is typically trained on a group of input parameters (i.e., high-level features 202 and actual wait times 203) at a time. In some implementations, the trained model 270 is trained with a batch size of ten. The implementations of the wait time predictor model described herein uses pre-existing historical data, with minimal preprocessing, thereby increasing the efficacy of the deep neural network approach). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson to include the queue and wait time regression elements of Benjamin in the analogous art of predicting business-agnostic contact center expected wait times for the same reasons as stated for claim 15. 
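For context, the regressor deep neural network Benjamin describes in ¶ 35 (two hidden layers of sixteen and eight nodes, trained in batches of ten) can be sketched in a few lines. This is an illustrative sketch only: the feature count follows the five high-level features of ¶ 34, but the activation function, weight initialization, and function names are assumptions not drawn from the reference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five numerical high-level features per request, per Benjamin ¶ 34:
# active agents, available agents, queue depth, business ID, queue ID.
N_FEATURES = 5

# Regressor DNN per ¶ 35: a sixteen-node and an eight-node hidden layer.
W1 = rng.normal(size=(N_FEATURES, 16))
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 8))
b2 = np.zeros(8)
W3 = rng.normal(size=(8, 1))
b3 = np.zeros(1)

def relu(x):
    # ReLU activation is an assumption; the reference does not name one.
    return np.maximum(x, 0.0)

def predict_wait_time(features):
    """Forward pass: feature rows -> one estimated wait time per row."""
    h1 = relu(features @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return (h2 @ W3 + b3).squeeze(-1)

# A batch of ten pending support requests (¶ 35: batch size of ten).
batch = rng.uniform(0.0, 10.0, size=(10, N_FEATURES))
estimates = predict_wait_time(batch)
print(estimates.shape)  # one scalar estimate per request in the batch
```

The weights here are untrained; Benjamin's trainer would fit them against the actual wait times 203 in the training corpus.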
Regarding Claim 17, the combination of Robertson and Benjamin discloses …The system of claim 15… Robertson further discloses …wherein the one or more processors are further configured to: retrieve a historical passenger data set comprising data for at least one or more of: historical flight departure schedules, historical airplane capacities, and historical number of passengers processed at Security Screening Checkpoints (SSCPs); (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. (discloses historical data) Then, flight or ship schedules (discloses flight schedules) can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers (discloses airplane capacities) may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport), (Id., ¶ 111, When forecasting and staffing over an extending period, such as the extended needed worker graph 1000, various planning assumptions and factors may be used. For instance, the forecast may include data related to industry growth trends, individual checkpoints, and other factors. 
Likewise, staffing over an extended period may consider historic demand patterns, historic staffing requirements, and individual checkpoint characteristics. The extended forecasts and staffing may further implement policies that may not effect short-term staffing, including security directives, staffing rules, and policy changes). While suggested in at least Fig. 9 and related text, Robertson does not explicitly disclose …and retrain the machine learning time series model to improve the prediction generated by the mechanistic prediction model, using the adjustment factors obtained from the historical passenger data set, wherein the retraining is based on accuracy thresholds relative to operational performance metrics. However, Benjamin discloses …and retrain the machine learning time series model to improve the prediction generated by the mechanistic prediction model, using the adjustment factors obtained from the historical passenger data set, wherein the retraining is based on accuracy thresholds relative to operational performance metrics (Benjamin, ¶ 29, The wait time predictor model is trained on a corpus of training support requests that each include a corresponding plurality of high-level features and a corresponding actual wait time. The wait time predictor model, after receiving the high-level features, predicts an estimated wait time for the customer of the pending support request. The system provides the estimated wait time to the customer to indicate an estimated duration of time until the pending support request is answered, i.e., until the customer is connected to a customer support agent. The wait time predictor model provides a business-agnostic solution that may be deployed across any business/enterprise, while easily scaling to the size of the business/enterprise in a diverse, high-volume environment with only minimal, if any, instrumentation of software support services), (Id., ¶ 34, Referring to FIG.
2, in some implementations, the wait time predictor model 270 is trained on training data 202T obtained from a historical support request data store 250. The historical support request data store 250 may reside on the storage resources 146 of the distributed system 140, or may reside at some other remote location in communication with the system 140. The training data 202T includes a corpus of historical support requests 120.sub.H (also referred to as ‘training support requests 120.sub.H’), wherein each historical support request 120.sub.H includes a corresponding plurality of high-level features 202a-n and a corresponding actual wait time 203. For example, each historical support request 120.sub.H includes one or more of a number of active agents 202a, a number of available agents 202b, a queue depth 202c, a business ID 202d, a queue ID 202e, or an actual wait time 202f associated with the corresponding historical support request 120.sub.H. Here, the actual wait time 202f associated with the corresponding historical support request 120.sub.H is known since the support request 120.sub.H is “historical”, and thus, already processed by the manager 200. Thus, the actual wait time 203 associated with the corresponding historical support request 120.sub.H indicates an actual duration of time the historical support request 120.sub.H was pending before being answered. Moreover, the historical support request 120.sub.H may further include a previous actual wait time 202f associated one or more past support requests 120 that were answered before the corresponding historical support request 120.sub.H. Actual wait times 203 are described in greater detail below with reference to FIGS. 4 and 5. In the example shown, the training data 202T passes to a wait time trainer 204 for training the wait time predictor model 270. Based on the training data 202T, the wait time trainer 204 is able to model support request parameters 206 to train the wait time predictor model 270. 
Once trained, the wait time predictor model (e.g., trained model) 270 is used by the wait time predictor 260 during inference for predicting estimated wait times 130 for corresponding pending support requests 120. Thus, using training data 202T associated with a corpus of historical support requests 120.sub.H each including a corresponding plurality of high-level features 202 and/or a known corresponding actual wait time 202f, the wait time predictor model 270 is trained to predict estimated wait times 130), (Id., ¶ 35, The wait time predictor model 270 may include a neural network. For instance, the wait time trainer 204 may map the training data 202T to output data to generate the neural network model 270. Generally, the wait time trainer 204 generates hidden nodes, weights of connections between the hidden nodes and input nodes that correspond to the training data 202T, weights of connections between the hidden nodes and output nodes, and weights of connections between layers of the hidden nodes themselves. Thereafter, the fully trained neural network model 270 may be employed against input data (e.g., pending support request 120) to generate unknown output data (e.g., the estimated wait time 130). In some examples, the neural network model 270 is a deep neural network (e.g., a regressor deep neural network) that has a first hidden layer and a second hidden layer. For example, the first hidden layer may have sixteen nodes and the second hidden layer may have eight nodes. The wait time trainer 204 typically trains the model 270 in batches. That is, a model 270 is typically trained on a group of input parameters (i.e., high-level features 202 and actual wait times 203) at a time. In some implementations, the trained model 270 is trained with a batch size of ten. 
The implementations of the wait time predictor model described herein uses pre-existing historical data, with minimal preprocessing, thereby increasing the efficacy of the deep neural network approach), (Id., ¶ 41, the predictor 260 (or support request manager 200 or any other systems executing on the data processing hardware 144) tunes the model 270 with the loss 520 and/or any associated high-level features 202 immediately after the predictor 260 receives the actual wait time 202f of a recently answered support request 120. In other examples, the predictor 260 trains the model 270 at a configurable frequency. For example, the predictor 260 may train the model 270 once per day and the training data 202T may include all of the support requests 120 and associated features 202 that occurred that day (i.e., historical support requests 120.sub.H of FIG. 2). It is understood that the configurable frequency is not limited to once per day and may include any other period of time (e.g., once per hour, once per week, etc.). For example, the predictor 260 may train the model 270 automatically once per day (or some other predetermined period of time) to tune the model based on the prior day's data. In some implementations, the loss 520 of the tuned or retrained model 270 is compared against the loss of a previous model 270 (e.g., the model 270 trained from the previous day), and if the loss 520 of the new model 270 satisfies a threshold relative to the loss 520 of the previous model 270 (e.g., the loss 520 of the model 270 trained today versus the loss 520 of the model 270 trained yesterday), the wait time predictor 260 may revert to the previously trained model 270 (i.e., discard the newly tuned or retrained model 270). Put another way, if the model 270 is further trained on new training data 202T (e.g., collected from that day), but the loss 520 indicates that the accuracy of the model 270 has declined, the model 270 may revert to the previous, more accurate model 270.). 
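The retrain-and-revert behavior Benjamin describes in ¶ 41 (compare the retrained model's loss against the previous model's loss and discard the retrain if accuracy declined) reduces to a simple comparison. A minimal sketch, assuming a mean squared error loss as the reference suggests; the function names, the `tolerance` parameter, and the toy one-feature models are illustrative assumptions:

```python
def mse_loss(model, examples):
    """Mean squared error between estimated and actual wait times."""
    errors = [model(features) - actual for features, actual in examples]
    return sum(e * e for e in errors) / len(errors)

def select_model(previous, retrained, held_out, tolerance=0.0):
    """Per Benjamin ¶ 41: keep the retrained model only if its loss has not
    degraded past a threshold relative to the previous model; else revert."""
    prev_loss = mse_loss(previous, held_out)
    new_loss = mse_loss(retrained, held_out)
    if new_loss > prev_loss + tolerance:
        return previous  # accuracy declined: discard the retrained model
    return retrained

# Hypothetical one-feature models: estimate wait time from queue depth.
yesterday = lambda depth: 2.0 * depth
today = lambda depth: 5.0 * depth  # a retrain that happens to be worse

held_out = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
chosen = select_model(yesterday, today, held_out)
print(chosen is yesterday)  # the worse retrain is reverted
```

In the reference this cycle runs at a configurable frequency (e.g., once per day) on that day's accumulated support requests.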
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson to include the queue and wait time training model elements of Benjamin in the analogous art of predicting business-agnostic contact center expected wait times for the same reasons as stated for claim 15. Regarding Claim 18, the combination of Robertson and Benjamin discloses …The system of claim 15… Robertson further discloses …wherein observed screening data serving as a proxy is defined by a quantity of passengers having passed through an Advanced Imaging Technology (AIT) full body scanner or a Walk Through Metal Detector (WTMD) at any of the SSCPs (Robertson, ¶ 25, Each of the stations is separately staffed with a number of employees as needed. For instance, a security station may use five employees, each manning a component of the security station (a walk-through metal detector, an x-ray machine, a hand-held metal detector, a station to manually search personal belongings, and an area to perform other security tests). Obviously, any number of people may be staffed to a station. A station may also be partially staffed, operating a lower level of throughput as the security workers are required to perform more than one function. Furthermore, additional workers may be staffed to a security checkpoint to improve the throughput of that station. In this way, the capacity of security checkpoints generally corresponds to the number of security workers staffed at the security stations). Regarding Claim 20, Robertson discloses …A method performed by a system… for predicting passenger arrivals and allocating Transportation Security Officers (TSOs) within a multi-station and multi-stage security screening area having a plurality of Security Screening Checkpoints (SSCPs), wherein the system comprises... (Robertson, ¶ 98, Referring now to FIG. 
9, another embodiment of the present invention provides an effective security scheduling system 900. As depicted in FIG. 9, the effective security scheduling system 900 generally includes separate modules that are interconnected to implement the steps in the effective security scheduling method 300. Specifically, the effective security scheduling system 900 includes a demand forecasting module 910. The demand forecasting modeling module 910 accepts input data related to the facility. For instance, security demand at an airport may be forecasted using flight schedules, flight capacity data, and predetermined demand distribution curves, as described above), (Id., ¶ 11, the present invention has specific application to staffing security checkpoints. In this embodiment, the number of needed open stations in security checkpoints is determined (discloses multi-station screening zones) by translating the variable demand for security at different times and using linear programming to optimize and determine a schedule as needed to staff the needed number of open stations), (Id., ¶ 25, Each of the stations is separately staffed with a number of employees as needed. For instance, a security station may use five employees, each manning a component of the security station (discloses multi-stage screening zones) (a walk-through metal detector, an x-ray machine, a hand-held metal detector, a station to manually search personal belongings, and an area to perform other security tests). Obviously, any number of people may be staffed to a station. A station may also be partially staffed, operating a lower level of throughput as the security workers are required to perform more than one function. Furthermore, additional workers may be staffed to a security checkpoint to improve the throughput of that station. 
In this way, the capacity of security checkpoints generally corresponds to the number of security workers staffed at the security stations), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.) based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies); retrieving… a business fundamentals data set comprising at least one of: flight departure schedules, airplane capacities, and expected number of passengers (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules (discloses flight schedules) can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers (discloses airplane capacities) may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers (discloses expected passengers) actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight.
Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport); executing… a mechanistic prediction model configured to generate an initial passenger arrival prediction based at least in part on the business fundamentals data set (Id., ¶ 42, The black-box security checkpoint model 2 (discloses mechanistic model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. 
This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport (discloses passenger arrival prediction); retrieving… observed screening data comprising a number of passengers processed at each SSCP, the observed screening data serving as a proxy for actual passenger arrivals at each SSCP (Id., ¶ 44, In a preferred embodiment of the present invention, the security checkpoint is modeled as described in co-owned U.S. patent application Ser. No. 10/293,469 entitled SECURITY CHECKPOINT SIMULATION, the disclosure of which is hereby incorporated by reference in full. U.S. patent application Ser. No. 10/293,469 provides a security checkpoint model 10, as depicted in FIG. 4D, having two or more processes, such as entering the security checkpoint in step 11, screening items in step 12, and screening people in step 13 (discloses screening passengers). This security checkpoint model is more similar to an actual security checkpoint. Each of the steps 11, 12, and 13 may be separately simulated to produce output values as described above. Thus, each of the steps 11, 12, and 13 may be separately modeled black-boxes. For instance, a user may define rules for simulating output values for each of the steps 11, 12, and 13. To model changes in the checkpoint, the values or distribution for steps 11, 12, or 13 may be adjusted. 
By adjusting values for separate steps, the passenger checkpoint model 10 more accurately approximates changes in a passenger checkpoint), (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport); …estimate adjustment factors for the proxy by minimizing error between the mechanistic prediction and the observed screening data to refine passenger arrival estimations (Id., ¶ 104, Returning to FIG. 9, the adjustment module 950 collects data regarding the operation of the security checkpoint and uses this data to adjust the operations of the other modules in the effective security scheduling system 900. For instance, the adjustment module 950 may alter the assumptions used by the security demand modeling module if proper implementation of the schedule has undesirable effects, such as excessive wait times. 
Similarly, the adjustment module 950 may suggests changes in the operation of the schedule defining module, such as the hiring of additional workers or additional types of workers, as needed to produce more effective schedules in view of the security demand model. In the same way, changes may be made to the schedule implementation module 940 where workers are not complying with the schedule created by the scheduling module 930), (Id., ¶ 105, The adjustment module 950 may also be used by management and employees to adjust the schedule as needed. For example, the adjustment module 950 may accept feedback from workers to adjust the schedule, such as requests for vacation days or requests for schedule changes. Similarly, management may add additional requirements, such as additional administrative time for the employees. For instance, the workers may be required to attend training or administrative meetings. The effective scheduling device 900 may then schedule these administrative tasks during periods of excess labor capacity, when the checkpoint can spare the loss of some workers without adversely effect to the performance measures), (Id., ¶ 107, The changes in the needed number of workers over an extended period may be predicted through the forecasting the needed number of security stations in step 400 and defining an effective schedule in step 800, both over the extended period of interest. For instance, needed number of security stations at an airport may be forecasted over an extended period to form the extended needed worker graph 1000 by examining the number of flights departing from the airport, the load factors for these flights, etc. as described above in FIG. 4B and the associated text), (Id., ¶ 111, When forecasting and staffing over an extending period, such as the extended needed worker graph 1000, various planning assumptions and factors may be used. 
For instance, the forecast may include data related to industry growth trends, individual checkpoints, and other factors. Likewise, staffing over an extended period may consider historic demand patterns, historic staffing requirements, and individual checkpoint characteristics. The extended forecasts and staffing may further implement policies that may not effect short-term staffing, including security directives, staffing rules, and policy changes); and allocating, by the processor, TSOs to one or more of the SSCPs based at least in part on refined passenger arrival predictions, adjusting allocations in response to real-time changes in predicted passenger volumes (Id., ¶ 61, In accordance with a preferred embodiment of the present invention, as described below, the effective schedule may be formed in step 820 using linear programming to optimize a chosen value (such as minimizing labor costs or the number of work hours) according to a series of equations representing to optimize the number of employees, the condition of work for these employees, and the desired scheduled of employees needed, as depicted in FIG. 7B. Linear programming is a proven optimization technique. To optimally match employees working with employees needed over the course of a week, all feasible work tours are explicitly enumerated, and then employees are assigned to these tours. A tour is defined as a set of shifts that an employee works in a single week. The formulation of the scheduling problem is therefore a linear programming problem of the form), (Id., ¶ 93, the coefficient matrix A and the demand matrix b may be programmed in to a spreadsheet or mathematical calculation program that can automatically solve the linear program. The tour assignment matrix x is optimally determined. 
Specifically, the matrix x may be determined using shift optimization to minimize the total number of hours worked in a week and to create the optimal number of shifts required to operate stations (using a defined mix of full and part-time employees). It should be appreciated that the tour assignment x may not be unique in that several possible tour assignments may provide desirable results. In this way, a particular tour assignment x may not be the best but, rather, provides a feasible staffing schedule that meets the forecasted demand levels. Furthermore, there may be no possible solution to the tour assignment x, indicating the need to make changes to the checkpoint or the workforce (e.g., hiring additional workers)). While suggested in at least Fig. 9 and related text, Robertson does not explicitly disclose …having at least a processor and a memory therein…; …by the processor…; …by the processor…; …by the processor…; executing, by the processor, a machine learning time series model configured to…; predicting passenger volumes based on historical passenger data, wherein the machine learning time series model is periodically retrained based on accuracy thresholds relative to operational performance metrics. However, Benjamin discloses … having at least a processor and a memory therein…; …by the processor…; …by the processor…; …by the processor… (Benjamin, ¶ 52, The computing device 900 includes a processor 910, memory 920, a storage device 930, a high-speed interface/controller 940 connecting to the memory 920 and high-speed expansion ports 950, and a low speed interface/controller 960 connecting to a low speed bus 970 and a storage device 930. Each of the components 910, 920, 930, 940, 950, and 960, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.
The processor 910 can process instructions for execution within the computing device 900, including instructions stored in the memory 920 or on the storage device 930 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 980 coupled to high speed interface 940. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 900 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system)); executing, by the processor, a machine learning time series model configured to…; predicting passenger volumes based on historical passenger data, wherein the machine learning time series model is periodically retrained based on accuracy thresholds relative to operational performance metrics (Benjamin, ¶ 40, Referring now to FIG. 5, the wait time predictor 260 is configured to receive a plurality of high-level features 202 associated with a support request 120. Some of the features may be numerical features 202 (e.g., the active agent count 202a, the available agent count 202b, the queue depth 202c, and the actual wait time 202f) and some features may be string features 202 (e.g., the business ID 202d and the queue ID 202e). Using one or more of these features 202, the model 270 predicts the estimated wait time 130. When the actual wait time 202f is obtained upon a support request 120 being answered (i.e., a duration of time the support request 120 spent in the queue 400 before an agent 230 answers the support request 120), the predictor 260 may determine a loss 520 between the estimated wait time 130 and the actual wait time 202f.
That is, the wait time predictor 260 may use a loss function 510 (e.g., a mean squared error loss function) to determine a loss 520 of the estimated wait time 130, where the loss 520 is a measure of how accurate the predicted wait time estimate 130 is relative to the actual wait time 202f. The predictor 260, in some implementations, uses the loss 520 to further train or tune the model 270), (Id., ¶ 41, the predictor 260 (or support request manager 200 or any other systems executing on the data processing hardware 144) tunes the model 270 with the loss 520 and/or any associated high-level features 202 immediately after the predictor 260 receives the actual wait time 202f of a recently answered support request 120. In other examples, the predictor 260 trains the model 270 at a configurable frequency. For example, the predictor 260 may train the model 270 once per day and the training data 202T may include all of the support requests 120 and associated features 202 that occurred that day (i.e., historical support requests 120.sub.H of FIG. 2). It is understood that the configurable frequency is not limited to once per day and may include any other period of time (e.g., once per hour, once per week, etc.). For example, the predictor 260 may train the model 270 automatically once per day (or some other predetermined period of time) to tune the model based on the prior day's data. In some implementations, the loss 520 of the tuned or retrained model 270 is compared against the loss of a previous model 270 (e.g., the model 270 trained from the previous day), and if the loss 520 of the new model 270 satisfies a threshold relative to the loss 520 of the previous model 270 (e.g., the loss 520 of the model 270 trained today versus the loss 520 of the model 270 trained yesterday), the wait time predictor 260 may revert to the previously trained model 270 (i.e., discard the newly tuned or retrained model 270). 
Put another way, if the model 270 is further trained on new training data 202T (e.g., collected from that day), but the loss 520 indicates that the accuracy of the model 270 has declined, the model 270 may revert to the previous, more accurate model 270), (Id., ¶ 34, the wait time predictor model 270 is trained on training data 202T obtained from a historical support request data store 250. The historical support request data store 250 may reside on the storage resources 146 of the distributed system 140, or may reside at some other remote location in communication with the system 140. The training data 202T includes a corpus of historical support requests 120.sub.H (also referred to as ‘training support requests 120.sub.H’), wherein each historical support request 120.sub.H includes a corresponding plurality of high-level features 202a-n and a corresponding actual wait time 203. For example, each historical support request 120.sub.H includes one or more of a number of active agents 202a, a number of available agents 202b, a queue depth 202c, a business ID 202d, a queue ID 202e, or an actual wait time 202f associated with the corresponding historical support request 120.sub.H. Here, the actual wait time 202f associated with the corresponding historical support request 120.sub.H is known since the support request 120.sub.H is “historical”, and thus, already processed by the manager 200. Thus, the actual wait time 203 associated with the corresponding historical support request 120.sub.H indicates an actual duration of time the historical support request 120.sub.H was pending before being answered. Moreover, the historical support request 120.sub.H may further include a previous actual wait time 202f associated one or more past support requests 120 that were answered before the corresponding historical support request 120.sub.H. Actual wait times 203 are described in greater detail below with reference to FIGS. 4 and 5. 
In the example shown, the training data 202T passes to a wait time trainer 204 for training the wait time predictor model 270. Based on the training data 202T, the wait time trainer 204 is able to model support request parameters 206 to train the wait time predictor model 270. Once trained, the wait time predictor model (e.g., trained model) 270 is used by the wait time predictor 260 during inference for predicting estimated wait times 130 for corresponding pending support requests 120. Thus, using training data 202T associated with a corpus of historical support requests 120.sub.H each including a corresponding plurality of high-level features 202 and/or a known corresponding actual wait time 202f, the wait time predictor model 270 is trained to predict estimated wait times 130), (Id., ¶ 35, The wait time predictor model 270 may include a neural network. For instance, the wait time trainer 204 may map the training data 202T to output data to generate the neural network model 270. Generally, the wait time trainer 204 generates hidden nodes, weights of connections between the hidden nodes and input nodes that correspond to the training data 202T, weights of connections between the hidden nodes and output nodes, and weights of connections between layers of the hidden nodes themselves. Thereafter, the fully trained neural network model 270 may be employed against input data (e.g., pending support request 120) to generate unknown output data (e.g., the estimated wait time 130). In some examples, the neural network model 270 is a deep neural network (e.g., a regressor deep neural network) that has a first hidden layer and a second hidden layer. For example, the first hidden layer may have sixteen nodes and the second hidden layer may have eight nodes. The wait time trainer 204 typically trains the model 270 in batches. That is, a model 270 is typically trained on a group of input parameters (i.e., high-level features 202 and actual wait times 203) at a time. 
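Benjamin ¶ 35 describes a regressor network with a sixteen-node first hidden layer and an eight-node second hidden layer. A minimal pure-Python forward pass of that shape is sketched below; the ReLU activation, the random placeholder weights, and the five-feature input are assumptions for illustration only, not the reference's implementation:

```python
import random

# Forward pass matching the shape Benjamin ¶ 35 describes:
# inputs -> 16-node hidden layer -> 8-node hidden layer -> scalar wait time.
# Weights are random placeholders; ReLU is an assumed activation.

random.seed(0)

def dense(inputs, weights, biases):
    """One fully connected layer with ReLU activation."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def init_layer(n_out, n_in):
    return ([[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

n_features = 5                       # e.g., agent counts, queue depth, prior waits
w1, b1 = init_layer(16, n_features)  # first hidden layer: sixteen nodes
w2, b2 = init_layer(8, 16)           # second hidden layer: eight nodes
w3, b3 = init_layer(1, 8)            # scalar output: estimated wait time

def predict_wait(features):
    h1 = dense(features, w1, b1)
    h2 = dense(h1, w2, b2)
    # Linear output node (no ReLU), so any real-valued estimate is representable.
    return sum(w * x for w, x in zip(w3[0], h2)) + b3[0]

estimate = predict_wait([3.0, 1.0, 7.0, 0.2, 4.5])
```

In practice the reference trains this shape in batches (batch size ten per ¶ 35), which concerns the training loop rather than the inference pass shown here.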
In some implementations, the trained model 270 is trained with a batch size of ten. The implementations of the wait time predictor model described herein uses pre-existing historical data, with minimal preprocessing, thereby increasing the efficacy of the deep neural network approach). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson to include the queue and wait time regression elements of Benjamin in the analogous art of predicting business-agnostic contact center expected wait times for the same reasons as stated for claim 15. Regarding Claim 21, Robertson discloses …A method performed by a system… for predicting passenger arrivals and allocating Transportation Security Officers (TSOs) within a multi-station and multi-stage security screening area having a plurality of Security Screening Checkpoints (SSCPs), wherein the instructions, when executed, cause the system to perform operations including: retrieving a business fundamentals data set comprising at least one of: flight departure schedules, airplane capacities, and expected number of passengers (Robertson, ¶ 98, Referring now to FIG. 9, another embodiment of the present invention provides an effective security scheduling system 900. As depicted in FIG. 9, the effective security scheduling system 900 generally includes separate modules that are interconnected to implement the steps in the effective security scheduling method 300. Specifically, the effective security scheduling system 900 includes a demand forecasting module 910. The demand forecasting modeling module 910 accepts input data related to the facility. 
For instance, security demand at an airport may be forecasted using flight schedules, flight capacity data, and predetermined demand distribution curves, as described above), (Id., ¶ 11, the present invention has specific application to staffing security checkpoints. In this embodiment, the number of needed open stations in security checkpoints is determined (discloses multi-station screening zones) by translating the variable demand for security at different times and using linear programming to optimize and determine a schedule as needed to staff the needed number of open stations), (Id., ¶ 25, Each of the stations is separately staffed with a number of employees as needed. For instance, a security station may use five employees, each manning a component of the security station (discloses multi-stage screening zones) (a walk-through metal detector, an x-ray machine, a hand-held metal detector, a station to manually search personal belongings, and an area to perform other security tests). Obviously, any number of people may be staffed to a station. A station may also be partially staffed, operating a lower level of throughput as the security workers are required to perform more than one function. Furthermore, additional workers may be staffed to a security checkpoint to improve the throughput of that station. In this way, the capacity of security checkpoints generally corresponds to the number of security workers staffed at the security stations), (Id., ¶ 41, Security checkpoints may be modeled and simulated in step 420, as depicted in FIG. 4C, using a black-box security checkpoint model 2 that receives input data 1 and produces output data 3. The input data 1 generally corresponds to the number of people 1a entering the security checkpoint. The output value 3 generally includes measurements of customer experience (such as wait time, processing time, queue length, etc.)
based on checkpoint demand, alarm rates, processing times, scheduled resources, and security policies), (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules (discloses flight schedules) can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers (discloses airline capacities) may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers (discloses expected passengers) actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport); executing a mechanistic prediction model configured to generate an initial passenger arrival prediction based at least in part on the business fundamentals data set (Id., ¶ 42, The black-box security checkpoint model 2 (discloses mechanistic model) functions as a black-box having a set of possible output values and some type of rule for selecting from the set of possible output values. For example, output data 3 may include customer wait time in the security checkpoint, where the process or service time for security checkpoint model 2 may be bounded by a minimum and a maximum time, such as 10 and 100 seconds. 
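Robertson ¶ 32's demand arithmetic (flight capacity times load factor, less passengers transferring from previous flights) reduces to a one-line computation. The sketch below restates the reference's own worked example; the function name and signature are hypothetical:

```python
def originating_passengers(capacity, load_factor, transfer_fraction):
    """Passengers expected at the checkpoint for one flight (Robertson ¶ 32):
    seats sold = capacity * load_factor; subtract transferring passengers,
    who cleared security at their originating airport."""
    sold = capacity * load_factor
    return sold * (1 - transfer_fraction)

# The reference's example: 200 seats, 75% load factor, one third transfers,
# leaving 100 passengers who pass through this airport's checkpoint.
demand = originating_passengers(200, 0.75, 1 / 3)
```

Summing this quantity over all departing flights in a time window yields the checkpoint demand curve the reference's scheduling modules consume.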
Particular process, service or activity values for each simulated person may be randomly assigned according to a statistical distribution, such as uniform, normal, Poisson distributions, etc. The particular values and distribution used in the black-box-style security checkpoint model 2 may be selected as necessary to conform to an actual security checkpoint. For instance, the actual process times at a security checkpoint may be measured to determine a minimum value, a maximum value, and a distribution of process times between these values. The customary wait time is then a function of the process time and number of resources in the checkpoint model), (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules can be analyzed, step 411, to determine a total potential number of passengers. This capacity of passengers may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. 
Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport (discloses passenger arrival prediction); retrieving observed screening data comprising a number of passengers processed at each SSCP, the observed screening data serving as a proxy for actual passenger arrivals at each SSCP (Id., ¶ 44, In a preferred embodiment of the present invention, the security checkpoint is modeled as described in co-owned U.S. patent application Ser. No. 10/293,469 entitled SECURITY CHECKPOINT SIMULATION, the disclosure of which is hereby incorporated by reference in full. U.S. patent application Ser. No. 10/293,469 provides a security checkpoint model 10, as depicted in FIG. 4D, having two or more processes, such as entering the security checkpoint in step 11, screening items in step 12, and screening people in step 13 (discloses screening passengers). This security checkpoint model is more similar to an actual security checkpoint. Each of the steps 11, 12, and 13 may be separately simulated to produce output values as described above. Thus, each of the steps 11, 12, and 13 may be separately modeled black-boxes. For instance, a user may define rules for simulating output values for each of the steps 11, 12, and 13. To model changes in the checkpoint, the values or distribution for steps 11, 12, or 13 may be adjusted. By adjusting values for separate steps, the passenger checkpoint model 10 more accurately approximates changes in a passenger checkpoint), (Id., ¶ 32, Preferably, the demand data is automatically and dynamically determined, as illustrated in FIG. 4B. In the context of an airport or seaport, the number of passengers can be estimated by connecting to reservation systems or to similar passenger record systems. Then, flight or ship schedules can be analyzed, step 411, to determine a total potential number of passengers. 
This capacity of passengers may be multiplied by a load factor (i.e., the actual percentage of seats sold) in step 412 to determine the actual number of passengers. This number is then adjusted for the number of passengers transferring from previous flights, step 413, to determine the number of passengers actually originating from the particular location and, therefore, actually passing through the security checkpoint. For example, if a flight has a capacity of 200 passengers and if the load factor is 75% (3/4), then 150 passengers should be on the flight. Of these 150 passengers, if a third (1/3) has transferred from other flights, then the remaining 100 passengers pass through the security checkpoint at that airport); …estimate adjustment factors for the proxy by minimizing error between the mechanistic prediction and the observed screening data to refine passenger arrival estimations (Id., ¶ 104, Returning to FIG. 9, the adjustment module 950 collects data regarding the operation of the security checkpoint and uses this data to adjust the operations of the other modules in the effective security scheduling system 900. For instance, the adjustment module 950 may alter the assumptions used by the security demand modeling module if proper implementation of the schedule has undesirable effects, such as excessive wait times. Similarly, the adjustment module 950 may suggests changes in the operation of the schedule defining module, such as the hiring of additional workers or additional types of workers, as needed to produce more effective schedules in view of the security demand model. In the same way, changes may be made to the schedule implementation module 940 where workers are not complying with the schedule created by the scheduling module 930), (Id., ¶ 105, The adjustment module 950 may also be used by management and employees to adjust the schedule as needed. 
For example, the adjustment module 950 may accept feedback from workers to adjust the schedule, such as requests for vacation days or requests for schedule changes. Similarly, management may add additional requirements, such as additional administrative time for the employees. For instance, the workers may be required to attend training or administrative meetings. The effective scheduling device 900 may then schedule these administrative tasks during periods of excess labor capacity, when the checkpoint can spare the loss of some workers without adversely effect to the performance measures), (Id., ¶ 107, The changes in the needed number of workers over an extended period may be predicted through the forecasting the needed number of security stations in step 400 and defining an effective schedule in step 800, both over the extended period of interest. For instance, needed number of security stations at an airport may be forecasted over an extended period to form the extended needed worker graph 1000 by examining the number of flights departing from the airport, the load factors for these flights, etc. as described above in FIG. 4B and the associated text), (Id., ¶ 111, When forecasting and staffing over an extending period, such as the extended needed worker graph 1000, various planning assumptions and factors may be used. For instance, the forecast may include data related to industry growth trends, individual checkpoints, and other factors. Likewise, staffing over an extended period may consider historic demand patterns, historic staffing requirements, and individual checkpoint characteristics. 
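The claim element "estimate adjustment factors for the proxy by minimizing error between the mechanistic prediction and the observed screening data" admits a simple closed form when the factor is a single multiplicative scale: the least-squares alpha below. Neither the claims nor the cited references specify this formulation; it is offered only as one plausible reading, with invented data:

```python
def adjustment_factor(predicted, observed):
    """Least-squares scale alpha minimizing sum((obs - alpha * pred)^2),
    which has the closed form alpha = sum(obs * pred) / sum(pred^2)."""
    num = sum(o * p for o, p in zip(observed, predicted))
    den = sum(p * p for p in predicted)
    return num / den

# Mechanistic predictions vs. throughput actually observed at an SSCP
# (observed counts run a consistent 10% above prediction in this toy data).
pred = [100.0, 120.0, 90.0]
obs = [110.0, 132.0, 99.0]
alpha = adjustment_factor(pred, obs)
refined = [alpha * p for p in pred]
```

Applying the fitted factor to future mechanistic predictions yields the "refined passenger arrival estimations" the claim recites; richer formulations (per-SSCP or time-varying factors) would follow the same minimize-the-residual pattern.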
The extended forecasts and staffing may further implement policies that may not effect short-term staffing, including security directives, staffing rules, and policy changes); and allocating TSOs to one or more of the SSCPs based at least in part on refined passenger arrival predictions, adjusting allocations in response to real-time changes in predicted passenger volumes (Id., ¶ 61, In accordance with a preferred embodiment of the present invention, as described below, the effective schedule may be formed in step 820 using linear programming to optimize a chosen value (such as minimizing labor costs or the number of work hours) according to a series of equations representing to optimize the number of employees, the condition of work for these employees, and the desired scheduled of employees needed, as depicted in FIG. 7B. Linear programming is a proven optimization technique. To optimally match employees working with employees needed over the course of a week, all feasible work tours are explicitly enumerated, and then employees are assigned to these tours. A tour is defined as a set of shifts that an employee works in a single week. The formulation of the scheduling problem is therefore a linear programming problem of the form), (Id., ¶ 93, the coefficient matrix A and the demand matrix b may be programmed in to a spreadsheet or mathematical calculation program that can automatically solve the linear program. The tour assignment matrix x is optimally determined. Specifically, the matrix x may be determined using shift optimization to minimize the total number of hours worked in a week and to create the optimal number of shifts required to operate stations (using a defined mix of full and part-time employees). It should be appreciated that the tour assignment x may not be unique in that several possible tour assignments may provide desirable results. 
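Robertson's tour-assignment formulation (¶¶ 61, 93) enumerates all feasible tours and then solves a linear program for the employee counts per tour that cover demand at minimum total hours. On a toy instance the same optimization can be done by exhaustive search, as sketched below; the demand figures and tours are invented for illustration, and a real instance would use an LP solver rather than brute force:

```python
from itertools import product

# Toy restatement of Robertson's tour assignment (¶¶ 61, 93): enumerate
# feasible tours, then choose employee counts per tour that meet per-period
# demand at minimum total hours worked.

periods = 4                          # e.g., four staffing periods in a day
demand = [2, 3, 3, 1]                # officers needed in each period
tours = [                            # feasible tours: which periods are worked
    (1, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 1),
    (1, 1, 1, 1),
]

def total_hours(counts):
    """Objective: total period-hours across all assigned employees."""
    return sum(c * sum(t) for c, t in zip(counts, tours))

def covers(counts):
    """Constraint: staffing meets demand in every period."""
    return all(sum(c * t[p] for c, t in zip(counts, tours)) >= demand[p]
               for p in range(periods))

# Exhaustive search over 0..4 employees per tour for the cheapest feasible
# assignment; as the reference notes, the optimum need not be unique.
best = min((c for c in product(range(5), repeat=len(tours)) if covers(c)),
           key=total_hours)
```

The reference also observes that the program may be infeasible, signaling a need to change the checkpoint or workforce; in this sketch that would surface as `min()` raising on an empty sequence.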
In this way, a particular tour assignment x may not be the best but, rather, provides a feasible staffing schedule that meets the forecasted demand levels. Furthermore, there may be no possible solution to the tour assignment x, indicating the need to make changes to the checkpoint or the workforce (e.g., hiring additional workers)). While suggested in at least Fig. 9 and related text, Robertson does not explicitly disclose …Non-transitory computer readable storage media having instructions stored thereupon that, when executed by a system having at least a processor and a memory therein…; executing a machine learning time series model configured to…; predicting passenger volumes based on historical passenger data, wherein the machine learning time series model is periodically retrained based on accuracy thresholds relative to operation performance metrics. However, Benjamin discloses … Non-transitory computer readable storage media having instructions stored thereupon that, when executed by a system having at least a processor and a memory therein… (Benjamin, ¶ 52, The computing device 900 includes a processor 910, memory 920, a storage device 930, a high-speed interface/controller 940 connecting to the memory 920 and high-speed expansion ports 950, and a low speed interface/controller 960 connecting to a low speed bus 970 and a storage device 930. Each of the components 910, 920, 930, 940, 950, and 960, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 910 can process instructions for execution within the computing device 900, including instructions stored in the memory 920 or on the storage device 930 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 980 coupled to high speed interface 940.
In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 900 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system)), (Id., ¶ 53, The memory 920 stores information non-transitorily within the computing device 900. The memory 920 may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory 920 may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device 900. Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM)/programmable read-only memory (PROM)/erasable programmable read-only memory (EPROM)/electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes) executing a machine learning time series model configured to…; predicting passenger volumes based on historical passenger data, wherein the machine learning time series model is periodically retrained based on accuracy thresholds relative to operation performance metrics (Id., ¶ 40, Referring now to FIG. 5, the wait time predictor 260 is configured to receive a plurality of high-level features 202 associated with a support request 120. 
Some of the features may be numerical features 202 (e.g., the active agent count 202a, the available agent count 202b, the queue depth 202c, and the actual wait time 202f) and some features may be string features 202 (e.g., the business ID 202d and the queue ID 202e). Using one or more of these features 202, the model 270 predicts the estimated wait time 130. When the actual wait time 202f is obtained upon a support request 120 being answered (i.e., a duration of time the support request 120 spent in the queue 400 before an agent 230 answers the support request 120), the predictor 260 may determine a loss 520 between the estimated wait time 130 and the actual wait time 202f. That is, the wait time predictor 260 may use a loss function 510 (e.g., a mean squared error loss function) to determine a loss 520 of the estimated wait time 130, where the loss 520 is a measure of how accurate the predicted wait time estimate 130 is relative to the actual wait time 202f. The predictor 260, in some implementations, uses the loss 520 to further train or tune the model 270), (Id., ¶ 41, the predictor 260 (or support request manager 200 or any other systems executing on the data processing hardware 144) tunes the model 270 with the loss 520 and/or any associated high-level features 202 immediately after the predictor 260 receives the actual wait time 202f of a recently answered support request 120. In other examples, the predictor 260 trains the model 270 at a configurable frequency. For example, the predictor 260 may train the model 270 once per day and the training data 202T may include all of the support requests 120 and associated features 202 that occurred that day (i.e., historical support requests 120.sub.H of FIG. 2). It is understood that the configurable frequency is not limited to once per day and may include any other period of time (e.g., once per hour, once per week, etc.). 
For example, the predictor 260 may train the model 270 automatically once per day (or some other predetermined period of time) to tune the model based on the prior day's data. In some implementations, the loss 520 of the tuned or retrained model 270 is compared against the loss of a previous model 270 (e.g., the model 270 trained from the previous day), and if the loss 520 of the new model 270 satisfies a threshold relative to the loss 520 of the previous model 270 (e.g., the loss 520 of the model 270 trained today versus the loss 520 of the model 270 trained yesterday), the wait time predictor 260 may revert to the previously trained model 270 (i.e., discard the newly tuned or retrained model 270). Put another way, if the model 270 is further trained on new training data 202T (e.g., collected from that day), but the loss 520 indicates that the accuracy of the model 270 has declined, the model 270 may revert to the previous, more accurate model 270), (Id., ¶ 34, the wait time predictor model 270 is trained on training data 202T obtained from a historical support request data store 250. The historical support request data store 250 may reside on the storage resources 146 of the distributed system 140, or may reside at some other remote location in communication with the system 140. The training data 202T includes a corpus of historical support requests 120.sub.H (also referred to as ‘training support requests 120.sub.H’), wherein each historical support request 120.sub.H includes a corresponding plurality of high-level features 202a-n and a corresponding actual wait time 203. For example, each historical support request 120.sub.H includes one or more of a number of active agents 202a, a number of available agents 202b, a queue depth 202c, a business ID 202d, a queue ID 202e, or an actual wait time 202f associated with the corresponding historical support request 120.sub.H. 
Here, the actual wait time 202f associated with the corresponding historical support request 120.sub.H is known since the support request 120.sub.H is “historical”, and thus, already processed by the manager 200. Thus, the actual wait time 203 associated with the corresponding historical support request 120.sub.H indicates an actual duration of time the historical support request 120.sub.H was pending before being answered. Moreover, the historical support request 120.sub.H may further include a previous actual wait time 202f associated one or more past support requests 120 that were answered before the corresponding historical support request 120.sub.H. Actual wait times 203 are described in greater detail below with reference to FIGS. 4 and 5. In the example shown, the training data 202T passes to a wait time trainer 204 for training the wait time predictor model 270. Based on the training data 202T, the wait time trainer 204 is able to model support request parameters 206 to train the wait time predictor model 270. Once trained, the wait time predictor model (e.g., trained model) 270 is used by the wait time predictor 260 during inference for predicting estimated wait times 130 for corresponding pending support requests 120. Thus, using training data 202T associated with a corpus of historical support requests 120.sub.H each including a corresponding plurality of high-level features 202 and/or a known corresponding actual wait time 202f, the wait time predictor model 270 is trained to predict estimated wait times 130), (Id., ¶ 35, The wait time predictor model 270 may include a neural network. For instance, the wait time trainer 204 may map the training data 202T to output data to generate the neural network model 270. 
Generally, the wait time trainer 204 generates hidden nodes, weights of connections between the hidden nodes and input nodes that correspond to the training data 202T, weights of connections between the hidden nodes and output nodes, and weights of connections between layers of the hidden nodes themselves. Thereafter, the fully trained neural network model 270 may be employed against input data (e.g., pending support request 120) to generate unknown output data (e.g., the estimated wait time 130). In some examples, the neural network model 270 is a deep neural network (e.g., a regressor deep neural network) that has a first hidden layer and a second hidden layer. For example, the first hidden layer may have sixteen nodes and the second hidden layer may have eight nodes. The wait time trainer 204 typically trains the model 270 in batches. That is, a model 270 is typically trained on a group of input parameters (i.e., high-level features 202 and actual wait times 203) at a time. In some implementations, the trained model 270 is trained with a batch size of ten. The implementations of the wait time predictor model described herein uses pre-existing historical data, with minimal preprocessing, thereby increasing the efficacy of the deep neural network approach). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to have modified the multi-station queue and wait time elements of Robertson to include the queue and wait time regression elements of Benjamin in the analogous art of predicting business-agnostic contact center expected wait times for the same reasons as stated for claim 15. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hale et al., U.S. Publication No. 2005/0065834 discloses management of the flow of passengers, baggage and cargo in relation to travel facilities. Hua et al., U.S. Publication No. 
2009/0222388 discloses a method of and system for hierarchical human/crowd behavior detection. Garg et al., U.S. Publication No. 2020/0334592 discloses a method, system, and computer program product for wait time estimation using predictive modeling. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS D BOLEN whose telephone number is (408)918-7631. The examiner can normally be reached Monday - Friday 8:00 AM - 5:00 PM PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patty Munson can be reached on (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /NICHOLAS D BOLEN/ Examiner, Art Unit 3624 /PATRICIA H MUNSON/ Supervisory Patent Examiner, Art Unit 3624

Prosecution Timeline

Sep 21, 2021
Application Filed
Feb 02, 2025
Non-Final Rejection — §101, §103, §112
Jul 01, 2025
Response Filed
Dec 30, 2025
Final Rejection — §101, §103, §112
Mar 10, 2026
Request for Continued Examination
Mar 25, 2026
Response after Non-Final Action
Mar 29, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12205077
SMART REMINDERS FOR RESPONDING TO EMAILS
2y 5m to grant Granted Jan 21, 2025
Patent 12198105
SMART REMINDERS FOR RESPONDING TO EMAILS
2y 5m to grant Granted Jan 14, 2025
Patent 12093873
USER PERFORMANCE ANALYSIS AND CORRECTION FOR S/W
2y 5m to grant Granted Sep 17, 2024
Patent 11935077
OPERATIONAL PREDICTIVE SCORING OF COMPONENTS AND SERVICES OF AN INFORMATION TECHNOLOGY SYSTEM
2y 5m to grant Granted Mar 19, 2024
Patent 11635224
OPERATION SUPPORT SYSTEM, OPERATION SUPPORT METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant Granted Apr 25, 2023
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
10%
Grant Probability
20%
With Interview (+10.5%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 122 resolved cases by this examiner. Grant probability derived from career allow rate.
