Prosecution Insights
Last updated: April 19, 2026
Application No. 16/576,449

Using Routing Rules to Generate Custom Models For Deployment as a Set

Status: Final Rejection (§103, §112)
Filed: Sep 19, 2019
Examiner: WONG, WILLIAM
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Aible Inc.
OA Round: 8 (Final)

Grant Probability: 30% (At Risk)
Projected OA Rounds: 9-10
Projected Time to Grant: 4y 11m
Grant Probability With Interview: 57%

Examiner Intelligence

Career Allow Rate: 30% (120 granted / 397 resolved; -24.8% vs TC avg)
Interview Lift: +26.9% for resolved cases with interview
Avg Prosecution: 4y 11m
Total Applications: 430 across all art units (33 currently pending)

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 23.5% (-16.5% vs TC avg)

Tech Center averages are estimates; based on career data from 397 resolved cases.

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to communications filed on 11/10/2025. Claims 16 and 29 have been canceled. Claim 30 has been added. Claims 1-15, 17-28, and 30 are pending and have been examined.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-15, 17-28, and 30 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 is amended to recite “the respective condition characterizes a resourcing level… wherein the resourcing level specifies at least one condition on an output of the predictive model”. However, the specification does not support the above features. Applicant cites paragraph 41 of the specification for alleged support, but this paragraph only describes inputs. The specification generally states “resourcing levels can provide respective conditions… In some cases, the resourcing levels can provide conditions on the output of the model” (e.g. in paragraph 62). As understood by the examiner given the context of the specification (“resourcing levels can provide respective conditions”), a resourcing level provides a “respective” condition, i.e. only one. The specification is silent as to the (singular) resourcing level specifying at least one condition (which includes plural conditions) on an output of the predictive model. As such, claim 1 lacks written description. This similarly applies to claims 17, 23, and 25. Dependent claims 2-15, 18-22, 24, 26-28, and 30 incorporate the features of corresponding independent claims, and thus also lack written description.

Response to Arguments

Applicant’s arguments with respect to the amended features have been considered but are moot in view of new grounds of rejection. See Zhang et al. (US 10719645 B1) below. However, as noted previously, London describes a “target [i.e. output] resource budget or time budget” and the model of Morris2 allows setting the noted constraints for an outputted project portfolio. Applicant’s other arguments have been fully considered but they are not persuasive.
Applicant argues that the references allegedly do not teach “wherein the predictive model includes a set of submodels, the set of submodels including a first submodel, the first value associated with the first submodel, wherein each submodel in the set of submodels is trained with the respective condition on a variable of a training data set including the resourcing level”. However, examiner respectfully disagrees. For example, Merrill teaches wherein the predictive model includes a set of submodels, the set of submodels including a first submodel (e.g. in paragraphs 27, 86 and 101, “Ensemble modeling system which is built upon one or more of submodels… ensembles of models [i.e. submodels]”), the first value associated with the first submodel (e.g. Merrill, in paragraphs 101-102, “a selector determines which model to use based on the input variables according to predetermined rules… map… determine whether to execute the corresponding sub-model based on the input variable values”) and London teaches a respective condition on a variable, wherein the respective condition characterizes a resourcing level, wherein a (sub)model is trained with the respective condition on a variable of a training data set including the resourcing level (e.g. in column 14 lines 47-59, “training request may indicate a training data source 622, a model type 624, and/or one or more constraints or preferences 626 pertaining to the training of the model… the constraints/preferences 626 may indicate that adaptive sampling of the training data is to be used. In at least one embodiment the client may indicate a target resource budget or time budget as a constraint for the training”). As such, the combination teaches the claimed features.

Applicant also argues in substance that allegedly there is no motivation to combine the references. However, examiner respectfully disagrees.
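Editorially, the selector-and-map mechanism quoted from Merrill amounts to a value-to-submodel lookup. The following is a minimal illustrative sketch of that general concept only; every name and value is hypothetical, and nothing here is drawn from the cited references or the application:

```python
# Illustrative sketch: a routing rule as a map from a variable's value
# to a submodel, plus a selector that dispatches on the input value.
# All names are hypothetical.

def make_submodel(offset):
    """Stand-in 'submodel': any callable from input to output."""
    return lambda x: x + offset

# Routing rules: each value of the routing variable maps to a submodel.
routing_rules = {
    "segment_a": make_submodel(1),  # first submodel
    "segment_b": make_submodel(2),  # second submodel
}

def deployed_model(variable_value, x):
    """Selector: use the submodel associated with the received value."""
    submodel = routing_rules.get(variable_value, make_submodel(0))
    return submodel(x)
```

In this picture, the “routing rule” is simply the dictionary entry plus the selection step; the dispute above concerns whether such a map, in combination with the other references, reaches the full claim language.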
In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, all the references pertain to constraints, optimization, etc. In particular, motivations include facilitating optimization of other well-known resources, or producing outcomes that align with enterprise needs.

Additionally, it was noted that the combination with respect to Morris2 can also be considered as amounting to a simple substitution that yields predictable results; e.g. see KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385, 1396 (2007) and MPEP § 2143(B). Morris2 is relied upon to teach a condition including a condition characterizing a resourcing level of an enterprise associated with an enterprise resource planning system. One of ordinary skill in the art would have understood that the condition (condition characterizing a resourcing level) of the combination can simply be substituted with the conditions (condition characterizing a resourcing level of an enterprise associated with an enterprise resource planning system) taught by Morris2.

With respect to claim 25, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., efficient frontier, etc.) are not recited in the rejected claim(s).
Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). In this case, the claimed limitations do not provide any details of an efficient frontier with respect to the cost-benefit tradeoff, such as noted by applicant with respect to application 16/512647. Heckerman teaches “evaluate the tradeoff between the expected incremental cost of additional training and the expected incremental benefit of increasing the size of the considered data subset by going from subset D.sub.n to subset D.sub.n+1… models 62 trained by…training algorithm… building a refined statistical model…based at least in part on an associated training policy that includes determining acceptability based at least in part on an expected incremental benefit relative to an expected incremental cost associated with increasing the size of the aggregate data set in order to facilitate reducing cost associated with clustering data relative to the computer readable data set” (e.g. in column 5 line 58 – column 7 line 56 and claim 39), which reads on training with a respective cost-benefit tradeoff as claimed. As such, applicant’s arguments are not persuasive.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 8-10, 17-18, 22, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1).

As per independent claim 1, Merrill teaches a method comprising: receiving user input specifying a first selection of a first value of a variable of a dataset (e.g.
in paragraphs 27-28, “receives a first set of input variable values for a first data set… input variable values by accessing user selection of a set of variables of the first set of input variables to be modified individually”), the variable including a set of values associated with a predictive model of an enterprise system (e.g. in paragraphs 3 and 27, “modeling system receives [the] first set of input variable values” and “Business… ensembled models… predictive power”); wherein the predictive model includes a set of submodels, the set of submodels including a first submodel (e.g. in paragraphs 86 and 101, “Ensemble modeling system which is built upon one or more of submodels… ensembles of models [i.e. submodels]”), the first value associated with the first submodel (e.g. Merrill, in paragraphs 101-102, “a selector determines which model to use based on the input variables according to predetermined rules… map… determine whether to execute the corresponding sub-model based on the input variable values”); determining a first routing rule specifying use of the first submodel associated with the selected first value when the model receives the selected first value as input (e.g. in paragraph 101, “a selector determines which model to use based on the input variables according to predetermined rules”), wherein the variable identifies a subgroup of a population within the dataset (e.g. in paragraph 100, “a subset of input variable values”); and deploying the model with the first routing rule (e.g. in paragraph 101, using modeling system “according to predetermined rules”), but does not specifically teach the predictive model of an enterprise resource planning system, receiving user input specifying a respective condition on a variable of the dataset, wherein the respective condition characterizes a resourcing level of an enterprise associated with the enterprise resource planning system, wherein the resourcing level specifies at least one condition on an output of the predictive model, wherein the output of the predictive model comprises one or more output values, wherein each submodel in the set of submodels is trained with the respective condition on a variable of a training data set including the resourcing level and the first routing rule characterizing an order of priority for the set of submodels such that the first submodel has a higher priority for the subgroup of the population than a second submodel for the subgroup of the population; assessing, after deployment of the model, performance of the first submodel and the second submodel with respect to the subgroup of the population; and modifying, after deployment of the model, the first routing rule to adjust the order of priority for the set of submodels based on the second submodel outperforming the first submodel for the subgroup of the population.

However, Morris teaches a predictive model of an enterprise resource planning system (e.g. in paragraphs 20, 27, and 35, “business-related… prediction [associated with] resource utilization, operational costs”, etc.) and determining a first routing rule characterizing an order of priority for a set of submodels such that a first submodel has a higher priority for a subgroup of a population than a second submodel for the subgroup of the population (e.g. in paragraphs 73-74, 80, and 84-85, “indicates that the performance of the newly trained predictive model 194 is better for predicting the operational outcome of interest than the predictive model 194 currently in use [i.e. priority order]… allow the selection of the sub-model or combination of sub-models that perform best at modeling the outcome of interest… where only a subset of parameters (e.g., a subset of sensor data type) from a complete set of parameters available that provides the greatest sensitivity in predicting operational outcomes of interest may be used” and/or “combine the outputs of the sub-models using various weights…for the purposes of predicting any variety of operational outcomes of interest [i.e. priority order]”), wherein each submodel in the set of submodels is trained (e.g. in paragraph 73, “predictive models 194 are…trained 192 using data”), assessing, after deployment of the model, performance of the first submodel and the second submodel with respect to the subgroup of the population and modifying, after deployment of the model, the first routing rule to adjust the order of priority for the set of submodels based on the second submodel outperforming the first submodel for the subgroup of the population (e.g. in paragraphs 74, 88-89, 103, 118, and 125, “super-model may be deployed… continuously or periodically update the sub-models and/or the super-model based on features evaluated… test/tune the sub-models and/or the weights ascribed to the sub-models for the super-model” with “indicates that the performance of…predictive model 194 is better for predicting the operational outcome of interest than the predictive model 194 currently in use” and/or “plurality of super-models 330 can be deployed to provide predictions regarding the operational outcome of interest. In various embodiments, the super-models can compete with each other over some interval of time to determine which performs the best and select that super-model 330 for use. The competition can be repeated at defined intervals to make sure the most effective super-model 330 is being used to provide the predicted output”; note: in this case, “model” is interpreted as the plurality of super-models and “submodel” is interpreted as a super-model).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Merrill to include the teachings of Morris because one of ordinary skill in the art would have recognized the benefit of using the most effective (sub)model, but does not specifically teach receiving user input specifying a respective condition on a variable of the dataset, wherein the respective condition characterizes a resourcing level of an enterprise associated with the enterprise resource planning system, wherein the resourcing level specifies at least one condition on an output of the predictive model, wherein the output of the predictive model comprises one or more output values, wherein each submodel in the set of submodels is trained with the respective condition on a variable of a training data set including the resourcing level.

However, London teaches receiving user input specifying a respective condition on a variable of the dataset, wherein the respective condition characterizes a resourcing level, wherein a (sub)model is trained with the respective condition on a variable of a training data set including the resourcing level (e.g. in column 14 lines 47-59, “training request may indicate a training data source 622, a model type 624, and/or one or more constraints or preferences 626 pertaining to the training of the model… the constraints/preferences 626 may indicate that adaptive sampling of the training data is to be used. In at least one embodiment the client may indicate a target resource budget or time budget as a constraint for the training”).
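Editorially, the post-deployment limitation at issue (a second submodel overtaking the first in priority for a subgroup, based on observed performance) can be pictured as a priority list revised by a monitoring step. The sketch below is illustrative only; all names and metrics are hypothetical and not taken from any cited reference:

```python
# Illustrative sketch: a routing rule as a priority-ordered list of
# submodels per subgroup, revised after deployment when a lower-priority
# submodel outperforms the current leader. Hypothetical names throughout.

routing_rule = {"subgroup_1": ["submodel_a", "submodel_b"]}  # priority order

def reassess(rule, subgroup, performance):
    """Promote the best-performing submodel for this subgroup."""
    order = rule[subgroup]
    best = max(order, key=lambda m: performance[m])
    if best != order[0]:
        order.remove(best)
        order.insert(0, best)  # adjust the order of priority

# Post-deployment metrics show submodel_b outperforming submodel_a:
observed = {"submodel_a": 0.71, "submodel_b": 0.84}
reassess(routing_rule, "subgroup_1", observed)
```

After the reassessment, the hypothetical `routing_rule` lists `submodel_b` first for the subgroup, which is the shape of the “modifying, after deployment” step the examiner maps onto Morris's competing super-models.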
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of London because one of ordinary skill in the art would have recognized the benefit of allowing the learning process to be controlled, but does not specifically teach wherein the respective condition characterizes a feature including a resourcing level of an enterprise associated with the enterprise resource planning system, wherein the resourcing level specifies at least one condition on an output of the predictive model, wherein the output of the predictive model comprises one or more output values.

However, Morris2 teaches a respective condition characterizing a feature including a resourcing level of an enterprise associated with an enterprise resource planning system (e.g. in column 1 lines 53-63, column 4 lines 34-67, column 8 lines 38-67, column 10 lines 47-60, and column 16 lines 30-35, “take into account both objective (hard) constraints (e.g., available resources, required start/end dates, risk, criticality, cost and return on investment weightings, etc.) and subjective (soft) constraints (e.g., specified named resources to work on projects, tolerances in delivering on all projects and meeting the required resourcing levels, etc.) to ensure that any outcome is fully aligned with the needs of the business… a "resource vs cost" criterion (293) for allowing a user to instruct the system to allocate resources and/or schedule projects without considering cost or resource utilization… an "individual resource utilization" criterion (295) for allowing a user to specify the percentage of resource utilization per project, and a "total resource utilization" criterion (296) for allowing a user to specify the percentage of resource utilization across all projects in a portfolio [etc.]… scenarios” and figure 1B).
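Editorially, a “resourcing level” that “specifies at least one condition on an output of the predictive model” can be pictured as a capacity cap applied to the model's scored outputs. This sketch is purely illustrative of the claim language; the names, scores, and cap are hypothetical and not drawn from Morris2, Zhang, or the application:

```python
# Illustrative sketch: a resourcing level as a condition on model output,
# here a capacity cap limiting how many scored items may be selected.
# All names and values are hypothetical.

def apply_resourcing_level(scored_outputs, capacity):
    """Condition on the output: keep only the top `capacity` items,
    mimicking a resource budget (e.g. staff available to act on leads)."""
    ranked = sorted(scored_outputs, key=lambda item: item[1], reverse=True)
    return [name for name, _ in ranked[:capacity]]

# Model output: (item, score) pairs.
scores = [("lead_1", 0.9), ("lead_2", 0.4), ("lead_3", 0.7)]
# A resourcing level of 2 constrains the output to the two best leads.
selected = apply_resourcing_level(scores, 2)
```

The §112 dispute above is not about whether such a mechanism is implementable, but whether the specification shows the inventors possessed a resourcing level imposing one or more conditions on the model's output.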
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Morris2 because one of ordinary skill in the art would have recognized the benefit of facilitating optimization of other well-known resources and/or producing outcomes that align with enterprise needs (note: this also amounts to a simple substitution that yields predictable results; e.g. see KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385, 1396 (2007) and MPEP § 2143(B); condition characterizing a resourcing level of the enterprise associated with the enterprise resource planning system as the condition), but does not specifically teach, as a whole, wherein the feature specifies at least one condition on an output of the predictive model, wherein the output of the predictive model comprises one or more output values.

However, Zhang teaches a feature specifying at least one condition on an output of a model, wherein the output of the model comprises one or more output values (e.g. in column 8 line 61 – column 9 line 16 and column 10 lines 54-59, “constraint may limit or constrain an execution of the model… limit…to a particular range of values… constrain one or more outputs of the model… model…generating output values based on the set of input values”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Zhang because one of ordinary skill in the art would have recognized the benefit of fully controlling constraints on a model.

As per claim 2, the rejection of claim 1 is incorporated and the combination further teaches receiving the dataset, the dataset including the variable, the variable including the set of values (e.g.
Merrill, in paragraph 27); training, using the dataset, a first candidate model and a second candidate model (e.g. Morris, in paragraphs 56, 76, and 112, “data set…used to train and validate 192 the predictive models”); determining a first performance of the first candidate model based on output of the first candidate model when the first value is provided as input to the first candidate model and determining a second performance of the second candidate model based on output of the second candidate model when the first value is provided as input to the second candidate model (e.g. Morris, in paragraphs 74 and 76, “If the validation indicates that the performance of the newly trained predictive model 194 is better for predicting the operational outcome of interest than the predictive model 194 currently in use, this validation can be used as a basis for deploying the newly trained predictive model 194 to replace the predictive model 194 that is currently being used”).

As per claim 8, the rejection of claim 1 is incorporated and the combination further teaches wherein the input is received from a user, an application, a process, or a data source (e.g. Merrill, in paragraphs 27-28, “user selection”).

As per claim 9, the rejection of claim 1 is incorporated and the combination further teaches receiving data characterizing a first input to the model deployed with the first routing rule, the first input including the first value (e.g. Merrill, in paragraphs 27 and 101); determining, based on the first routing rule, use of the first submodel in response to receiving the first value as input to the model (e.g. Merrill, in paragraph 101, “a selector determines which model to use based on the input variables according to predetermined rules”); determining, using the first input, a first output of the first submodel associated with the first value and providing the first output of the first submodel as output of the model (e.g. Merrill, in paragraph 60, “generate the output explanation information for the original input variable values”).

As per claim 10, the rejection of claim 9 is incorporated and the combination further teaches wherein providing the first output includes transmitting, persisting, or displaying the first output (e.g. Merrill, in paragraphs 60-61).

Claims 17-18 are the system claims corresponding to method claims 1-2 and are rejected for the same reasons set forth, and the combination further teaches at least one data processor and memory storing instructions which when executed by the at least one data processor causes the at least one data processor to perform operations (e.g. Merrill, in paragraphs 170 and 173).

As per claim 22, the rejection of claim 1 is incorporated and the combination further teaches wherein the first routing rule characterizing the order of priority indicates that the first submodel is to be used to assess input data belonging to the first subgroup, and the modified first routing rule characterizing the modified order of priority indicates that the second submodel is to be used to assess input data belonging to the first subgroup (e.g. Morris, in paragraphs 74, 88-89, 103, 118, and 125, “super-model may be deployed… continuously or periodically update the sub-models and/or the super-model based on features evaluated… test/tune the sub-models and/or the weights ascribed to the sub-models for the super-model” with “indicates that the performance of…predictive model 194 is better for predicting the operational outcome of interest than the predictive model 194 currently in use” and/or “plurality of super-models 330 can be deployed to provide predictions regarding the operational outcome of interest. In various embodiments, the super-models can compete with each other over some interval of time to determine which performs the best and select that super-model 330 for use. The competition can be repeated at defined intervals to make sure the most effective super-model 330 is being used to provide the predicted output”; note: in this case, “model” reads on the plurality of super-models and “submodel” reads on a super-model).

As per claim 30, the rejection of claim 1 is incorporated and the combination further teaches wherein the first routing rule is in a set of routing rules comprising a map associating a value of the variable with a respective submodel of the set of submodels (e.g. Merrill, in paragraphs 101-102, “a selector determines which model to use based on the input variables according to predetermined rules… map… determine whether to execute the corresponding sub-model based on the input variable values”).

Claims 3-5 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Rotenberg (US 20120030074 A1).

As per claim 3, the rejection of claim 2 is incorporated and the combination further teaches determining that the first performance is greater than the second performance and associating, in response to determining that the first performance is greater than the second performance, the first candidate model with the first value (e.g. Merrill, in paragraph 101, “determines which model to use based on the input variables”; Morris, in paragraphs 74 and 76, “If the validation indicates that the performance of the newly trained predictive model 194 is better for predicting the operational outcome of interest than the predictive model 194 currently in use, this validation can be used as a basis for deploying the newly trained predictive model 194 to replace the predictive model 194 that is currently being used”), wherein the first candidate model is included in the model as the first submodel (e.g. Merrill, in paragraphs 86 and 101), but does not specifically teach displaying, within a graphical user interface display space, a first icon associated with the first value, the first icon including a first characteristic representative of the first performance. However, Rotenberg teaches displaying, within a graphical user interface display space, a first icon associated with the first value, the first icon including a first characteristic representative of a performance (e.g. in paragraph 56 and claim 1, “color and size of each sphere communicate to the user the value of parameters of a corresponding asset… Green color indicates good performance, yellow color indicates moderate performance and red color is bad performance”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Rotenberg because one of ordinary skill in the art would have recognized the benefit of allowing a user to easily understand performance.

As per claim 4, the rejection of claim 3 is incorporated and the combination further teaches wherein the set of values includes a second value (e.g. Merrill, in paragraph 27, use other input values), the method further comprising: determining a third performance of the first candidate model based on output of the first candidate model when the second value is provided as input to the first candidate model (e.g. Merrill, in paragraph 101, “determines which model to use based on the input variables” of the ensemble; Morris, in paragraphs 74 and 76, determines “performance” of a model); determining a fourth performance of the second candidate model based on output of the second candidate model when the second value is provided as input to the second candidate model (e.g. Merrill, in paragraph 101, “determines which model to use based on the input variables” of the ensemble; Morris, in paragraphs 74 and 76, determines “performance” of another model); determining that the fourth performance is greater than the third performance and associating, in response to determining that the fourth performance is greater than the third performance, the second candidate model with the second value (e.g. Merrill, in paragraph 101, “determines which model to use based on the input variables” of the ensemble; Morris, in paragraphs 74 and 76, “If the validation indicates that the performance of the newly trained predictive model 194 is better for predicting the operational outcome of interest than the predictive model 194 currently in use, this validation can be used as a basis for deploying the newly trained predictive model 194 to replace the predictive model 194 that is currently being used”), but does not specifically teach displaying, within the graphical user interface display space, a second icon associated with the second value, the second icon including a characteristic representative of the fourth performance, wherein the first characteristic and the second characteristic include size, color, shape, position, opacity, alignment, shading, origin, border, font, margin, or padding.
However, Rotenberg teaches displaying, within a graphical user interface display space, a second icon associated with a second value, the second icon including a characteristic representative of another performance, wherein a first characteristic and a second characteristic include size, color, shape, position, opacity, alignment, shading, origin, border, font, margin, or padding (e.g. in paragraph 56 and claim 1, “color and size of each sphere communicate to the user the value of parameters of a corresponding asset… Green color indicates good performance, yellow color indicates moderate performance and red color is bad performance”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Rotenberg because one of ordinary skill in the art would have recognized the benefit of allowing a user to easily understand performance.

As per claim 5, the rejection of claim 4 is incorporated and the combination further teaches receiving input specifying a second selection of the second value (e.g. Merrill, in paragraph 27, use other input values); and determining a second routing rule specifying use of the second candidate model associated with the selected second value in response to receiving the selected second value as input to the model (e.g. Merrill, in paragraphs 86 and 101, determine another model of the ensemble “to use based on the [other] input variables according to predetermined rules”), wherein the model is deployed with the first routing rule and the second routing rule (e.g. Merrill, in paragraphs 27 and 101, modeling system “according to predetermined rules”), and wherein the set of submodels includes the second candidate model (e.g. Merrill, in paragraphs 86 and 101, “Ensemble modeling system which is built upon one or more of submodels… ensembles of models [i.e. submodels]”).
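Editorially, the claims 3-5 logic of associating each value of the routing variable with whichever candidate model performs better on that value can be sketched in a few lines. The names and performance table below are hypothetical, for illustration only:

```python
# Illustrative sketch: build routing associations by comparing candidate
# models' per-value performance and keeping the better candidate for
# each value. All names and metrics are hypothetical.

# Toy performance table: candidate -> {routing value -> metric}.
perf = {
    "candidate_1": {"v1": 0.80, "v2": 0.60},
    "candidate_2": {"v1": 0.75, "v2": 0.90},
}

def associate_best(values, performance):
    """For each value, associate the candidate with the higher metric."""
    return {v: max(performance, key=lambda c: performance[c][v])
            for v in values}

rules = associate_best(["v1", "v2"], perf)
```

Here `candidate_1` wins for `v1` and `candidate_2` for `v2`, yielding one routing association per value, which is the structure the examiner reads onto the claim 3/claim 5 combination of Merrill's selector and Morris's validation-based model comparison.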
Claims 19-20 are the system claims corresponding to method claims 3-4, and are rejected under the same reasons set forth.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Muramatsu et al. (US 20060136567 A1).

As per claim 6, the rejection of claim 1 is incorporated and the combination further teaches integrating the model into an event-driven computing environment (e.g. Merrill, in paragraph 43, “API… explanation generator 190…is communicatively coupled to the modeling system 110”); and providing a network interface as an entry point for the model in the event-driven computing environment (e.g. Merrill, in paragraph 43, “explanation generator 190…is communicatively coupled to the modeling system 110 via a private network”), wherein the event-driven computing environment facilitates receiving an input value in the set of values and providing the input value as input to the model (e.g. Merrill, in paragraph 43, “provide at least one modified set of input variable values to the modeling system 110”), but does not specifically teach with a private internet protocol address. However, a private internet protocol address for networking was well known in the art, as shown by Muramatsu (e.g. in paragraph 9). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Muramatsu because one of ordinary skill in the art would have recognized the benefit of facilitating identifying nodes in a network.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. 
(US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Labrecque et al. (US 20190069124 A1).

As per claim 7, the rejection of claim 1 is incorporated, but the combination does not specifically teach wherein the deploying further comprises: encapsulating the model and the first routing rule in a virtual container configured to share a kernel, binaries, and libraries with a host; and providing the virtual container. However, the combination teaches resources including a model and a first routing rule (e.g. Merrill, in paragraph 101) and Labrecque teaches encapsulating resources in a virtual container configured to share a kernel, binaries, and libraries with a host and providing the virtual container (e.g. in paragraph 36). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Labrecque because one of ordinary skill in the art would have recognized the benefit of incorporating well-known software components (also amounts to a simple substitution that yields predictable results; e.g. see KSR Int'l Co v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP § 2143(B)).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Barsoum et al. (US 20140279754 A1).

As per claim 11, the rejection of claim 1 is incorporated and the combination further teaches wherein determining the first routing rule further comprises: parsing an input signal for the first value (e.g. 
Merrill, in paragraph 27), but does not specifically teach filtering, using the parsed first value, the dataset for records of the dataset including the parsed first value; and associating the filtered records with the first submodel. However, Barsoum teaches filtering, using a parsed first value, a dataset for records of the dataset including the parsed first value and associating the filtered records with a first entity (e.g. in paragraph 24, filter “patients having varying situations… patients with a diagnosed heart condition might have their own set of models”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Barsoum because one of ordinary skill in the art would have recognized the benefit of associating a model(s) with appropriate data.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Chu et al. (US 8214308 B2).

As per claim 12, the rejection of claim 1 is incorporated, but the combination does not specifically teach the monitoring the deployed model over time at least by determining a first performance of the model at a first time interval, determining a second performance of the model at a second time interval, and comparing the first performance and the second performance. However, Chu teaches monitoring a deployed model over time at least by determining a first performance of the model at a first time interval, determining a second performance of the model at a second time interval, and comparing the first performance and the second performance (e.g. 
in column 6 lines 28-30 and claim 1, “Charts showing how the model degradation evolves over time are useful for the model monitoring process… comparing, using the one or more data processors, the baseline performance metric [associated with “a time period”] and the updated performance metric [associated with “a new time period that is later than the time period”] to determine an indication of predictive ability decay for the predictive model”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Chu because one of ordinary skill in the art would have recognized the benefit of determining performance decay over time.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Fulshaw et al. (US 20100153298 A1).

As per claim 13, the rejection of claim 1 is incorporated, but the combination does not specifically teach wherein the input specifying the first selection is received via a slider provided within a graphical user interface display space; and wherein the slider is configured to adjust the first value at least by a percentage increase or a percentage decrease. However, Fulshaw teaches input specifying a first selection received via a slider provided within a graphical user interface display space and wherein the slider is configured to adjust the first value at least by a percentage increase or a percentage decrease (e.g. in paragraph 73). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Fulshaw because one of ordinary skill in the art would have recognized the benefit of facilitating user input of a value.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), Zhang et al. (US 10719645 B1), and Fulshaw et al. (US 20100153298 A1) as applied above, and further in view of Barsoum et al. (US 20140279754 A1).

As per claim 14, the rejection of claim 13 is incorporated, but the combination does not specifically teach receiving, in response to receiving the input specifying the first selection via the slider, input specifying training the model; partitioning, in response to receiving the input specifying training the model, the dataset on the first value of the variable; and training, in response to partitioning the dataset, the first submodel on a partition of the dataset including the first value of the variable. However, the combination teaches receiving input including receiving the input specifying a first selection via a slider (e.g. Fulshaw, in paragraph 73) and Barsoum teaches receiving input specifying training a model (e.g. in paragraph 13); partitioning, in response to receiving the input specifying training the model, the dataset on the first value of the variable (e.g. in paragraph 24); and training, in response to partitioning the dataset, a first submodel on a partition of the dataset including the first value of the variable (e.g. in paragraph 13). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Barsoum because one of ordinary skill in the art would have recognized the benefit of facilitating a self-evolving model that remains relevant.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Heckerman et al. (US 7409371 B1).

As per claim 15, the rejection of claim 1 is incorporated and the combination further teaches receiving input specifying an operational constraint (e.g. London, in column 14 lines 47-59, “training request may indicate a training data source 622, a model type 624, and/or one or more constraints or preferences 626 pertaining to the training of the model… the constraints/preferences 626 may indicate that adaptive sampling of the training data is to be used. In at least one embodiment the client may indicate a target resource budget or time budget as a constraint for the training”); and associating the first submodel with the operational constraint (e.g. London, in column 14 lines 47-59, “training request may indicate a training data source 622, a model type 624, and/or one or more constraints or preferences 626 pertaining to the training of the model… the constraints/preferences 626 may indicate that adaptive sampling of the training data is to be used. In at least one embodiment the client may indicate a target resource budget or time budget as a constraint for the training”), wherein the first routing rule further specifies use of the first submodel associated with the operational constraint (e.g. 
Merrill, in paragraph 101, “a selector determines which model to use based on the input variables according to predetermined rules”), but does not specifically teach and a cost-benefit tradeoff. However, Heckerman teaches a (sub)model being trained with a respective cost-benefit tradeoff (e.g. in column 5 line 58 – column 7 line 56 and claim 39, “a stopping criterion 68 to evaluate the tradeoff between the expected incremental cost of additional training and the expected incremental benefit of increasing the size of the considered data subset by going from subset D.sub.n to subset D.sub.n+1… models 62 trained by…training algorithm… building a refined statistical model…based at least in part on an associated training policy that includes determining acceptability based at least in part on an expected incremental benefit relative to an expected incremental cost associated with increasing the size of the aggregate data set in order to facilitate reducing cost associated with clustering data relative to the computer readable data set”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Heckerman because one of ordinary skill in the art would have recognized the benefit of building more efficient (sub)models.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Pednault et al. (US 20030176931 A1) and Julien et al. (US 20070168334 A1).

As per claim 21, the rejection of claim 1 is incorporated, but the combination does not specifically teach providing a prompt to split the model based on the order of priority. 
However, Pednault teaches splitting a model based on an order of priority (e.g. in paragraphs 32-35 and 184, and figure 1 showing conditional statements with priority order splitting model to form a tree). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Pednault because one of ordinary skill in the art would have recognized the benefit of determining an appropriate entity, but does not specifically teach providing a prompt to split the model. However, Julien teaches providing a prompt to split a model (e.g. in paragraphs 47-48, “user can split one table into multiple smaller tables to improve performance… [provided with] the modeling tool 132 [i.e. prompt] to annotate the… split in the data model”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Julien because one of ordinary skill in the art would have recognized the benefit of facilitating user control.

Claims 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Heckerman et al. (US 7409371 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1).

As per independent claim 23, Merrill teaches a method comprising: receiving user input specifying a first selection of a first value of a variable of a dataset (e.g. in paragraphs 27-28, “receives a first set of input variable values for a first data set… input variable values by accessing user selection of a set of variables of the first set of input variables to be modified individually”), determining a first routing rule specifying use of a first candidate model associated with the selected first value (e.g. 
in paragraph 101, “a selector determines which model to use based on the input variables according to predetermined rules”); and deploying a model with the first routing rule (e.g. in paragraph 101, using modeling system “according to predetermined rules”), the model including a set of submodels including the first candidate submodel (e.g. in paragraphs 86 and 101, “Ensemble modeling system which is built upon one or more of submodels… ensembles of models[i.e. submodels]”), but does not specifically teach the variable characterizing a resourcing level of an enterprise associated with an enterprise resource planning system, the resourcing level specifying at least one condition on an output of a predictive model, wherein the output of the predictive model comprises one or more output values; receiving input specifying a cost-benefit tradeoff; training, using the dataset and the resourcing level, a first candidate model and a second candidate model; determining, prior to deployment of the model, a first performance of the first candidate model based on output of the first candidate model when the first value is provided as input to the first candidate model, wherein the first performance is determined based on a first impact; determining, prior to deployment of the model, a second performance of the second candidate model based on output of the second candidate model when the first value is provided as input to the second candidate model, wherein the second performance is determined based on a second impact, wherein the first impact and the second impact are a function of at least one of a number of true positives, a value associated with a true positive, a number of true negatives, a value associated with a true negative, a number of false positives, a value associated with a false positive, a number of false negatives, and a value associated with a false negative; determining that the first performance is greater than the second performance; associating, prior to 
deployment of the model and in response to determining that the first performance is greater than the second performance, the first candidate model with the first value. However, Morris teaches training, using a dataset, a first candidate model and a second candidate model (e.g. in paragraphs 56, 76, and 112, “data set…used to train and validate 192 the predictive models”); determining, prior to deployment of the model, a first performance of the first candidate model based on output of the first candidate model when a first value is provided as input to the first candidate model (e.g. in paragraphs 73-74, “validation indicates…the performance of the…predictive model” and figure 4 showing a first candidate model), wherein the first performance is determined based on a first impact (e.g. in paragraphs 73-74, 111-112, and 127, “performance of the…predictive model… performance targets (e.g., performance threshold for false negatives and false positives)… one or more model performance parameters associated with each of the plurality of sub-models 310 can be identified. These performance parameters may include, e.g., a false positive or false negative acceptable range”, i.e. impact); determining, prior to deployment of the model, a second performance of the second candidate model based on output of the second candidate model when the first value is provided as input to the second candidate model (e.g. in paragraphs 73-74, “validation indicates…the performance of the…predictive model” and figure 4 showing a second candidate model), wherein the second performance is determined based on a second impact (e.g. in paragraphs 73-74, 111-112, and 127, “performance of the…predictive model… performance targets (e.g., performance threshold for false negatives and false positives)… one or more model performance parameters associated with each of the plurality of sub-models 310 can be identified. 
These performance parameters may include, e.g., a false positive or false negative acceptable range”, i.e. impact), wherein the first impact and the second impact are a function of at least one of a number of true positives, a value associated with a true positive, a number of true negatives, a value associated with a true negative, a number of false positives, a value associated with a false positive, a number of false negatives, and a value associated with a false negative (e.g. Morris, in paragraphs 73-74, 111-112, and 127, “performance of the…predictive model… performance targets (e.g., performance threshold for false negatives and false positives)… one or more model performance parameters associated with each of the plurality of sub-models 310 can be identified. These performance parameters may include, e.g., a false positive or false negative acceptable range”, i.e. impacts); determining that the first performance is greater than the second performance and associating, prior to deployment of the model and in response to determining that the first performance is greater than the second performance, the first candidate model with the first value (e.g. in paragraphs 74 and 76, “If the validation indicates that the performance of the newly trained predictive model 194 is better for predicting the operational outcome of interest than the predictive model 194 currently in use, this validation can be used as a basis for deploying the newly trained predictive model 194 to replace the predictive model 194 that is currently being used”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Merrill to include the teachings of Morris because one of ordinary skill in the art would have recognized the benefit of using the most effective model, but does not specifically teach the variable characterizing a resourcing level of an enterprise associated with an enterprise resource planning system, the resourcing level specifying at least one condition on an output of a predictive model, wherein the output of the predictive model comprises one or more output values; receiving input specifying a cost-benefit tradeoff; training, using the resourcing level. However, London teaches a variable characterizing a resourcing level and training, using a dataset and the resourcing level, a model (e.g. in column 14 lines 47-59, “training request may indicate a training data source 622, a model type 624, and/or one or more constraints or preferences 626 pertaining to the training of the model… the constraints/preferences 626 may indicate that adaptive sampling of the training data is to be used. In at least one embodiment the client may indicate a target resource budget or time budget as a constraint for the training”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of London because one of ordinary skill in the art would have recognized the benefit of allowing the learning process to be controlled using relevant parameters, but does not specifically teach characterizing a feature including a resourcing level of an enterprise associated with an enterprise resource planning system, the resourcing level specifying at least one condition on an output of a predictive model, wherein the output of the predictive model comprises one or more output values; receiving input specifying a cost-benefit tradeoff. However, Heckerman teaches receiving input specifying a parameter including a cost-benefit tradeoff (e.g. in column 5 line 58 – column 7 line 56 and claim 39, “a stopping criterion 68 to evaluate the tradeoff between the expected incremental cost of additional training and the expected incremental benefit of increasing the size of the considered data subset by going from subset D.sub.n to subset D.sub.n+1… models 62 trained by…training algorithm… building a refined statistical model…based at least in part on an associated training policy that includes determining acceptability based at least in part on an expected incremental benefit relative to an expected incremental cost associated with increasing the size of the aggregate data set in order to facilitate reducing cost associated with clustering data relative to the computer readable data set”; note: the stop criterion affects how the model performs, e.g. refines model, goes from a crude model to an acceptable model, etc.). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Heckerman because one of ordinary skill in the art would have recognized the benefit of building more efficient (sub)models, but does not specifically teach characterizing a feature including a resourcing level of an enterprise associated with an enterprise resource planning system, the resourcing level specifying at least one condition on an output of a predictive model, wherein the output of the predictive model comprises one or more output values. However, Morris2 teaches characterizing a feature including a resourcing level of an enterprise associated with an enterprise resource planning system (e.g. in column 1 lines 53-63, column 4 lines 34-67, column 8 lines 38-67, column 10 lines 47-60, and column 16 lines 30-35, “take into account both objective (hard) constraints (e.g., available resources, required start/end dates, risk, criticality, cost and return on investment weightings, etc.) and subjective (soft) constraints (e.g., specified named resources to work on projects, tolerances in delivering on all projects and meeting the required resourcing levels, etc.) to ensure that any outcome is fully aligned with the needs of the business… a "resource vs cost" criterion (293) for allowing a user to instruct the system to allocate resources and/or schedule projects without considering cost or resource utilization… an "individual resource utilization" criterion (295) for allowing a user to specify the percentage of resource utilization per project, and a "total resource utilization" criterion (296) for allowing a user to specify the percentage of resource utilization across all projects in a portfolio [etc.]… scenarios” and figure 1B). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Morris2 because one of ordinary skill in the art would have recognized the benefit of facilitating optimization of other well-known resources and/or producing outcomes that align with enterprise needs (note: also amounts to a simple substitution that yields predictable results; e.g. see KSR Int'l Co v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP § 2143(B); condition characterizing a resourcing level of the enterprise associated with the enterprise resource planning system as the condition), but does not specifically teach, as a whole, the feature specifying at least one condition on an output of a predictive model, wherein the output of the predictive model comprises one or more output values. However, Zhang teaches a feature specifying at least one condition on an output of a model, wherein the output of the model comprises one or more output values (e.g. in column 8 line 61 – column 9 line 16 and column 10 lines 54-59, “constraint may limit or constrain an execution of the model… limit…to a particular range of values… constrain one or more outputs of the model… model…generating output values based on the set of input values”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Zhang because one of ordinary skill in the art would have recognized the benefit of fully controlling constraints on a model. 
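The "impact" recited in claim 23 — a performance measure that is a function of the number of, and a value associated with, true positives, true negatives, false positives, and false negatives — is, in substance, a value-weighted confusion-matrix score. The following minimal sketch is hypothetical (the weights, counts, and function name are illustrative only and are not drawn from Morris or any other cited reference):

```python
# Hypothetical sketch of performance "impact" as a value-weighted function of
# confusion-matrix counts. Benefit weights are positive, cost weights negative.

def impact(counts, values):
    """Net impact = sum over outcomes of (count * per-outcome value).

    `counts` and `values` each map the outcomes "tp", "tn", "fp", "fn".
    """
    return sum(counts[k] * values[k] for k in ("tp", "tn", "fp", "fn"))

# Example: comparing two candidate models on the same validation slice.
values = {"tp": 100.0, "tn": 1.0, "fp": -20.0, "fn": -100.0}
model_a_counts = {"tp": 40, "tn": 900, "fp": 30, "fn": 30}
model_b_counts = {"tp": 55, "tn": 850, "fp": 80, "fn": 15}

a, b = impact(model_a_counts, values), impact(model_b_counts, values)
# Under the claim, the candidate with the greater impact is associated
# with the first value prior to deployment of the model.
best = "A" if a > b else "B"
```

Under these illustrative weights, model B's extra true positives outweigh its extra false positives, so B would be the candidate associated with the value; changing the per-outcome values can reverse the comparison, which is the sense in which the impact embodies a cost-benefit tradeoff.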
As per claim 24, the rejection of claim 23 is incorporated and the combination further teaches wherein the model further includes a second submodel, and wherein the first routing rule indicates that the first candidate submodel is to be used to assess input data for the first value of the variable and that the second submodel is to be used to assess input data for a second value of the variable (e.g. Merrill, in paragraph 101, “a selector determines which model to use based on the input variables according to predetermined rules”; Morris, in paragraphs 29, 74, and 125, “context data values relating to Summer months can be removed if it is Winter… a subsequent modeling testing procedure applied to operational data collected over a period of time indicates that the predictive model is no longer accurate”).

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), Heckerman et al. (US 7409371 B1), London (US 11200511 B1), and Zhang et al. (US 10719645 B1).

As per independent claim 25, Merrill teaches a method comprising: receiving input specifying a first selection of a first value of a variable of a dataset (e.g. in paragraph 27, “receives a first set of input variable values for a first data set”), the variable including a set of values associated with a predictive model of an enterprise system (e.g. in paragraphs 3 and 27, “modeling system receives [the] first set of input variable values” and “Business… ensembled models… predictive power”), the predictive model including a set of submodels, the set of submodels including a first submodel, the first value associated with the first submodel (e.g. in paragraphs 27, 86 and 101-102, “modeling system receives [the] first set of input variable values… Ensemble modeling system which is built upon one or more of submodels… ensembles of models [i.e. 
submodels]”); determining a first routing rule specifying use of the first submodel associated with the selected first value when the model receives the selected first value as input (e.g. in paragraphs 101-102, “a selector determines which model to use based on the input variables according to predetermined rules”), wherein the variable identifies a subgroup of a population within the dataset (e.g. in paragraph 100, “a subset of input variable values”); and deploying the model with the first routing rule (e.g. in paragraph 101, using modeling system “according to predetermined rules”), but does not specifically teach predictive model of an enterprise resource planning system, wherein each submodel in the set of submodels is trained with a respective cost-benefit tradeoff and a resourcing level specifying at least one condition of an output of the predictive model, wherein the output of the predictive model comprises one or more output values and the first routing rule characterizing an order of priority for the set of submodels such that the first submodel has a higher priority for the subgroup of the population than a second submodel for the subgroup of the population; assessing, after deployment of the model, performance of the first submodel and the second submodel with respect to the subgroup of the population; and modifying, after deployment of the model, the first routing rule to adjust the order of priority for the set of submodels based on the second submodel outperforming the first submodel for the subgroup of the population.

However, Morris teaches a predictive model of an enterprise resource planning system (e.g. in paragraphs 20, 27, and 35, “business-related… prediction [associated with] resource utilization, operational costs”, etc.), wherein each submodel in a set of submodels is trained (e.g. in paragraph 73, “predictive models 194 are…trained 192 using data”) and determining a first routing rule characterizing an order of priority for a set of submodels such that a first submodel has a higher priority for a subgroup of a population than a second submodel for the subgroup of the population (e.g. in paragraphs 73-74, 80, and 84-85, “indicates that the performance of the newly trained predictive model 194 is better for predicting the operational outcome of interest than the predictive model 194 currently in use [i.e. priority order]… allow the selection of the sub-model or combination of sub-models that perform best at modeling the outcome of interest… where only a subset of parameters (e.g., a subset of sensor data type) from a complete set of parameters available that provides the greatest sensitivity in predicting operational outcomes of interest may be used” and/or “combine the outputs of the sub-models using various weights…for the purposes of predicting any variety of operational outcomes of interest [i.e. priority order]”), assessing, after deployment of the model, performance of the first submodel and the second submodel with respect to the subgroup of the population and modifying, after deployment of the model, the first routing rule to adjust the order of priority for the set of submodels based on the second submodel outperforming the first submodel for the subgroup of the population (e.g. in paragraphs 74, 88-89, 103, 118, and 125, “super-model may be deployed… continuously or periodically update the sub-models and/or the super-model based on features evaluated… test/tune the sub-models and/or the weights ascribed to the sub-models for the super-model” with “indicates that the performance of…predictive model 194 is better for predicting the operational outcome of interest than the predictive model 194 currently in use” and/or “plurality of super-models 330 can be deployed to provide predictions regarding the operational outcome of interest. In various embodiments, the super-models can compete with each other over some interval of time to determine which performs the best and select that super-model 330 for use. The competition can be repeated at defined intervals to make sure the most effective super-model 330 is being used to provide the predicted output”; note: in this case, “model” is interpreted as the plurality of super-models and “submodel” is interpreted as a super-model).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Merrill to include the teachings of Morris because one of ordinary skill in the art would have recognized the benefit of using the most effective (sub)model, but does not specifically teach trained with a respective cost-benefit tradeoff and a resourcing level specifying at least one condition of an output of the predictive model, wherein the output of the predictive model comprises one or more output values.

However, Heckerman teaches a (sub)model being trained with a respective cost-benefit tradeoff (e.g. in column 5 line 58 – column 7 line 56 and claim 39, “a stopping criterion 68 to evaluate the tradeoff between the expected incremental cost of additional training and the expected incremental benefit of increasing the size of the considered data subset by going from subset D.sub.n to subset D.sub.n+1… models 62 trained by…training algorithm… building a refined statistical model…based at least in part on an associated training policy that includes determining acceptability based at least in part on an expected incremental benefit relative to an expected incremental cost associated with increasing the size of the aggregate data set in order to facilitate reducing cost associated with clustering data relative to the computer readable data set”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Heckerman because one of ordinary skill in the art would have recognized the benefit of building more efficient (sub)models, but does not specifically teach trained with a feature including a resourcing level specifying at least one condition of an output of the predictive model, wherein the output of the predictive model comprises one or more output values.

However, London teaches training with a feature including a resourcing level specifying at least one condition (e.g. in column 14 lines 47-59, “training request may indicate a training data source 622, a model type 624, and/or one or more constraints or preferences 626 pertaining to the training of the model… the constraints/preferences 626 may indicate that adaptive sampling of the training data is to be used. In at least one embodiment the client may indicate a target resource budget or time budget as a constraint for the training”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of London because one of ordinary skill in the art would have recognized the benefit of allowing the learning process to be controlled, but does not specifically teach, as a whole, wherein the feature specifying at least one condition of an output of the predictive model, wherein the output of the predictive model comprises one or more output values.

However, Zhang teaches a feature specifying at least one condition on an output of a model, wherein the output of the model comprises one or more output values (e.g. in column 8 line 61 – column 9 line 16 and column 10 lines 54-59, “constraint may limit or constrain an execution of the model… limit…to a particular range of values… constrain one or more outputs of the model… model…generating output values based on the set of input values”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Zhang because one of ordinary skill in the art would have recognized the benefit of fully controlling constraints on a model.

Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Farrar et al. (US 20050203940 A1). As per claim 26, the rejection of claim 1 is incorporated, but the combination does not specifically teach wherein the variable comprises a column of the data set. However, Farrar teaches a variable comprising a column of a data set (e.g. in paragraphs 10 and 53, “an index created on columns or groups of columns in a table may enable the page containing rows that match a certain condition imposed on the index columns to be located” corresponding to respective attributes, i.e. variables). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Farrar because one of ordinary skill in the art would have recognized the benefit of storing and/or retrieving relevant information.

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Daoud et al. (US 20070219661 A1). As per claim 27, the rejection of claim 1 is incorporated, but the combination does not specifically teach wherein the resourcing level comprises a cost to pursue. However, Daoud teaches a resourcing level comprising a cost to pursue (e.g. in paragraphs 22, 33, and 41, “a market based target cost … business enterprises budgetary and/or resource constraints, it may be too costly to pursue each…project… applying the budgetary constraints to the fixed costs”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Daoud because one of ordinary skill in the art would have recognized the benefit of incorporating relevant resourcing information (further amounting to a simple substitution that yields predictable results; e.g. see KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385, 1396 (2007) and MPEP § 2143(B)).

Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Merrill et al. (US 20180322406 A1) in view of Morris et al. (US 20160350671 A1), London (US 11200511 B1), Morris et al. (US 7991632 B1, hereinafter, “Morris2”), and Zhang et al. (US 10719645 B1) as applied above, and further in view of Price et al. (US 10339486 B1). As per claim 28, the rejection of claim 1 is incorporated, but the combination does not specifically teach wherein the resourcing level comprises a lead pursuit capacity. However, Price teaches a resourcing level comprising a lead pursuit capacity (e.g. in column 8 line 65 – column 9 line 7, “The total leads utilized may refer to the maximum leads that an agency can pursue, given producer capacity constraints. Because the total leads generated 362 exceeds the total producer capacity 363, and because the agency cannot utilize leads beyond its capacity, the total leads utilized 364 in the provided example is equal to the total producer capacity”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Price because one of ordinary skill in the art would have recognized the benefit of incorporating relevant resourcing information (further amounting to a simple substitution that yields predictable results; e.g. see KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385, 1396 (2007) and MPEP § 2143(B)).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For example, An et al. (US 20100088138 A1) teaches “The resource action plan needs to respect multiple operational constraints and business rules, such as, the training cost and lead time, the minimal residence time that a resource-unit needs to spend in any skill type upon being cross-trained or hired into, minimal acceptable duration for any contracted skill type, and the stochastic nature associated with any potential opportunity that has not resulted in a signed contract… long-term resource action planning model also may consider multiple objectives such as gap minimization, glut minimization, cost minimization with demand fulfillment constraints, and weighted combinations of the above” (e.g. in paragraphs 14 and 17). Dirac et al. (US 20150379427 A1) teaches “feature processing cost-benefit tradeoffs may be used for a variety of model types, including for example classification models, regression models, clustering models, natural language processing models and the like, and for a variety of problem domains in different embodiments” (e.g. in paragraph 208).
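The incremental cost-benefit training tradeoff quoted from Heckerman earlier in this action (a stopping criterion that grows the considered training subset from D.sub.n to D.sub.n+1 only while the expected incremental benefit exceeds the expected incremental cost) can be sketched as follows. The function name, the toy diminishing-returns score, and the per-row cost figure are hypothetical illustrations, not taken from the reference:

```python
# Minimal sketch of a cost-benefit stopping criterion for training-subset
# growth: enlarge the subset only while the expected incremental benefit
# (score gain) exceeds the expected incremental cost of the added data.
import math


def train_with_tradeoff(subset_sizes, score_fn, cost_per_row):
    """Grow the training subset D_n -> D_{n+1} until the expected
    incremental benefit no longer justifies the incremental cost."""
    chosen = subset_sizes[0]
    for prev, nxt in zip(subset_sizes, subset_sizes[1:]):
        benefit = score_fn(nxt) - score_fn(prev)  # expected incremental benefit
        cost = (nxt - prev) * cost_per_row        # expected incremental cost
        if benefit <= cost:                       # stopping criterion met
            break
        chosen = nxt
    return chosen


# Toy diminishing-returns validation score: each doubling of data helps less.
score = lambda n: 1 - 1 / math.log2(n + 2)

best_n = train_with_tradeoff([100, 200, 400, 800, 1600], score, 2e-5)
print(best_n)  # 800: the 800 -> 1600 step's gain no longer covers its cost
```

With these toy numbers, each doubling yields a smaller score gain while its cost doubles, so the criterion stops before the final subset size; that is the efficiency rationale the rejection attributes to Heckerman.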
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM WONG, whose telephone number is (571) 270-1399. The examiner can normally be reached Monday-Friday, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TAMARA KYLE, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/W.W/
Examiner, Art Unit 2144
02/28/2026

/TAMARA T KYLE/
Supervisory Patent Examiner, Art Unit 2144

Prosecution Timeline

Sep 19, 2019
Application Filed
Oct 07, 2019
Response after Non-Final Action
Oct 31, 2019
Response after Non-Final Action
May 08, 2021
Non-Final Rejection — §103, §112
Aug 10, 2021
Applicant Interview (Telephonic)
Aug 14, 2021
Examiner Interview Summary
Sep 14, 2021
Response Filed
Jan 05, 2022
Final Rejection — §103, §112
Jun 07, 2022
Applicant Interview (Telephonic)
Jun 08, 2022
Examiner Interview Summary
Jun 10, 2022
Response after Non-Final Action
Jun 30, 2022
Request for Continued Examination
Jul 07, 2022
Response after Non-Final Action
Jul 30, 2022
Non-Final Rejection — §103, §112
Jan 13, 2023
Applicant Interview (Telephonic)
Jan 14, 2023
Examiner Interview Summary
Feb 10, 2023
Response Filed
Mar 19, 2023
Final Rejection — §103, §112
Sep 25, 2023
Applicant Interview (Telephonic)
Sep 25, 2023
Examiner Interview Summary
Sep 27, 2023
Request for Continued Examination
Sep 28, 2023
Response after Non-Final Action
Dec 13, 2023
Non-Final Rejection — §103, §112
Jun 04, 2024
Applicant Interview (Telephonic)
Jun 05, 2024
Examiner Interview Summary
Jun 13, 2024
Response Filed
Oct 08, 2024
Final Rejection — §103, §112
Apr 14, 2025
Request for Continued Examination
Apr 14, 2025
Applicant Interview (Telephonic)
Apr 15, 2025
Examiner Interview Summary
Apr 20, 2025
Response after Non-Final Action
May 03, 2025
Non-Final Rejection — §103, §112
Nov 03, 2025
Applicant Interview (Telephonic)
Nov 04, 2025
Examiner Interview Summary
Nov 10, 2025
Response Filed
Feb 28, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572252
CONTROLLING A 2D SCREEN INTERFACE APPLICATION IN A MIXED REALITY APPLICATION
2y 5m to grant Granted Mar 10, 2026
Patent 12530707
CUSTOMER EFFORT EVALUATION IN A CONTACT CENTER SYSTEM
2y 5m to grant Granted Jan 20, 2026
Patent 12511846
XR DEVICE-BASED TOOL FOR CROSS-PLATFORM CONTENT CREATION AND DISPLAY
2y 5m to grant Granted Dec 30, 2025
Patent 12504944
METHODS AND USER INTERFACES FOR SHARING AUDIO
2y 5m to grant Granted Dec 23, 2025
Patent 12423561
METHOD AND APPARATUS FOR KEEPING STATISTICAL INFERENCE ACCURACY WITH 8-BIT WINOGRAD CONVOLUTION
2y 5m to grant Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
30%
Grant Probability
57%
With Interview (+26.9%)
4y 11m
Median Time to Grant
High
PTA Risk
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
