Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This is the initial Office action based on the application filed on June 30, 2023, in which claims 1-25 are presented for examination.

Status of Claims

3. Claims 1-25 are pending, of which claims 1, 14, 17, 21 and 25 are in independent form.

Priority

4. No priority has been considered for this application.

Information Disclosure Statement

5. The information disclosure statements filed on 06/30/2023 and 07/05/2023 have been reviewed and considered by the Examiner.

The Office's Note:

6. The Office has cited particular paragraphs/columns and line numbers in the reference(s) applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim(s), other passages and figures may apply as well. In preparing responses, the Applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages as taught by the prior art or relied upon by the Examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

7. Claims 1-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claims 1-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
7A. Claims 1, 14 and 17 recite "determining a set of benchmarks for a course of action defined for an entity; obtaining data from one or more exogenous sources relating to environmental conditions; evaluating, using at least one artificial intelligence agent executing on at least one computing device of the computing environment, one or more benchmarks of the set of benchmarks based, at least, on the data obtained from the one or more exogenous sources; optimizing, using the at least one artificial intelligence agent, the set of benchmarks, based on the evaluating the one or more benchmarks, to obtain a revised set of benchmarks; outputting the revised set of benchmarks to be used to evaluate one or more components of the course of action defined for the entity; and repeating the obtaining, the evaluating, the optimizing and the outputting at a plurality of selected times to optimize the revised set of benchmarks, wherein the set of benchmarks for the repeating is the revised set of benchmarks, and wherein the revised set of benchmarks dynamically change over the plurality of selected times as the course of action evolves over time." These limitations, as drafted and under their broadest reasonable interpretation, recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the "Mental Processes" grouping of abstract ideas under Prong 1. Under Prong 2, this judicial exception is not integrated into a practical application.
The additional elements "memory" and "processor" are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer and/or mere computer components, and "outputting the revised set of benchmarks to be used to evaluate one or more components of the course of action defined for the entity" does nothing more than add insignificant extra-solution activity to the judicial exception of merely gathering, displaying, updating, transmitting and storing data/information. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claims are therefore directed to the judicial exception. See MPEP 2106.05(g). Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "memory" and "processor" are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer and/or mere computer components, and as to "outputting the revised set of benchmarks to be used to evaluate one or more components of the course of action defined for the entity," the courts have identified that merely gathering, displaying, updating, transmitting and storing data/information on a display is well-understood, routine and conventional activity. See MPEP 2106.05(d). The recitation of generic computer instructions and computer components to apply the judicial exception, and the mere display of data, do not amount to significantly more and thus cannot provide an inventive concept. Accordingly, the claims are not patent eligible under 35 U.S.C. 101. In conclusion, claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
7B. Claims 21 and 25 recite "determining a set of benchmarks for the course of action defined for the entity; obtaining data to be used in evaluating the set of benchmarks; evaluating, using at least one artificial intelligence agent executing on at least one computing device of the computing environment, one or more benchmarks of the set of benchmarks based, at least, on the data obtained from the one or more exogenous sources; optimizing, using the at least one artificial intelligence agent, the set of benchmarks, based on the evaluating the one or more benchmarks, to obtain a revised set of benchmarks; outputting the revised set of benchmarks to be used to evaluate one or more components of the course of action defined for the entity; and repeating the obtaining, the evaluating, the optimizing and the outputting at a plurality of selected times to optimize the revised set of benchmarks, wherein the set of benchmarks for the repeating is the revised set of benchmarks, and wherein the revised set of benchmarks dynamically change over the plurality of selected times as the course of action evolves over time." These limitations, as drafted and under their broadest reasonable interpretation, recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the "Mental Processes" grouping of abstract ideas under Prong 1. Under Prong 2, this judicial exception is not integrated into a practical application.
The additional elements "memory" and "processor" are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer and/or mere computer components, and "outputting the revised set of benchmarks to be used to evaluate one or more components of the course of action defined for the entity" does nothing more than add insignificant extra-solution activity to the judicial exception of merely gathering, displaying, updating, transmitting and storing data/information. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claims are therefore directed to the judicial exception. See MPEP 2106.05(g). Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "memory" and "processor" are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer and/or mere computer components, and as to "outputting the revised set of benchmarks to be used to evaluate one or more components of the course of action defined for the entity," the courts have identified that merely gathering, displaying, updating, transmitting and storing data/information on a display is well-understood, routine and conventional activity. See MPEP 2106.05(d). The recitation of generic computer instructions and computer components to apply the judicial exception, and the mere display of data, do not amount to significantly more and thus cannot provide an inventive concept. Accordingly, the claims are not patent eligible under 35 U.S.C. 101. In conclusion, claims 21-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claims 1-25 are rejected under 35 U.S.C. 103 as being unpatentable over Farooq (US 20200160237; hereinafter Farooq; IDS of record) in view of de Oliveira (US 20110040440; hereinafter de Oliveira).

Claim 1 is rejected. Farooq teaches a computer-implemented method of dynamic processing within a computing environment, the computer-implemented method comprising (Farooq, abstract and summary): determining a set of benchmarks for a course of action defined for an entity (Farooq, US 20200160237, fig. 9 and para. [0135-0138], An initial benchmark value for the resource is determined in step 906. Fig. 1 and para. [0026-0028], The business models also identify customer segments and partner services. Further, the business models can reference existing domain models or templates and their associated industry performance benchmarks and business capability levels. In the benchmark stage 16, similar business domain models having related structure and historical data are used for analyzing and benchmarking to future transformation goals.); obtaining data from one or more exogenous sources relating to environmental conditions (Farooq, fig.
9 and para. [0135-0138], In step 900, a first set of data representing a business strategy, a business goal and a constraint is input into a computer. In step 902, a second set of data representing relationships between the input business strategy, business goal and constraint is input into the computer. In step 904, a third set of data is input into the computer to define a business model that includes at least one resource. Fig. 1 and para. [0026-0028], In the strategize stage 12, balanced scorecard techniques and best practices are used to identify the business strategies and their associated business drivers, goals, objectives, and key performance indicators (KPIs).); evaluating, using at least one artificial intelligence agent executing on at least one computing device of the computing environment, one or more benchmarks of the set of benchmarks based, at least, on the data obtained from the one or more exogenous sources (Farooq, fig. 9 and para. [0135-0138], In step 908, a model optimization engine resident in the computer is processed based on the defined business model, the input business strategy, business goal and constraint to generate an output benchmark value.); optimizing, using the at least one artificial intelligence agent, the set of benchmarks, based on the evaluating the one or more benchmarks, to obtain a revised set of benchmarks (Farooq, fig. 9 and para. [0135-0138], In step 908, a model optimization engine resident in the computer is processed based on the defined business model, the input business strategy, business goal and constraint to generate an output benchmark value. In step 910, a difference between the benchmark values is determined (e.g., the difference between the initial/updated benchmark value and the output benchmark value). In step 912, the difference is compared to a threshold difference. If the difference is not less than the threshold difference, the flowchart loops back to step 908.
If the difference is less than the threshold difference, the benchmark value is updated based on the output benchmark value, and the defined business model is updated in step 914.); outputting the revised set of benchmarks to be used to evaluate one or more components of the course of action defined for the entity (Farooq, fig. 9 and para. [0135-0138], In step 908, a model optimization engine resident in the computer is processed based on the defined business model, the input business strategy, business goal and constraint to generate an output benchmark value. In step 910, a difference between the benchmark values is determined (e.g., the difference between the initial/updated benchmark value and the output benchmark value). In step 912, the difference is compared to a threshold difference. If the difference is not less than the threshold difference, the flowchart loops back to step 908. If the difference is less than the threshold difference, the benchmark value is updated based on the output benchmark value, and the defined business model is updated in step 914.); and repeating the obtaining, the evaluating, the optimizing and the outputting at a plurality of selected times to optimize the revised set of benchmarks, wherein the set of benchmarks for the repeating is the revised set of benchmarks, and wherein the revised set of benchmarks dynamically change over the plurality of selected times as the course of action evolves over time (Farooq, fig. 9 and para. [0135-0138]. Fig. 1 and para. [0026-0027], The strategize, model and benchmark stages 12, 14, 16 define a development loop 30 that can be implemented to develop a new business model or models, and/or to modify an existing business model or models prior to operationalizing the business model(s). More specifically, several iterations of the development loop 30 can be executed in order to develop a business transformation roadmap, calibrate performance benchmarks, and/or update performance targets. Para. [0028].
In the analyze stage 24, the actual operational data becomes part of the historical data that is used for the optimization techniques to recalibrate the original goals and targets. In the transform stage 26, the transformations required to optimize the execution towards business goals are identified. Change management is performed to execute those changes and monitor against the defined performance goals.). The Office relies on de Oliveira to supplement Farooq in further teaching the limitation of optimizing, using the at least one artificial intelligence agent, the set of benchmarks, based on the evaluating the one or more benchmarks, to obtain a revised set of benchmarks (de Oliveira, para. [0133-0137], Thus, a system is provided where monitored industrial vehicles populate local databases with industrial vehicle information. That information can then be utilized to populate a single source, e.g., the industrial vehicle manufacturer server 30, that collects and aggregates the industrial vehicle information from across multiple enterprises. That aggregated information can then be used to derive benchmarks that are fed back to the enterprises and local systems to monitor local fleets. This may result in changes to fleet performance, which folds back to the local vehicle application server 14, which in turn feeds back to the manufacturer server 30. Due to the feedback and fold-back mechanisms, dynamic adjustments of benchmarks can be realized in a manner that tracks or otherwise converges towards practical performance measures.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate de Oliveira into Farooq to automate prioritization of timely service to a fleet of monitored industrial vehicles.
Since the dashboard information is communicated to a user interface based upon the state of summary and intermediate status indicators, the efficiency of automating prioritization of timely service to a fleet of monitored industrial vehicles is enhanced, as suggested by de Oliveira (see abstract and summary).

Claim 2 is rejected for the reasons set forth hereinabove for claim 1. Farooq and de Oliveira teach the computer-implemented method of claim 1, wherein the at least one artificial intelligence agent is trained for the course of action defined for the entity (Farooq, para. [0121-0125], The operations performance management module 308 enables the user to input operations performance data, and enables the user to link the operations performance data with other data that is input using the various modules and sub-modules described herein. The operations performance management module 308 has a plurality of sub-modules that can include, but are not limited to, a metrics sub-module 348, a benchmark inventory sub-module 350, a decision inventory sub-module 352, and a functional performance category sub-module 354. The metrics sub-module 348 provides a repository of metrics that can measure financial, non-financial, quantifiable, or non-quantifiable aspects of business performance. The benchmark inventory sub-module 350 provides industry standards of values assigned to metrics that measure business performance. The decision inventory sub-module 352 provides an inventory of intelligent decisions that have been successful in solving problems (e.g., a handbook of quick problem solving). The functional performance category sub-module 354 provides categories of performance defined as levels (e.g., level 1 to level 5 in increasing order), in which a particular function's performance falls.).
Claim 3 is rejected for the reasons set forth hereinabove for claim 2. Farooq and de Oliveira teach the computer-implemented method of claim 2, further comprising retraining the at least one artificial intelligence agent based on feedback relating to the set of benchmarks (Farooq, fig. 9 and para. [0135-0138]. Fig. 1 and para. [0026-0027], The strategize, model and benchmark stages 12, 14, 16 define a development loop 30 that can be implemented to develop a new business model or models, and/or to modify an existing business model or models prior to operationalizing the business model(s). More specifically, several iterations of the development loop 30 can be executed in order to develop a business transformation roadmap, calibrate performance benchmarks, and/or update performance targets.).

Claim 4 is rejected for the reasons set forth hereinabove for claim 1. Farooq and de Oliveira teach the computer-implemented method of claim 1, wherein a benchmark is used to measure effectiveness of a component of the one or more components of the course of action, the component including one or more tasks to be performed to implement the course of action defined for the entity (Farooq, para. [0040-0045], One way of linking strategic management and operational execution for effective decision making is through the delivery of relevant performance benchmarks. Using the development loop 30, performance benchmarks can be simulated on strategies by simulating against the developed business model(s).).

Claim 5 is rejected for the reasons set forth hereinabove for claim 1. Farooq and de Oliveira teach the computer-implemented method of claim 1, wherein the obtaining data from the one or more exogenous sources comprises retrieving, via one or more communication networks, data relating to one or more current exogenous conditions (Farooq, fig. 9 and para. [0135-0138], Referring now to FIG.
9, exemplar steps that can be executed by another implementation of the integrated framework of the present disclosure will be described. In the implementation of FIG. 9, the flowchart illustrates exemplar steps for developing an optimized business model for an enterprise. In step 900, a first set of data representing a business strategy, a business goal and a constraint is input into a computer. In step 902, a second set of data representing relationships between the input business strategy, business goal and constraint is input into the computer. In step 904, a third set of data is input into the computer to define a business model that includes at least one resource.).

Claim 6 is rejected for the reasons set forth hereinabove for claim 1. Farooq and de Oliveira teach the computer-implemented method of claim 1, wherein the optimizing the set of benchmarks comprises adding a new benchmark to the set of benchmarks (de Oliveira, para. [0133-0137], Thus, a system is provided where monitored industrial vehicles populate local databases with industrial vehicle information. That information can then be utilized to populate a single source, e.g., the industrial vehicle manufacturer server 30, that collects and aggregates the industrial vehicle information from across multiple enterprises. That aggregated information can then be used to derive benchmarks that are fed back to the enterprises and local systems to monitor local fleets. This may result in changes to fleet performance, which folds back to the local vehicle application server 14, which in turn feeds back to the manufacturer server 30. Due to the feedback and fold-back mechanisms, dynamic adjustments of benchmarks can be realized in a manner that tracks or otherwise converges towards practical performance measures.).
Claim 7 is rejected for the reasons set forth hereinabove for claim 1. Farooq and de Oliveira teach the computer-implemented method of claim 1, wherein the optimizing the set of benchmarks comprises deleting a benchmark from the set of benchmarks (de Oliveira, para. [0133-0137], Thus, a system is provided where monitored industrial vehicles populate local databases with industrial vehicle information. That information can then be utilized to populate a single source, e.g., the industrial vehicle manufacturer server 30, that collects and aggregates the industrial vehicle information from across multiple enterprises. That aggregated information can then be used to derive benchmarks that are fed back to the enterprises and local systems to monitor local fleets. This may result in changes to fleet performance, which folds back to the local vehicle application server 14, which in turn feeds back to the manufacturer server 30. Due to the feedback and fold-back mechanisms, dynamic adjustments of benchmarks can be realized in a manner that tracks or otherwise converges towards practical performance measures.).

Claim 8 is rejected for the reasons set forth hereinabove for claim 1. Farooq and de Oliveira teach the computer-implemented method of claim 1, wherein the optimizing the set of benchmarks comprises modifying a benchmark of the set of benchmarks (de Oliveira, para. [0133-0137], Thus, a system is provided where monitored industrial vehicles populate local databases with industrial vehicle information. That information can then be utilized to populate a single source, e.g., the industrial vehicle manufacturer server 30, that collects and aggregates the industrial vehicle information from across multiple enterprises. That aggregated information can then be used to derive benchmarks that are fed back to the enterprises and local systems to monitor local fleets.
This may result in changes to fleet performance, which folds back to the local vehicle application server 14, which in turn feeds back to the manufacturer server 30. Due to the feedback and fold-back mechanisms, dynamic adjustments of benchmarks can be realized in a manner that tracks or otherwise converges towards practical performance measures.).

Claim 9 is rejected for the reasons set forth hereinabove for claim 1. Farooq and de Oliveira teach the computer-implemented method of claim 1, wherein the obtaining the data, the evaluating the one or more benchmarks, and the optimizing the set of benchmarks are executed as part of an intelligent workflow generated to implement the course of action defined for the entity (Farooq, para. [0085-0090], Once the strategic operations and initiatives have been put into place based on the operations model(s), the enterprise operations are executed. That is to say that the enterprise operations are put into action in support of the enterprise's mission and vision, and to meet the goals and objectives set by the enterprise. Upon executing the enterprise operations in accordance with the developed business model(s), the integrated framework of the present disclosure enables the performance of the enterprise to be monitored and governed. Monitoring can occur using the defined SLAs as its foundation. Governance deals with optimization and forecasting using the performance data that is collected during monitoring. Furthermore, although governance deals with change management, governance also concerns how to make changes to the system in an appropriate workflow or governing process once the changes are identified. Para. [0130-0132], workflow.).
Claim 10 is rejected for the reasons set forth hereinabove for claim 1. Farooq and de Oliveira teach the computer-implemented method of claim 1, further comprising: performing analysis, using the at least one artificial intelligence agent, to identify deficiencies in one or more processes of the entity (Farooq, para. [0121-0125], The operations performance management module 308 enables the user to input operations performance data, and enables the user to link the operations performance data with other data that is input using the various modules and sub-modules described herein. The operations performance management module 308 has a plurality of sub-modules that can include, but are not limited to, a metrics sub-module 348, a benchmark inventory sub-module 350, a decision inventory sub-module 352, and a functional performance category sub-module 354. The metrics sub-module 348 provides a repository of metrics that can measure financial, non-financial, quantifiable, or non-quantifiable aspects of business performance. The benchmark inventory sub-module 350 provides industry standards of values assigned to metrics that measure business performance. The decision inventory sub-module 352 provides an inventory of intelligent decisions that have been successful in solving problems (e.g., a handbook of quick problem solving). The functional performance category sub-module 354 provides categories of performance defined as levels (e.g., level 1 to level 5 in increasing order), in which a particular function's performance falls.); and providing, using the at least one artificial intelligence agent, one or more modifications to the one or more processes to improve use of resources of the entity (Farooq, fig. 5 and para. [0130-0132], augmenting a resource's capacity).
Claim 11 is rejected for the reasons set forth hereinabove for claim 10. Farooq and de Oliveira teach the computer-implemented method of claim 10, wherein the resources of the entity include computer resources of the entity, and wherein the one or more modifications improve processing speed within one or more computing devices of the computing environment (Farooq, fig. 5 and para. [0132-0135], The bottleneck framework sub-module 542 provides a framework that tracks, sends alerts, and reports the resources that are either a current bottleneck or are likely to become a bottleneck in the near future. The corrective actions sub-module 544 executes actions that can correct a situation that needs correction (e.g., augmenting a resource's capacity before it becomes a bottleneck, hiring a few more labor people to speed up the work). The cost/schedule alignment sub-module 546 executes corrective actions to align expenditures and their schedule, and the strategy/operations alignment sub-module 548 executes corrective actions that align long term goals/strategies and short term goals/strategies. The functions organization alignment sub-module 550 executes corrective actions that align an organization's higher level goals and routine functions. The business resource alignment sub-module 552 executes corrective actions that align business need and resources, and the risk management sub-module 554 provides a layer above a bottleneck that manages risks identified by the bottleneck framework, and that takes appropriate corrective action(s).).
Claim 12 is rejected for the reasons set forth hereinabove for claim 10. Farooq and de Oliveira teach the computer-implemented method of claim 10, wherein the obtaining the data, the evaluating the one or more benchmarks, the optimizing the set of benchmarks, the performing analysis and the providing the one or more modifications are executed as part of an intelligent workflow generated to implement the course of action defined for the entity (Farooq, para. [0085-0090], Once the strategic operations and initiatives have been put into place based on the operations model(s), the enterprise operations are executed. That is to say that the enterprise operations are put into action in support of the enterprise's mission and vision, and to meet the goals and objectives set by the enterprise. Upon executing the enterprise operations in accordance with the developed business model(s), the integrated framework of the present disclosure enables the performance of the enterprise to be monitored and governed. Monitoring can occur using the defined SLAs as its foundation. Governance deals with optimization and forecasting using the performance data that is collected during monitoring. Furthermore, although governance deals with change management, governance also concerns how to make changes to the system in an appropriate workflow or governing process once the changes are identified. Para. [0130-0132], The performance change management sub-module 530 executes management of performance expectations and targets. Management can include, but is not limited to, changing or setting levels to be achieved for each and every KPI, and metrics that measure some aspect of business performance. The change workflow sub-module 532 executes management of flow of work, or streamlining to remove bottlenecks, routing, and re-routing. The change rules sub-module 534 adjusts the management of internal rules and regulations that govern day-to-day operations and long term strategies.
The change policies sub-module 536 executes the management of external rules and regulations imposed on the business (i.e., policies). The policies can be changed after a recommendation, which based on a rigorous analysis, has been accepted.).

Claim 13 is rejected for the reasons set forth hereinabove for claim 1. Farooq and de Oliveira teach the computer-implemented method of claim 1, further comprising obtaining historical data relating to the course of action defined for the entity, and wherein the evaluating uses the historical data and the data obtained from the one or more exogenous sources (Farooq, fig. 1 and para. [0026-0030], The integrated framework 10 provides a plurality of stages including a strategize stage 12, a model stage 14, a benchmark stage 16, an operationalize stage 18, an execute stage 20, a monitor stage 22, an analyze stage 24, and a transform stage 26. In the strategize stage 12, balanced scorecard techniques and best practices are used to identify the business strategies and their associated business drivers, goals, objectives, and key performance indicators (KPIs). In the model stage 14, a new business modeling technique with templates is used to identify the as-is and to-be business models associated with the identified strategies. Business models identify the enterprise operations, which include business components, business component groups, service groups, business services, organization, business processes, and assets. The business models also identify customer segments and partner services. Further, the business models can reference existing domain models or templates and their associated industry performance benchmarks and business capability levels. In the benchmark stage 16, similar business domain models having related structure and historical data are used for analyzing and benchmarking to future transformation goals.
As discussed in further detail below, an optimization technique can be applied to particular business models given various input constraints such as budget, timeline, and resources. This optimization can graphically depict the outcome of applying the input constraints on a particular business model. These results are used in a closed-loop fashion to let management effectively choose a transformation or business model with their target capability levels, prior to operationalizing the business model.). As per claim 14, this is the system claim to method claim 1. Therefore, it is rejected for the same reasons as above. As per claim 15, this is the system claim to method claim 2. Therefore, it is rejected for the same reasons as above. As per claim 16, this is the system claim to method claim 10. Therefore, it is rejected for the same reasons as above. As per claim 17, this is the medium claim to method claim 1. Therefore, it is rejected for the same reasons as above. As per claim 18, this is the medium claim to method claim 12. Therefore, it is rejected for the same reasons as above. As per claim 19, this is the medium claim to method claim 10. Therefore, it is rejected for the same reasons as above. As per claim 20, this is the medium claim to method claim 13. Therefore, it is rejected for the same reasons as above. Claim 21 is rejected. Farooq teaches a computer-implemented method of dynamic processing within a computing environment, the computer-implemented method comprising (Farooq, abstract and summary): executing an intelligent workflow to implement a course of action defined for an entity, wherein the executing comprises (Farooq, para [0085-0090], Once the strategic operations and initiatives have been put into place based on the operations model(s), the enterprise operations are executed.
That is to say that the enterprise operations are put into action in support of the enterprise's mission and vision, and to meet the goals and objectives set by the enterprise. Upon executing the enterprise operations in accordance with the developed business model(s), the integrated framework of the present disclosure enables the performance of the enterprise to be monitored and governed. Monitoring can occur using the defined SLAs as its foundation. Governance deals with optimization and forecasting using the performance data that is collected during monitoring. Furthermore, although governance deals with change management, governance also concerns how to make changes to the system in an appropriate workflow or governing process once the changes are identified. Para [0130-0132], The performance change management sub-module 530 executes management of performance expectations and targets. Management can include, but is not limited to, changing or setting levels to be achieved for each and every KPI, and metrics that measure some aspect of business performance. The change workflow sub-module 532 executes management of flow of work, or streamlining to remove bottlenecks, routing, and re-routing. The change rules sub-module 534 adjusts the management of internal rules and regulations that govern day-to-day operations and long term strategies. The change policies sub-module 536 executes the management of external rules and regulations imposed on the business (i.e., policies). The policies can be changed after a recommendation, which based on a rigorous analysis, has been accepted.): determining a set of benchmarks for the course of action defined for the entity (Farooq, US 20200160237, fig. 9 and para [0135-0138], An initial benchmark value for the resource is determined in step 906. Fig. 1 and para [0026-0028], The business models also identify customer segments and partner services.
Further, the business models can reference existing, domain models or templates and their associated industry performance benchmarks and business capability levels. In the benchmark stage 16, similar business domain models having related structure and historical data are used for analyzing and benchmarking to future transformation goals.); obtaining data to be used in evaluating the set of benchmarks (Farooq, fig. 9 and para [0135-0138], In step 900, a first set of data representing a business strategy, a business goal and a constraint is input into a computer. In step 902, a second set of data representing relationships between the input business strategy, business goal and constraint is input into the computer. In step 904, a third set of data is input into the computer to define a business model that includes at least one resource. Fig. 1 and para [0026-0028], In the strategize stage 12, balanced scorecard techniques and best practices are used to identify the business strategies and their associated business drivers, goals, objectives, and key performance indicators (KPIs).); evaluating, using at least one artificial intelligence agent executing on at least one computing device of the computing environment, one or more benchmarks of the set of benchmarks based, at least, on the data obtained from the one or more exogenous sources (Farooq, fig. 9 and para [0135-0138], In step 908, a model optimization engine resident in the computer is processed based on the defined business model, the input business strategy, business goal and constraint to generate an output benchmark value.); optimizing, using the at least one artificial intelligence agent, the set of benchmarks, based on the evaluating the one or more benchmarks, to obtain a revised set of benchmarks (Farooq, fig. 9 and para [0135-0138],
In step 908, a model optimization engine resident in the computer is processed based on the defined business model, the input business strategy, business goal and constraint to generate an output benchmark value. In step 910, a difference between the benchmark values is determined (e.g., the difference between the initial/updated benchmark value and the output benchmark value). In step 912, the difference is compared to a threshold difference. If the difference is not less than the threshold difference, the flowchart loops back to step 908. If the difference is less than the threshold difference, the benchmark value is updated based on the output benchmark value, and the defined business model is updated in step 914.); outputting the revised set of benchmarks to be used to evaluate one or more components of the course of action defined for the entity (Farooq, fig. 9 and para [0135-0138], In step 908, a model optimization engine resident in the computer is processed based on the defined business model, the input business strategy, business goal and constraint to generate an output benchmark value. In step 910, a difference between the benchmark values is determined (e.g., the difference between the initial/updated benchmark value and the output benchmark value). In step 912, the difference is compared to a threshold difference. If the difference is not less than the threshold difference, the flowchart loops back to step 908. If the difference is less than the threshold difference, the benchmark value is updated based on the output benchmark value, and the defined business model is updated in step 914.); and repeating the obtaining, the evaluating, the optimizing and the outputting at a plurality of selected times to optimize the revised set of benchmarks, wherein the set of benchmarks for the repeating is the revised set of benchmarks, and wherein the revised set of benchmarks dynamically change over the plurality of selected times as the course of action evolves over time (Farooq, fig.
9 and para [0135-0138]. Fig. 1 and para [0026-0027], The strategize, model and benchmark stages 12, 14, 16 define a development loop 30 that can be implemented to develop a new business model or models, and/or to modify an existing business model or models prior to operationalizing the business model(s). More specifically, several iterations of the development loop 30 can be executed in order to develop a business transformation roadmap, calibrate performance benchmarks, and/or update performance targets. Para [0028], In the analyze stage 24, the actual operational data becomes part of the historical data that is used for the optimization techniques to recalibrate the original goals and targets. In the transform stage 26, the transformations required to optimize the execution towards business goals are identified. Change management is performed to execute those changes and monitor against the defined performance goals.). Optimizing, using the at least one artificial intelligence agent, the set of benchmarks, based on the evaluating the one or more benchmarks, to obtain a revised set of benchmarks (de Oliveira, para [0133-0137], Thus, a system is provided where monitored industrial vehicles populate local databases with industrial vehicle information. That information can then be utilized to populate a single source, e.g., the industrial vehicle manufacturer server 30, that collects and aggregates the industrial vehicle information from across multiple enterprises. That aggregated information can then be used to derive benchmarks that are fed back to the enterprises and local systems to monitor local fleets. This may result in changes to fleet performance, which folds back to the local vehicle application server 14, which in turn feeds back to the manufacturer server 30. Due to the feedback and fold-back mechanisms, dynamic adjustments of benchmarks can be realized in a manner that tracks or otherwise converges towards practical performance measures.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate de Oliveira into Farooq to automate prioritization of timely service to a fleet of monitored industrial vehicles. Since the dashboard information is communicated to a user interface based upon the state of summary and intermediate status indicators, the efficiency of automating prioritization of timely service to a fleet of monitored industrial vehicles is enhanced, as suggested by de Oliveira (see abstract and summary). Claim 22 is rejected for the reasons set forth hereinabove for claim 21. Farooq and de Oliveira teach the computer-implemented method of claim 21, wherein the executing further comprises: performing analysis, using the at least one artificial intelligence agent, to identify deficiencies in one or more processes of the entity (Farooq, para [0121-0125], The operations performance management module 308 enables the user to input operations performance data, and enables the user to link the operations performance data with other data that is input using the various modules, and sub-modules described herein. The operations performance management module 308 has a plurality of sub-modules that can include, but are not limited to, a metrics sub-module 348, a benchmark inventory sub-module 350, a decision inventory sub-module 352, and a functional performance category sub-module 354. The metrics sub-module 348 provides a repository of metrics that can measure financial, non-financial, quantifiable, or non-quantifiable aspects of business performance. The benchmark inventory sub-module 350 provides industry standards of values assigned to metrics that measure business performance.
The decision inventory sub-module 352 provides an inventory of intelligent decisions that have been successful in solving problems (e.g., a handbook of quick problem solving). The functional performance category sub-module 354 provides categories of performance defined as levels (e.g., level 1 to level 5 as increasing order), in which a particular function's performance falls.); and providing, using the at least one artificial intelligence agent, one or more modifications to the one or more processes to improve use of resources of the entity (Farooq, fig. 5 and para [0130-0132], augmenting resource capacity). Claim 23 is rejected for the reasons set forth hereinabove for claim 22. Farooq and de Oliveira teach the computer-implemented method of claim 22, wherein the executing further comprises: obtaining feedback relating to implementation of the course of action (Farooq, para [0085-0091]. Farooq, para [0040-0045], One way of linking strategic management and operational execution for effective decision making is through the delivery of relevant performance benchmarks. Using the development loop 30, performance benchmarks can be simulated on strategies by simulating against the developed business model(s). Para [0099 and 0109], feedback.); and performing one or more actions based on the feedback to revise the intelligent workflow (Farooq, para [0040-0045], One way of linking strategic management and operational execution for effective decision making is through the delivery of relevant performance benchmarks. Using the development loop 30, performance benchmarks can be simulated on strategies by simulating against the developed business model(s). Para [0099 and 0109], feedback.).
Claim 24 is rejected for the reasons set forth hereinabove for claim 23. Farooq and de Oliveira teach the computer-implemented method of claim 23, wherein the performing the one or more actions comprises optimizing the set of benchmarks (Farooq, para [0085-0090], Once the strategic operations and initiatives have been put into place based on the operations model(s), the enterprise operations are executed. That is to say that the enterprise operations are put into action in support of the enterprise's mission and vision, and to meet the goals and objectives set by the enterprise. Upon executing the enterprise operations in accordance with the developed business model(s), the integrated framework of the present disclosure enables the performance of the enterprise to be monitored and governed. Monitoring can occur using the defined SLAs as its foundation. Governance deals with optimization and forecasting using the performance data that is collected during monitoring. Furthermore, although governance deals with change management, governance also concerns how to make changes to the system in an appropriate workflow or governing process once the changes are identified. Para [0130-0132], The performance change management sub-module 530 executes management of performance expectations and targets. Management can include, but is not limited to, changing or setting levels to be achieved for each and every KPI, and metrics that measure some aspect of business performance. The change workflow sub-module 532 executes management of flow of work, or streamlining to remove bottlenecks, routing, and re-routing. The change rules sub-module 534 adjusts the management of internal rules and regulations that govern day-to-day operations and long term strategies. The change policies sub-module 536 executes the management of external rules and regulations imposed on the business (i.e., policies).
The policies can be changed after a recommendation, which based on a rigorous analysis, has been accepted.). As per claim 25, this is the medium claim to method claim 21. Therefore, it is rejected for the same reasons as above. Inquiry Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUY KHUONG THANH NGUYEN, whose telephone number is (571) 270-7139. The examiner can normally be reached M-F, 8 to 5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lewis Bullock, can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing documents in DOCX format.