Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to claims filed on 09/17/2025.
Claims 1-20 are pending.
Claims 1, 4, 6, 10, 13, 15, and 19 were amended.
Claim Rejections - 35 USC § 112
Applicant's arguments, see page 11, filed 09/17/2025, with respect to the claim rejection under 35 U.S.C. § 112 have been fully considered in view of the amendment and are persuasive. The claim rejection has been withdrawn.
Claim Rejections - 35 USC § 101
Applicant's arguments, see pages 12-15, filed 09/17/2025, with respect to the claim rejection under 35 U.S.C. § 101 have been fully considered in view of the amendment and are persuasive. The claim rejection has been withdrawn.
Claim Rejections - 35 USC § 102
Applicant's amendments and arguments, see Remarks, pages 15-17, filed 09/17/2025, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection, necessitated by the amendments, is made in view of US 2016/0283271 A1 to John Wiley Ashby, Jr.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Razin Sergey, WO 2016/007824 A1, Published: 14 January 2016, (hereafter Sergey), in view of Ashby, Jr. et al. US 2016/0283271 A1 (hereafter Ashby).
Regarding claim 1. Sergey teaches a method for providing predictive cost and performance analytics to facilitate benchmarking of an application host, the method being implemented by at least one processor (Fig 1B, element 42) (Fig 2, Performance) (Fig 4, Performance, cost) (Par 58-59, performance element, cost graph showing trending increase or decrease), the method comprising:
receiving, by the at least one processor via a graphical user interface, at least one input, the at least one input including a request to benchmark at least one networked environment to host an application (Par 79, GUI, element 302, user input to view specific environment);
retrieving, by the at least one processor from a repository based on the at least one input, at least one data storage object that corresponds to the application, the at least one data storage object including at least one from among a deployment artifact and a performance script (Par 82, GUI relative to the API calls, continuously update the GUI) (Fig 4, performance trends, CPU utilization, ready time, host memory, compute performance);
simulating, by the at least one processor via a load performance and based on the retrieved at least one data storage object, deployment of the application in the at least one networked environment (Fig 9, based on the selection Group by application group, GUI displays, VM) (Fig 1A, network computer environment resources) (Par 77, response to receiving a selection);
collecting, by the at least one processor via a listening and from the at least one networked environment, a result of the simulation, the result including at least one metric that corresponds to the application (Par 77, investigate the issue to determine the root cause of the degraded and failed hosts); and
determining, by the at least one processor using at least one model and based on the result of the simulation (Par 4, right sizing the VMs to the workload, therefore sizing the proper virtual machine for the workload, thus modeling the workload processing using the VMs for forecasting system performance), predicted implementation information that corresponds to at least one from among a predicted cost and predicted performance associated with hosting the application in the at least one networked environment at a future timepoint (Par 35, based upon a given window of time, forecasting into the future, for a network performance element)(Fig 7B, money waste, potential savings)(Par 75, idle VMs to be deleted, take appropriate action).
Sergey does not teach simulating, by the at least one processor via a load performance server, wherein the load performance server simulates a workload of the application based on the at least one data storage object and the received at least one input; and collecting, by the at least one processor via a listening server, wherein the listening server monitors messages from the at least one network environment to compile the at least one metric.
Ashby teaches simulating, by the at least one processor via a load performance server (Ashby, Par 71, servers, selected to simulate or model capacity planning and consolidation scenarios), wherein the load performance server simulates a workload of the application based on the at least one data storage object and the received at least one input (Ashby, Fig 5, simulations, current state)(Fig 6, input selection)(Par 63, modeling projections), collecting, by the at least one processor via a listening server (Ashby, Par 35-36, monitoring data, receives data from virtual processors, according with monitoring application)(Par 24, execute on a server), wherein the listening server monitors messages from the at least one network environment to compile the at least one metric (Ashby, Par 35-36, receives system monitoring data, from servers, virtual resources, receives CPU capacity utilization data, generates resource capacity and consumption metrics).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Sergey to incorporate the teachings of Ashby to simulate the workload via a load performance server based on a data storage object and a received input, and to monitor messages from the network environment to obtain a metric, because having a capacity management system assists administrators by interpreting capacity modeling results into actionable analysis to optimize workload placement (Ashby, Par 28).
Regarding claim 2. Sergey and Ashby teach the method of claim 1,
further comprising:
retrieving, by the at least one processor from the at least one model, the predicted implementation information (Sergey, Fig 7B, waste selection, retrieves the data related);
retrieving, by the at least one processor from the at least one networked environment, performance data and hardware data (Sergey, Fig 7B, retrieves vCPU data, state, and utilization, Selection options Compute, Storage, Network);
deriving, by the at least one processor, pricing information for the at least one networked environment based on the retrieved performance data and the retrieved hardware data (Sergey, Fig 7B, Money Waste, potential savings, based on the retrieved information), the pricing information including at least one from among daily pricing information for the at least one networked environment and monthly pricing information for the at least one networked environment (Sergey, Par 49, Waste cost, includes CPU and memory)(Sergey, Fig 3, element 304)(Sergey, Par 81, select from time ranges, 30 days); and
displaying, by the at least one processor via the graphical user interface, at least one from among the predicted implementation information, the performance data, the hardware data, and the derived pricing information in response to the at least one input (Sergey, Fig 7B, displays the information of the process, utilization, of compute, storage, network, and the cost waste).
Regarding claim 3. Sergey and Ashby teach the method of claim 2, wherein the graphical user interface includes at least one dashboard that presents a unified set of data about a series of disparate topics (Sergey, Fig 7B, dashboards, having a set of data of different views).
Regarding claim 4. Sergey and Ashby teach the method of claim 2, wherein the performance data includes an application latency value that corresponds to deployment of the data object in the at least one networked environment (Sergey, Fig 5, storage performance, avg Latency, Latency deviation).
Regarding claim 5. Sergey and Ashby teach the method of claim 2, wherein the hardware data includes a provisioned hardware value relating to an amount of hardware that was dynamically provisioned to achieve a desired application latency value for the at least one networked environment (Sergey, Par 57 identifies base candidate VMs for storage level caching, to improve the performance and eliminate the potential bottleneck in delivering application service level, thus provisioning to achieve performance, utilization latency).
Regarding claim 6. Sergey and Ashby teach the method of claim 1,
further comprising:
collecting, by the at least one processor in real-time, at least one metric from the at least one networked environment, the at least one metric including at least one real-time infrastructure metric and at least one real-time application performance metric (Sergey, Fig 7A, reliability overview, efficiency overview, compute efficiency, compute performance, compute capacity) (Sergey, Par 82, update GUI in real time); and
storing, by the at least one processor, the at least one metric in a database (Ashby, Par 36, store results in a database),
wherein the at least one networked environment includes at least one from among a public cloud network, a private cloud network, and an on-premise network, the on-premise network including a locally hosted computing infrastructure (Sergey, Fig 1A, computer environment resources, server, storage, network) (Sergey, Par 27, obtain data via public API calls relating to network attributes of the computer infrastructure).
Regarding claim 7. Sergey and Ashby teach the method of claim 1, wherein the deployment artifact in the retrieved at least one data storage object is provisioned according to the at least one networked environment prior to the simulation (Sergey, Par 30, forecast trends associated with attributes and metrics, thus the resources are assigned prior to the request) (Sergey, Par 34, storage attributes on the previous day, thus data of the resource before the request time).
Regarding claim 8. Sergey and Ashby teach the method of claim 1, wherein the at least one model includes at least one from among a performance model and a pricing model (Sergey, Fig 7A and 7B, compute performance and waste cost).
Regarding claim 9. Sergey and Ashby teach the method of claim 1, wherein the predicted implementation information includes at least one from among a predicted cost to host the application in the at least one networked environment and a predicted performance of the application in the at least one networked environment (Sergey, Fig 7A and 7B, predicted waste cost of the computer environment for the application being hosted)(Sergey, Par 49, trending, showing the percentage increase or decrease in the waste cost over the last time period).
Regarding claim 10. Sergey teaches a computing device configured to implement an execution of a method for providing predictive cost and performance analytics to facilitate benchmarking of an application host (Fig 1B, element 42)(Fig 2, Performance)(Fig 4, Performance, cost)(Par 58-59, performance element, cost graph showing trending increase or decrease), the computing device comprising:
a processor (Par 24, host device, having processor);
a memory (Par 24, host device, having memory); and
a communication interface coupled to each of the processor and the memory (Par 24, host device, communication with computer infrastructure),
wherein the processor is configured to:
receive, via a graphical user interface, at least one input, the at least one input including a request to benchmark at least one networked environment to host an application (Par 79, GUI, element 302, user input to view specific environment);
retrieve, from a repository based on the at least one input, at least one data storage object that corresponds to the application, the at least one data storage object including at least one from among a deployment artifact and a performance script (Par 82, GUI relative to the API calls, continuously update the GUI) (Fig 4, performance trends, CPU utilization, ready time, host memory, compute performance);
simulate, via a load performance and based on the retrieved at least one data storage object, deployment of the application in the at least one networked environment (Fig 9, based on the selection Group by application group, GUI displays, VM) (Fig 1A, network computer environment resources) (Par 77, response to receiving a selection);
collect, via a listening and from the at least one networked environment, a result of the simulation, the result including at least one metric that corresponds to the application (Par 77, investigate the issue to determine the root cause of the degraded and failed hosts); and
determine, by using at least one model and based on the result of the simulation (Par 4, right sizing the VMs to the workload, therefore sizing the proper virtual machine for the workload, thus modeling the workload processing using the VMs for forecasting system performance), predicted implementation information that corresponds to at least one from among a predicted cost and predicted performance associated with hosting the application in the at least one networked environment at a future timepoint (Par 35, based upon a given window of time, forecasting into the future, for a network performance element)(Fig 7B, money waste, potential savings) (Par 75, idle VMs to be deleted, take appropriate action).
Sergey does not teach simulating, by the at least one processor via a load performance server, wherein the load performance server simulates a workload of the application based on the at least one data storage object and the received at least one input; and collecting, by the at least one processor via a listening server, wherein the listening server monitors messages from the at least one network environment to compile the at least one metric.
Ashby teaches simulating, by the at least one processor via a load performance server (Ashby, Par 71, servers, selected to simulate or model capacity planning and consolidation scenarios), wherein the load performance server simulates a workload of the application based on the at least one data storage object and the received at least one input (Ashby, Fig 5, simulations, current state)(Fig 6, input selection)(Par 63, modeling projections), collecting, by the at least one processor via a listening server (Ashby, Par 35-36, monitoring data, receives data from virtual processors, according with monitoring application) (Par 24, execute on a server), wherein the listening server monitors messages from the at least one network environment to compile the at least one metric (Ashby, Par 35-36, receives system monitoring data, from servers, virtual resources, receives CPU capacity utilization data, generates resource capacity and consumption metrics).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Sergey to incorporate the teachings of Ashby to simulate the workload via a load performance server based on a data storage object and a received input, and to monitor messages from the network environment to obtain a metric, because having a capacity management system assists administrators by interpreting capacity modeling results into actionable analysis to optimize workload placement (Ashby, Par 28).
Regarding claim 11. Sergey and Ashby teach the computing device of claim 10, wherein the processor is further configured to:
retrieve, from the at least one model, the predicted implementation information (Sergey, Fig 7B, waste selection, retrieves the data related);
retrieve, from the at least one networked environment, performance data and hardware data (Sergey, Fig 7B, retrieves vCPU data, state, and utilization, Selection options Compute, Storage, Network);
derive pricing information for the at least one networked environment based on the retrieved performance data and the retrieved hardware data (Sergey, Fig 7B, Money Waste, potential savings, based on the retrieved information), the pricing information including at least one from among daily pricing information for the at least one networked environment and monthly pricing information for the at least one networked environment (Sergey, Par 49, Waste cost, includes CPU and memory)(Sergey, Fig 3, element 304)(Sergey, Par 81, select from time ranges, 30 days); and
display, via the graphical user interface, at least one from among the predicted implementation information, the performance data, the hardware data, and the derived pricing information in response to the at least one input (Sergey, Fig 7B, displays the information of the process, utilization, of compute, storage, network, and the cost waste).
Regarding claim 12. Sergey and Ashby teach the computing device of claim 11, wherein the graphical user interface includes at least one dashboard that presents a unified set of data about a series of disparate topics (Sergey, Fig 7B, dashboards, having a set of data of different views).
Regarding claim 13. Sergey and Ashby teach the computing device of claim 11, wherein the performance data includes an application latency value that corresponds to deployment of the data object in the at least one networked environment (Sergey, Fig 5, storage performance, avg Latency, Latency deviation).
Regarding claim 14. Sergey and Ashby teach the computing device of claim 11, wherein the hardware data includes a provisioned hardware value relating to an amount of hardware that was dynamically provisioned to achieve a desired application latency value for the at least one networked environment (Sergey, Par 57 identifies base candidate VMs for storage level caching, to improve the performance and eliminate the potential bottleneck in delivering application service level, thus provisioning to achieve performance, utilization latency).
Regarding claim 15. Sergey and Ashby teach the computing device of claim 10, wherein the processor is further configured to:
collect, in real-time, at least one metric from the at least one networked environment, the at least one metric including at least one real-time infrastructure metric and at least one real-time application performance metric (Sergey, fig 7A, reliability overview, efficiency overview, compute efficiency, compute performance, compute capacity) (Sergey, Par 82, update GUI in real time); and
store the at least one metric in a database (Ashby, Par 36, store results in a database),
wherein the at least one networked environment includes at least one from among a public cloud network, a private cloud network, and an on-premise network, the on-premise network including a locally hosted computing infrastructure (Sergey, Fig 1A, computer environment resources, server, storage, network) (Sergey, Par 27, obtain data via public API calls relating to network attributes of the computer infrastructure).
Regarding claim 16. Sergey and Ashby teach the computing device of claim 10, wherein the processor is further configured to provision the deployment artifact in the retrieved at least one data storage object according to the at least one networked environment prior to the simulation (Sergey, Par 30, forecast trends associated with attributes and metrics, thus the resources are assigned prior to the request) (Sergey, Par 34, storage attributes on the previous day, thus data of the resource before the request time).
Regarding claim 17. Sergey and Ashby teach the computing device of claim 10, wherein the at least one model includes at least one from among a performance model and a pricing model (Sergey, Fig 7A and 7B, compute performance and waste cost).
Regarding claim 18. Sergey and Ashby teach the computing device of claim 10, wherein the predicted implementation information includes at least one from among a predicted cost to host the application in the at least one networked environment and a predicted performance of the application in the at least one networked environment (Sergey, Fig 7A and 7B, predicted waste cost of the computer environment for the application being hosted)(Sergey, Par 49, trending, showing the percentage increase or decrease in the waste cost over the last time period).
Regarding claim 19. Sergey teaches a non-transitory computer readable storage medium storing instructions for providing predictive cost and performance analytics to facilitate benchmarking of an application host (Fig 1B, element 42) (Fig 2, Performance) (Fig 4, Performance, cost) (Par 58-59, performance element, cost graph showing trending increase or decrease), the storage medium (Par 24, host device, having memory) comprising executable code which, when executed by a processor, causes the processor to:
receive, via a graphical user interface, at least one input, the at least one input including a request to benchmark at least one networked environment to host an application (Par 79, GUI, element 302, user input to view specific environment);
retrieve, from a repository based on the at least one input, at least one data storage object that corresponds to the application, the at least one data storage object including at least one from among a deployment artifact and a performance script (Par 82, GUI relative to the API calls, continuously update the GUI) (Fig 4, performance trends, CPU utilization, ready time, host memory, compute performance);
simulate, via a load performance and based on the retrieved at least one data storage object, deployment of the application in the at least one networked environment (Fig 9, based on the selection Group by application group, GUI displays, VM) (Fig 1A, network computer environment resources) (Par 77, response to receiving a selection);
collect, via a listening and from the at least one networked environment, a result of the simulation, the result including at least one metric that corresponds to the application (Par 77, investigate the issue to determine the root cause of the degraded and failed hosts); and
determine, by using at least one model and based on the result of the simulation (Par 4, right sizing the VMs to the workload, therefore sizing the proper virtual machine for the workload, thus modeling the workload processing using the VMs for forecasting system performance), predicted implementation information that corresponds to at least one from among a predicted cost and predicted performance associated with hosting the application in the at least one networked environment at a future timepoint (Par 35, based upon a given window of time, forecasting into the future, for a network performance element)(Fig 7B, money waste, potential savings)(Par 75, idle VMs to be deleted, take appropriate action).
Sergey does not teach simulating, by the at least one processor via a load performance server, wherein the load performance server simulates a workload of the application based on the at least one data storage object and the received at least one input; and collecting, by the at least one processor via a listening server, wherein the listening server monitors messages from the at least one network environment to compile the at least one metric.
Ashby teaches simulating, by the at least one processor via a load performance server (Ashby, Par 71, servers, selected to simulate or model capacity planning and consolidation scenarios), wherein the load performance server simulates a workload of the application based on the at least one data storage object and the received at least one input (Ashby, Fig 5, simulations, current state)(Fig 6, input selection)(Par 63, modeling projections), collecting, by the at least one processor via a listening server (Ashby, Par 35-36, monitoring data, receives data from virtual processors, according with monitoring application)(Par 24, execute on a server), wherein the listening server monitors messages from the at least one network environment to compile the at least one metric (Ashby, Par 35-36, receives system monitoring data, from servers, virtual resources, receives CPU capacity utilization data, generates resource capacity and consumption metrics).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Sergey to incorporate the teachings of Ashby to simulate the workload via a load performance server based on a data storage object and a received input, and to monitor messages from the network environment to obtain a metric, because having a capacity management system assists administrators by interpreting capacity modeling results into actionable analysis to optimize workload placement (Ashby, Par 28).
Regarding claim 20. Sergey and Ashby teach the storage medium of claim 19, wherein the predicted implementation information includes at least one from among a predicted cost to host the application in the at least one networked environment and a predicted performance of the application in the at least one networked environment (Sergey, Fig 7A and 7B, predicted waste cost of the computer environment for the application being hosted)(Sergey, Par 49, trending, showing the percentage increase or decrease in the waste cost over the last time period).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGEL JAVIER CALLE whose telephone number is (571) 272-0463. The examiner can normally be reached Monday - Friday, 7:30 a.m. - 5 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rehana Perveen, can be reached at (571) 272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.C./Examiner, Art Unit 2189
/REHANA PERVEEN/Supervisory Patent Examiner, Art Unit 2189