Prosecution Insights
Last updated: April 19, 2026
Application No. 17/506,494

ADJUSTING RESOURCES WITHIN A HYPERCONVERGED INFRASTRUCTURE SYSTEM BASED ON ENVIRONMENTAL INFORMATION

Non-Final OA (§103, §112)
Filed: Oct 20, 2021
Examiner: LIN, HSING CHUN
Art Unit: 2195
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 5 (Non-Final)
Grant Probability: 59% (Moderate)
OA Rounds: 5-6
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 59% (64 granted / 108 resolved; +4.3% vs TC avg)
Interview Lift: +79.8% (strong), comparing resolved cases with and without an interview
Typical Timeline: 3y 4m average prosecution; 37 applications currently pending
Career History: 145 total applications across all art units
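
These headline figures reduce to simple ratios. The sketch below reproduces the allow rate from the displayed counts and shows one way the +79.8% interview lift could arise; the lift formula and the with/without rates are assumptions, since the dashboard's exact methodology is not published:

```python
# Reconstructing the headline examiner stats (illustrative only).
granted, resolved = 64, 108
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 59.3%, displayed as 59%

# Assumed definition of interview lift: relative increase in allowance
# rate for resolved cases with an interview vs. those without.
# The two rates below are hypothetical inputs, not dashboard data.
rate_without, rate_with = 0.52, 0.935
lift = (rate_with - rate_without) / rate_without
print(f"Interview lift: {lift:+.1%}")          # +79.8%
```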

Statute-Specific Performance

§101: 17.1% (-22.9% vs TC avg)
§103: 35.8% (-4.2% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 34.0% (-6.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 108 resolved cases.
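
Each delta is read here as the examiner's rate minus the Tech Center average, so the implied baseline can be back-derived from the displayed numbers (an assumed convention; the dashboard does not define its delta):

```python
# Back-derive the implied Tech Center average for each statute:
# TC average = examiner rate - displayed delta.
stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "§101": (17.1, -22.9),
    "§103": (35.8, -4.2),
    "§102": (6.5, -33.5),
    "§112": (34.0, -6.0),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}% ({delta:+.1f} pts)")
```

Notably, all four implied baselines come out to 40.0%, consistent with the note above that the Tech Center average is an estimate rather than a per-statute measurement.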

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-9 and 11-26 are pending in this application.

Response to Arguments

Applicant's arguments regarding the rejections of claims 1-9 and 11-26 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejections have been withdrawn. However, new 35 U.S.C. 112(b) rejections are applied to claims 1-9 and 11-26.

Applicant's arguments regarding the rejection of claim 22 under 35 U.S.C. 112(d) have been fully considered and are persuasive. The rejection has been withdrawn.

Applicant's arguments regarding the 35 U.S.C. 101 rejections of claims 25 and 26 have been fully considered and are persuasive.

Applicant's arguments regarding the 35 U.S.C. 103 rejections of claims 1-9 and 11-26 have been fully considered, but they are either moot or unpersuasive. The 35 U.S.C. 103 rejections of claims 1-9 and 11-24 are moot in light of the references being applied in the current rejection. The 35 U.S.C. 103 rejections of claims 25-26 are unpersuasive. In the remarks regarding the 35 U.S.C. 103 rejection of claim 25, Applicant refers to the part of the final office action that cited Chen to teach capping, by the computer, the amount of resources allocated to the one of the user-side applications at a predetermined resource threshold, and argues that the cited portion of Chen does not teach this limitation. However, Singh does teach this limitation.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 23 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

As per claim 23: Lines 30-32 recite "an amount of merged resources currently allocated to all of the user-side applications running on the node are reduced in response to the risk score exceeding a predetermined threshold", but this is not supported by the specification. The specification recites in [0096] "in response to determining that the risk score for a user-side application exceeds the predetermined threshold, the user-side application may be identified as risky (e.g., using metadata such as a tag, etc.)
and an amount of merged resources currently allocated to the risky user-side application (e.g., one or more of computing, storage, memory, and networking resources, etc.) may be reduced within the HCI system." The specification does not support reducing the resources allocated to all of the user-side applications running on the node when the risk score exceeds a predetermined threshold; it supports reducing only the resources allocated to the risky user-side application.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-9 and 11-26 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

As per claims 1, 13, and 23 (line numbers refer to claim 1): Lines 14-17 recite "the increased amount of the one or more resources allocated to the one or more system-side applications…are retrieved from resources allocated to other system-side applications", but it is unclear whether "other system-side applications" refers to system-side applications besides "the one or more system-side applications".

As per claim 23: Lines 20-21 recite "wherein the one or more resources are retrieved from resources allocated to other system-side applications". However, lines 13-14 recite "adjust one or more resources allocated to the one or more system-side applications" and lines 17-19 recite "the adjusting includes allocating an increased amount of the one or more resources to the one or more system-side applications". It does not make sense for the one or more resources to be retrieved from resources allocated to other system-side applications. Rather, it is the increased amount that is retrieved from resources allocated to other system-side applications.

As per claim 24: Line 24 recites "the resources", but it is unclear whether this refers to "the predetermined resources".

As per claim 25: Lines 11-12 recite "a first of the nodes", but it is unclear what "first" refers to (a first node?).

Claims 2-9, 11-12, 14-22, and 26 are dependent claims of claims 1, 13, and 25 and fail to resolve the deficiencies of claims 1, 13, and 25, so they are rejected for similar reasons.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9 and 11-22 are rejected under 35 U.S.C. 103 as being unpatentable over Vohra et al. (US 20220253255 A1, hereinafter Vohra), in view of Memon et al.
(US 20200028894 A1, hereinafter Memon), and further in view of Singh et al. (LEASH: Enhancing Micro-architectural Attack Detection with a Reactive Process Scheduler, hereinafter Singh).

As per claim 1, Vohra teaches a computer-implemented method, comprising: identifying, by a computer, environmental information for a hyper-converged infrastructure (HCI) system ([0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0185] The storage systems described above may alone, or in combination with other computing resources, serves as a network edge platform that combines compute resources, storage resources, networking resources; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0116] Such data analytics applications may be configured, for example, to receive telemetry data phoned home by the storage system 306. Such telemetry data may describe various operating characteristics of the storage system 306 and may be analyzed, for example, to determine the health of the storage system 306); and in response to determining that the environmental information indicates a need to perform data recovery operations within the HCI system due to failure of one or more hardware storage resources ([0288] In this example cloud storage architecture of the cloud-based storage system (1002), recovery from data loss may be implemented in multiple ways. As one example, within the non-durable cloud storage layer (1004), one or more of the cloud computing instances (424a-424n) may fail, or portions of storage (414, 426 . . . 422, 430) for one or more of the cloud computing instances (424a-424n) may fail; [0213] The cloud computing instances (424a, 424b, 424n) with local storage (414, 418, 422) may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage (414, 418, 422) must be embodied as solid-state storage (e.g., SSDs); [0292] loss of one or more cloud computing instances, such as the loss of cloud computing instance (424a); [0157] Readers will appreciate that the various components depicted in FIG.
Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0144] As such, one or more modules of computer program instructions that are executing within the cloud-based storage system 318 (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances): identifying, by the computer, one or more system-side applications already running within the HCI system that are needed to perform the data recovery operations within the HCI system, and increasing, by the computer, an amount of one or more resources allocated to the one or more system-side applications already running within the HCI system needed to perform the data recovery operations within the HCI system for reducing a time for performing the data recovery operations, wherein the increased amount of the one or more resources allocated to the one or more system-side applications needed to perform the data recovery operations within the HCI system are retrieved, wherein the one or more system-side applications perform the data recovery operations using at least the increased amount of the one or more resources allocated to the one or more system-side applications ([0144] As such, one or more modules of computer program instructions that are executing within the cloud-based storage system 318 (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances; [0148] Readers will appreciate that various performance aspects of the cloud-based storage system 318 may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system 318 can be scaled-up or scaled-out as needed; [0157] Readers will appreciate that the various components depicted in FIG. 
3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0279] In some implementations, the cloud-based storage system (1002) may be designed to recover from a data loss by scaling up the cloud architecture used to initially store the data to recover data more quickly than if the original cloud architecture were used.). Vohra fails to teach wherein the increased amount of the one or more resources allocated to the one or more system-side applications are retrieved from resources allocated to other system-side applications within the HCI system, in response to determining that the environmental information includes an existence of a security threat associated with a compromised user-side application, reducing an amount of merged resources currently allocated to the compromised user-side application. However, Memon teaches wherein the increased amount of the one or more resources allocated to the one or more system-side applications are retrieved from resources allocated to other system-side applications within the HCI system ([0089] a load imbalance for the then-current hosting virtualized controller (e.g., virtualized controller 126.sub.1) might be triggered by a threshold breach associated with one or more load metrics (e.g., CPU utilization, CPU processes, storage I/O bandwidth, storage IOPS, etc.), and/or by satisfying a certain sets of rules. The leader virtualized controller can then select an alternative virtualized controller (e.g., a VC unloaded or less loaded as compared to the then-current hosting VC) to which the computing device 112 can be redirected for attaching to the storage target (operation 404); [0090] According to the herein disclosed techniques, some or all sessions on a then-current hosting virtualized controller can be migrated to the alternative virtualized controller. 
Specifically, the leader virtualized controller can issue a storage target connection migrate command to the then-current hosting virtualized controller (e.g., virtualized controller 126.sub.1) to migrate one or more of its connections (message 406); [0036] As shown, the computing device 112 can attach to the storage targets 122 by connecting to the selected virtualized controller 197 (at step 3); [0037] In highly dynamic distributed storage systems, changes to the environment can occur such that the leader virtualized controller 195 might no longer identify the selected virtualized controller 197 as the virtualized controller to serve the computing device 112… The leader virtualized controller 195 can then initiate rebalancing (step 5) by issuing a redirect message to the computing device 112, which redirect message identifies an alternative virtualized controller 199 that has been selected on the basis of various policies or criteria (step 6). The alternative virtualized controller 199 can serve as the controller for attaching the storage targets 122 to the computing device 112. As shown, the computing device 112 can attach to the storage targets 122 by connecting to the alternative virtualized controller 199 (at step 7); [0135] A hyperconverged system coordinates the efficient use of compute and storage resources by and between the components of the distributed system; [0083] Specifically, the herein disclosed techniques can be used to attach the computing device 112 to a storage target in the distributed storage system 104 using a highly available virtual portal with a protocol redirect (at grouping 2182). For example, as shown, the redirect might be to virtualized controller 126.sub.1 (e.g., VC1). After a time lapse 140.sub.2, the virtualized controller 126.sub.1 selected to host the session for the storage target (e.g., iSCSI target) might fail. In such cases, the broken connection 302 can trigger a TCP reset detected by the computing device 112 (operation 304). Responsive to the TCP reset, a login (e.g., re-login) from the computing device 112 can be received by the leader virtualized controller hosting the virtual IP address (message 308). For example, virtualized controller 126.sub.2 might be the leader virtualized controller to receive the login at <vIPa>:3260. Upon receiving the login, the leader virtualized controller will detect that the preferred virtualized controller for the storage target (e.g., VC1) is down (operation 309). The leader virtualized controller can then select a healthy failover virtualized controller to which the login can be redirected (operation 310). For example, virtualized controller 126.sub.32 (e.g., VC32) might be selected as the failover virtualized controller. The leader virtualized controller can then issue a login redirection response to the computing device 112 comprising identifying information (e.g., IP address, port, etc.) pertaining to the failover virtualized controller (message 312). The computing device 112 can respond to the redirect by logging into the failover virtualized controller (message 314).).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Vohra with the teachings of Memon to improve performance (see Memon [0086] For example, the virtualized controller host redistribution might be implemented so as to improve a degraded storage access performance resulting from the event.).
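
For orientation, the Memon passages cited above reduce to a threshold-triggered rebalancing pattern: a load metric on the hosting virtualized controller breaches a threshold, a leader selects a less loaded controller, and sessions are migrated or redirected to it. The sketch below is a generic rendering of that pattern under assumed names, metrics, and thresholds; it is an editorial illustration, not code from Memon or from the application under examination:

```python
# Generic sketch of threshold-triggered rebalancing in the style the
# examiner attributes to Memon [0089]-[0090]. All names, metrics, and
# thresholds here are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Controller:
    name: str
    cpu_util: float                      # 0.0 - 1.0 load metric
    sessions: list = field(default_factory=list)

CPU_THRESHOLD = 0.85                     # assumed breach threshold

def rebalance(controllers):
    """Migrate sessions from any controller breaching the threshold to
    the least-loaded alternative, mirroring the 'leader selects an
    unloaded or less loaded VC' step."""
    for hot in [c for c in controllers if c.cpu_util > CPU_THRESHOLD]:
        target = min(controllers, key=lambda c: c.cpu_util)
        if target is hot:
            continue                     # nowhere better to go
        while hot.sessions:
            target.sessions.append(hot.sessions.pop())
        # a real system would re-sample load metrics here

vc1 = Controller("VC1", cpu_util=0.92, sessions=["iscsi-sess-1", "iscsi-sess-2"])
vc2 = Controller("VC2", cpu_util=0.30)
rebalance([vc1, vc2])
print(vc2.sessions)                      # ['iscsi-sess-2', 'iscsi-sess-1']
```

In Memon's scheme the selection step can also weigh policies beyond raw load (see [0037], "selected on the basis of various policies or criteria"); the min-by-utilization choice above is the simplest stand-in.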
Vohra and Memon fail to teach in response to determining that the environmental information includes an existence of a security threat associated with a compromised user-side application, reducing an amount of merged resources currently allocated to the compromised user-side application. However, Singh teaches in response to determining that the environmental information includes an existence of a security threat associated with a compromised user-side application, reducing an amount of merged resources currently allocated to the compromised user-side application (Section 2.2 paragraph 1 Most modern processors have a Performance Monitoring Unit on-chip to monitor micro-architectural events of running applications. Each logical core has a dedicated set of 4 to 8 configurable registers that can count the number of times a particular event occurs in a given duration. These registers are called Hardware Performance Counters (HPCs) and can be used to monitor a wide range of events like CPU-cycles, cache accesses, context-switches, and page faults; pg. 3 left column paragraph 2 LEASH, similarly uses HPCs to compute a threat index for a thread. Threads which depict a micro-architectural attack like behavior are given a high threat index and are throttled by reducing their CPU time; pg. 3, Section 4 (The LEASH Framework) paragraph 1 Micro-architectural attacks depend considerably on CPU resources. If the attack programs are starved of the CPU, the success drops considerably; Fig. 1a caption LEASH stymies micro-architectural attacks by detecting malicious behavior in programs and reducing its CPU-share, thereby reducing the leakage from the shared resource; Section 1 paragraph 2 In a typical micro-architectural attack, the attacker runs a program called the spy that contends with a victim program for shared hardware resources).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Vohra and Memon with the teachings of Singh to prevent attacks (see Singh pg. 2 left column paragraph 2 If the spy gets insufficient time to execute on the CPU, then the information leakage is reduced, stymieing the attack).

As per claim 2, Vohra, Memon, and Singh teach the computer-implemented method of Claim 1. Vohra teaches wherein the environmental information includes a status of one or more hardware resources within the HCI system ([0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306.
Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0185] The storage systems described above may alone, or in combination with other computing resources, serves as a network edge platform that combines compute resources, storage resources, networking resources; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0116] Such data analytics applications may be configured, for example, to receive telemetry data phoned home by the storage system 306. Such telemetry data may describe various operating characteristics of the storage system 306 and may be analyzed, for example, to determine the health of the storage system 306).

As per claim 3, Vohra, Memon, and Singh teach the computer-implemented method of Claim 1. Vohra teaches wherein a status of one or more hardware resources within the HCI system is provided by one or more monitoring elements, the one or more monitoring elements comprising monitoring software and/or hardware ([0148] the monitoring module monitors the performance of the cloud-based storage system 318; [0229] the monitoring module determines that the utilization of the local storage that is collectively provided by the cloud computing instances (424a, 424b, 424n) has reached a predetermined utilization threshold (e.g., 95%); [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above).

As per claim 4, Vohra, Memon, and Singh teach the computer-implemented method of Claim 1. Vohra teaches wherein the environmental information includes an indication of a need to perform data recovery operations within the HCI system ([0279] In some implementations, the cloud-based storage system (1002) may be designed to recover from a data loss by scaling up the cloud architecture used to initially store the data to recover data more quickly than if the original cloud architecture were used; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes.
Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above). As per claim 5, Vohra, Memon, and Singh teach the computer-implemented method of Claim 1. Vohra teaches wherein the environmental information includes an indication of system-side applications that are needed to perform data recovery operations within the HCI system ([0144] As such, one or more modules of computer program instructions that are executing within the cloud-based storage system 318 (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances; [0303] the storage controller application (408) may receive a signal indicating the failure, where the storage controller application (408) may then initiate creation (1104) of the replacement; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above). As per claim 6, Vohra, Memon, and Singh teach the computer-implemented method of Claim 1. Vohra teaches wherein the environmental information includes a risk score determined for one or more applications running within the HCI system ([0212] In fact, in other embodiments where costs savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application. 
In such an example, a controller failure may take more time to recover from as a new cloud computing instance that includes the storage controller application would need to be spun up rather than having an already created cloud computing instance take on the role of servicing I/O operations that would have otherwise been handled by the failed cloud computing instance; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above). Additionally, Singh teaches a risk score determined for one or more applications by security monitoring software that intercepts and parses data output and input by the one or more applications (Section 2.2 paragraph 1 Most modern processors have a Performance Monitoring Unit on-chip to monitor micro-architectural events of running applications. Each logical core has a dedicated set of 4 to 8 configurable registers that can count the number of times a particular event occurs in a given duration. These registers are called Hardware Performance Counters (HPCs) and can be used to monitor a wide range of events like CPU-cycles, cache accesses, context-switches, and page faults; pg. 2 left column paragraph 3 it uses the HPCs to quantify the malicious behavior of each thread in the system using a metric called threat index). As per claim 7, Vohra, Memon, and Singh teach the computer-implemented method of Claim 1. Vohra teaches comprising, in response to determining that the environmental information includes an indication that a new hardware storage resource has been added to the HCI system, increasing one or more resources allocated to one or more applications within the HCI system until a building of data on the new hardware storage resource is completed ([0144] As such, one or more modules of computer program instructions that are executing within the cloud-based storage system 318 (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338. 
In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above). As per claim 8, Vohra, Memon, and Singh teach the computer-implemented method of Claim 1. Vohra teaches wherein the data recovery operations include erasure coding-based rebuilding of data of the one or more failed hardware storage resources ([0147] Consider an example in which 1000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system 318 have written to the cloud-based storage system 318. In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created, where each cloud computing instance is responsible for retrieving, from the cloud-based object storage 348, distinct 1/100,000th chunks of the valid data that users of the cloud-based storage system 318 have written to the cloud-based storage system 318 and locally storing the distinct chunk of the dataset that it retrieved. In such an example, because each of the 100,000 cloud computing instances can retrieve data from the cloud-based object storage 348 in parallel, the caching layer may be restored 100 times faster as compared to an embodiment where the monitoring module only create 1000 replacement cloud computing instances. In such an example, over time the data that is stored locally in the 100,000 could be consolidated into 1,000 cloud computing instances and the remaining 99,000 cloud computing instances could be terminated.). As per claim 9, Vohra, Memon, and Singh teach the computer-implemented method of Claim 1. 
Vohra teaches wherein in response to determining that updated environmental information includes determination that data recovery operations have been completed within the HCI system, additional merged resources allocated to one or more system-side applications that performed the data recovery operations within the HCI system are removed ([0227] Consider an example in which 1000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system (403) have written to the cloud-based storage system (403). In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created, where each cloud computing instance is responsible for retrieving, from the cloud-based object storage (432), distinct 1/100,000.sup.th chunks of the valid data that users of the cloud-based storage system (403) have written to the cloud-based storage system (403) and locally storing the distinct chunk of the dataset that it retrieved. In such an example, because each of the 100,000 cloud computing instances can retrieve data from the cloud-based object storage (432) in parallel, the caching layer may be restored 100 times faster as compared to an embodiment where the monitoring module only create 1000 replacement cloud computing instances. In such an example, over time the data that is stored locally in the 100,000 could be consolidated into 1,000 cloud computing instances and the remaining 99,000 cloud computing instances could be terminated; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above). As per claim 11, Vohra, Memon, and Singh teach the computer-implemented method of Claim 1. Vohra teaches an amount of merged resources currently allocated to the second user-side application is reduced within the HCI system for minimizing an amount of negative activity within the HCI system thereby improving a performance of the HCI system ([0149] if the pool of local storage that is offered by the cloud computing instances is unnecessarily large, data can be consolidated and some cloud computing instances can be terminated; [0209] The cloud computing instances (404, 406) may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines); [0332] removing unnecessary resources from the storage pool (1424) in order to save costs; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. 
Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above). Additionally, Singh teaches wherein in response to determining that a risk score calculated for a second user-side application exceeds a predetermined threshold, the second user-side application is identified as risky and an amount of resources currently allocated to the second user-side application is reduced for minimizing an amount of negative activity that the second user-side application is capable of performing thereby improving a performance (pg. 2 left column paragraph 3 A high value of threat index indicates that the process gets less CPU time; pg. 3 left column paragraph 2 LEASH, similarly uses HPCs to compute a threat index for a thread. Threads which depict a micro-architectural attack like behavior are given a high threat index and are throttled by reducing their CPU time; pg. 2 left column paragraph 2 LEASH makes use of the observation that a spy thread in a micro-architectural attack needs to contend with the victim for a shared resource. The success of the attack depends on the extent to which the spy can force this contention. If the spy gets insufficient time to execute on the CPU, then the information leakage is reduced, stymieing the attack; Section 2.2 paragraph 1 Most modern processors have a Performance Monitoring Unit on-chip to monitor micro-architectural events of running applications. Each logical core has a dedicated set of 4 to 8 configurable registers that can count the number of times a particular event occurs in a given duration. These registers are called Hardware Performance Counters (HPCs) and can be used to monitor a wide range of events like CPU-cycles, cache accesses, context-switches, and page faults; Section 2.2 paragraph 2 detecting anomalous behavior in programs). As per claim 12, Vohra, Memon, and Singh teach the computer-implemented method of Claim 11. Singh teaches wherein the amount of merged resources no longer allocated to the second user-side application in response to determining that the risk score calculated for the second user-side application exceeds the predetermined threshold is allocated to one or more other applications having a risk score below the predetermined threshold, wherein in response to determining that a risk score for the second user-side application no longer exceeds the predetermined threshold, the second user-side application is no longer identified as risky, and the amount of merged resources is returned to the second user-side application (pg. 4 left column paragraph 1 LEASH uses HPCs to detect such anomalous behavior and penalizes such threads by decreasing their weight, which in turn reduces their timeslice (Equation 1). 
If the thread stops exhibiting the anomalous behavior, its weight is gradually increased, thus regaining its regular timeslice; pg. 5 left column paragraph 3 In our evaluation platform, γ = 0.1 which means that, for every rise in threat index values, the weight drops by 10% until it reaches wMIN. Similarly, when a thread is recovering, the threat index value is negative and hence every fall in threat index value increases its weight by 10% until its weight is restored. The adaptable design of LEASH efficiently brings down the cost of a false penalization. Once a benign thread, which is erroneously flagged, is unflagged, it regains its CPU share; Section 2.2 paragraph 1 Most modern processors have a Performance Monitoring Unit on-chip to monitor micro-architectural events of running applications. Each logical core has a dedicated set of 4 to 8 configurable registers that can count the number of times a particular event occurs in a given duration. These registers are called Hardware Performance Counters (HPCs) and can be used to monitor a wide range of events like CPU-cycles, cache accesses, context-switches, and page faults; Section 2.2 paragraph 2 detecting anomalous behavior in programs).

As per claim 13, it is a computer program product claim of claim 1, so it is rejected for similar reasons. Additionally, Vohra teaches a computer program product comprising one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions comprising instructions configured to cause one or more processors to perform a method ([0341] These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner).

As per claim 14, Vohra, Memon, and Singh teach the computer program product of Claim 13. Vohra teaches wherein the environmental information includes a status of one or more hardware resources within the HCI system, wherein the amount of merged resources allocated to the one or more system-side applications already running within the HCI system needed to perform the data recovery operations within the HCI system is retrieved from a portion of merged resources held in reserve within the HCI system ([0288] In this example cloud storage architecture of the cloud-based storage system (1002), recovery from data loss may be implemented in multiple ways. As one example, within the non-durable cloud storage layer (1004), one or more of the cloud computing instances (424a-424n) may fail, or portions of storage (414, 426 . . . 422, 430) for one or more of the cloud computing instances (424a-424n) may fail; [0213] The cloud computing instances (424a, 424b, 424n) with local storage (414, 418, 422) may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage (414, 418, 422) must be embodied as solid-state storage (e.g., SSDs); [0292] loss of one or more cloud computing instances, such as the loss of cloud computing instance (424a); [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures.
Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0144] As such, one or more modules of computer program instructions that are executing within the cloud-based storage system 318 (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances; [0240] Readers will appreciate that, in an effort to increase the resiliency of the cloud-based storage systems described above, various components may be located within different availability zones. For example, a first cloud computing instance that supports the execution of the storage controller application may be located within a first availability zone while a second cloud computing instance that also supports the execution of the storage controller application may be located within a second availability zone. Likewise, the cloud computing instances with local storage may be distributed across multiple availability zones. In fact, in some embodiments, an entire second cloud-based storage system could be created in a different availability zone, where data in the original cloud-based storage system is replicated (synchronously or asynchronously) to the second cloud-based storage system so that if the entire original cloud-based storage system went down, a replacement cloud-based storage system (the second cloud-based storage system) could be brought up in a trivial amount of time.).

As per claim 15, it is a computer program product claim of claim 3, so it is rejected for similar reasons.

As per claim 16, it is a computer program product claim of claim 4, so it is rejected for similar reasons.

As per claim 17, it is a computer program product claim of claim 5, so it is rejected for similar reasons.

As per claim 18, Vohra, Memon, and Singh teach the computer program product of Claim 13. Vohra teaches wherein the environmental information includes a risk score determined for one or more applications running within the HCI system ([0212] In fact, in other embodiments where costs savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application.
In such an example, a controller failure may take more time to recover from as a new cloud computing instance that includes the storage controller application would need to be spun up rather than having an already created cloud computing instance take on the role of servicing I/O operations that would have otherwise been handled by the failed cloud computing instance; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above).

As per claim 19, Vohra, Memon, and Singh teach the computer program product of Claim 18. Singh teaches comprising, in response to determining that the risk score for one of the applications exceeds a predetermined threshold, identifying the application as risky; and reducing an amount of merged resources currently allocated to the application (pg. 2 left column paragraph 3 A high value of threat index indicates that the process gets less CPU time; pg. 3 left column paragraph 2 LEASH, similarly uses HPCs to compute a threat index for a thread. Threads which depict a micro-architectural attack like behavior are given a high threat index and are throttled by reducing their CPU time).

As per claim 20, Vohra, Memon, and Singh teach the computer program product of Claim 13. Vohra teaches wherein in response to determining that the environmental information indicates a need to perform data recovery operations within the HCI system, an amount of resources currently allocated to one or more system-side applications needed to perform the data recovery operations within the HCI system is increased by allocating additional resources thereto ([0144] As such, one or more modules of computer program instructions that are executing within the cloud-based storage system 318 (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures.
Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0279] In some implementations, the cloud-based storage system (1002) may be designed to recover from a data loss by scaling up the cloud architecture used to initially store the data to recover data more quickly than if the original cloud architecture were used.). As per claim 21, Vohra, Memon, and Singh teach the computer program product of Claim 20. Vohra teaches wherein in response to determining that updated environmental information includes determination that data recovery operations have been completed within the HCI system, additional merged resources allocated to one or more system-side applications that performed the data recovery operations within the HCI system are removed ([0227] Consider an example in which 1000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system (403) have written to the cloud-based storage system (403). In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created, where each cloud computing instance is responsible for retrieving, from the cloud-based object storage (432), distinct 1/100,000.sup.th chunks of the valid data that users of the cloud-based storage system (403) have written to the cloud-based storage system (403) and locally storing the distinct chunk of the dataset that it retrieved. In such an example, because each of the 100,000 cloud computing instances can retrieve data from the cloud-based object storage (432) in parallel, the caching layer may be restored 100 times faster as compared to an embodiment where the monitoring module only create 1000 replacement cloud computing instances. In such an example, over time the data that is stored locally in the 100,000 could be consolidated into 1,000 cloud computing instances and the remaining 99,000 cloud computing instances could be terminated; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. 
Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above). Additionally, Memon teaches reallocated to the other applications from which the additional resources were retrieved ([0083] Specifically, the herein disclosed techniques can be used to attach the computing device 112 to a storage target in the distributed storage system 104 using a highly available virtual portal with a protocol redirect (at grouping 2182). For example, as shown, the redirect might be to virtualized controller 126.sub.1 (e.g., VC1). After a time lapse 140.sub.2, the virtualized controller 126.sub.1 selected to host the session for the storage target (e.g., iSCSI target) might fail. In such cases, the broken connection 302 can trigger a TCP reset detected by the computing device 112 (operation 304). Responsive to the TCP reset, a login (e.g., re-login) from the computing device 112 can be received by the leader virtualized controller hosting the virtual IP address (message 308). For example, virtualized controller 126.sub.2 might be the leader virtualized controller to receive the login at <vIPa>:3260. Upon receiving the login, the leader virtualized controller will detect that the preferred virtualized controller for the storage target (e.g., VC1) is down (operation 309). The leader virtualized controller can then select a healthy failover virtualized controller to which the login can be redirected (operation 310). For example, virtualized controller 126.sub.32 (e.g., VC32) might be selected as the failover virtualized controller. The leader virtualized controller can then issue a login redirection response to the computing device 112 comprising identifying information (e.g., IP address, port, etc.) pertaining to the failover virtualized controller (message 312). The computing device 112 can respond to the redirect by logging into the failover virtualized controller (message 314). For example, as shown, the redirected login can be to <VC32-IPa> at port 3205. Upon a successful login, an attach success message can be issued to the computing device 112 (message 316). [0084] The then-current failover virtualized controller hosting the storage target (e.g., virtualized controller 126.sub.32) can monitor the preferred virtualized controller (e.g., virtualized controller 126.sub.1) to determine when it might be available for an automatic failback operation (operation 322). For example, after a time lapse 1403, the preferred virtualized controller (e.g., virtualized controller 126.sub.1) might be brought back online. The failover virtualized controller (e.g., virtualized controller 126.sub.32) might receive a health notification indicating the preferred virtualized controller is available (message 324). The failover virtualized controller might then quiesce any storage I/O pertaining to the storage target to facilitate closing the connection with the computing device 112 (message 326). 
Responsive to the closed connection, the computing device 112 can attach to the storage target through the preferred virtualized controller (e.g., virtualized controller 126.sub.1) using a highly available virtual portal with a protocol redirect, according to herein disclosed techniques (at grouping 218.sub.3).). As per claim 22, Vohra, Memon, and Singh teach the computer program product of Claim 13. Singh teaches wherein in response to determining that the environmental information includes an existence of a security threat associated with a plurality of user-side applications, an amount of merged resources currently allocated to the plurality of user-side applications is reduced (Section 2.2 paragraph 1 Most modern processors have a Performance Monitoring Unit on-chip to monitor micro-architectural events of running applications. Each logical core has a dedicated set of 4 to 8 configurable registers that can count the number of times a particular event occurs in a given duration. These registers are called Hardware Performance Counters (HPCs) and can be used to monitor a wide range of events like CPU-cycles, cache accesses, context-switches, and page faults; pg. 3 left column paragraph 2 LEASH, similarly uses HPCs to compute a threat index for a thread. Threads which depict a micro-architectural attack like behavior are given a high threat index and are throttled by reducing their CPU time; pg. 3 4. The LEASH Framework paragraph 1 Micro-architectural attacks depend considerably on CPU resources. If the attack programs are starved of the CPU, the success drops considerably; Fig. 1a caption LEASH stymies micro-architectural attacks by detecting malicious behavior in programs and reducing its CPU-share, thereby reducing the leakage from the shared resource; Section 1 paragraph 2 In a typical micro-architectural attack, the attacker runs a program called the spy that contends with a victim program for shared hardware resources). Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Vohra, in view of Memon, and further in view of Gupta et al. (US 11249790 B1 hereinafter Gupta). As per claim 23, Vohra teaches a system, comprising: a hardware processor; and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor, the logic being configured to ([0336] Embodiments can include be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.): identify environmental information for a hyper-converged infrastructure (HCI) system ([0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. 
Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0185] The storage systems described above may alone, or in combination with other computing resources, serves as a network edge platform that combines compute resources, storage resources, networking resources; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0116] Such data analytics applications may be configured, for example, to receive telemetry data phoned home by the storage system 306. Such telemetry data may describe various operating characteristics of the storage system 306 and may be analyzed, for example, to determine the health of the storage system 306); and in response to determining that the environmental information indicates a need to perform data recovery operations within the HCI system due to failure of one or more hardware storage resources ([0288] In this example cloud storage architecture of the cloud-based storage system (1002), recovery from data loss may be implemented in multiple ways. As one example, within the non-durable cloud storage layer (1004), one or more of the cloud computing instances (424a-424n) may fail, or portions of storage (414, 426 . . . 422, 430) for one or more of the cloud computing instances (424a-424n) may fail; [0213] The cloud computing instances (424a, 424b, 424n) with local storage (414, 418, 422) may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage (414, 418, 422) must be embodied as solid-state storage (e.g., SSDs); [0292] loss of one or more cloud computing instances, such as the loss of cloud computing instance (424a); [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0144] As such, one or more modules of computer program instructions that are executing within the cloud-based storage system 318 (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338.
In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances): identify one or more system-side applications already running within the HCI system that are needed to perform the data recovery operations within the HCI system, and adjust one or more resources allocated to the one or more system-side applications already running within the HCI system that are needed to perform the data recovery operations within the HCI system for reducing a time for performing the data recovery operations, wherein the adjusting includes allocating an increased amount of the one or more resources to the one or more system-side applications needed to perform the data recovery operations within the HCI system, wherein the one or more resources are retrieved, wherein the one or more system-side applications perform the data recovery operations using at least the adjusted one or more resources allocated to the one or more system-side applications ([0144] As such, one or more modules of computer program instructions that are executing within the cloud-based storage system 318 (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. 
Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0279] In some implementations, the cloud-based storage system (1002) may be designed to recover from a data loss by scaling up the cloud architecture used to initially store the data to recover data more quickly than if the original cloud architecture were used; [0148] Readers will appreciate that various performance aspects of the cloud-based storage system 318 may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system 318 can be scaled-up or scaled-out as needed;). Vohra fails to teach wherein the one or more resources are retrieved from resources allocated to other system-side applications within the HCI system; in response to determining that the environmental information includes an anomaly corresponding to relatively larger CPU usage by a node running user-side applications relative to a historical pattern of CPU usage by said node, increasing a risk count of a risk score for the node, wherein an amount of merged resources currently allocated to all of the user-side applications running on the node are reduced in response to the risk score exceeding a predetermined threshold. However, Memon teaches wherein the one or more resources are retrieved from resources allocated to other system-side applications within the HCI system ([0089] a load imbalance for the then-current hosting virtualized controller (e.g., virtualized controller 126.sub.1) might be triggered by a threshold breach associated with one or more load metrics (e.g., CPU utilization, CPU processes, storage I/O bandwidth, storage IOPS, etc.), and/or by satisfying a certain sets of rules. The leader virtualized controller can then select an alternative virtualized controller (e.g., a VC unloaded or less loaded as compared to the then-current hosting VC) to which the computing device 112 can be redirected for attaching to the storage target (operation 404); [0090] According to the herein disclosed techniques, some or all sessions on a then-current hosting virtualized controller can be migrated to the alternative virtualized controller.
Specifically, the leader virtualized controller can issue a storage target connection migrate command to the then-current hosting virtualized controller (e.g., virtualized controller 126.sub.1) to migrate one or more of its connections (message 406); [0036] As shown, the computing device 112 can attach to the storage targets 122 by connecting to the selected virtualized controller 197 (at step 3); [0037] In highly dynamic distributed storage systems, changes to the environment can occur such that the leader virtualized controller 195 might no longer identify the selected virtualized controller 197 as the virtualized controller to serve the computing device 112… The leader virtualized controller 195 can then initiate rebalancing (step 5) by issuing a redirect message to the computing device 112, which redirect message identifies an alternative virtualized controller 199 that has been selected on the basis of various policies or criteria (step 6). The alternative virtualized controller 199 can serve as the controller for attaching the storage targets 122 to the computing device 112. As shown, the computing device 112 can attach to the storage targets 122 by connecting to the alternative virtualized controller 199 (at step 7); [0135] A hyperconverged system coordinates the efficient use of compute and storage resources by and between the components of the distributed system). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Vohra with the teachings of Memon to improve performance (see Memon [0086] For example, the virtualized controller host redistribution might be implemented so as to improve a degraded storage access performance resulting from the event.). Vohra and Memon fail to teach in response to determining that the environmental information includes an anomaly corresponding to relatively larger CPU usage by a node running user-side applications relative to a historical pattern of CPU usage by said node, increasing a risk count of a risk score for the node, wherein an amount of merged resources currently allocated to all of the user-side applications running on the node are reduced in response to the risk score exceeding a predetermined threshold. However, Gupta teaches in response to determining that the environmental information includes an anomaly corresponding to relatively larger CPU usage by a node running user-side applications relative to a historical pattern of CPU usage by said node, increasing a risk count of a risk score for the node, wherein an amount of merged resources currently allocated to all of the user-side applications running on the node are reduced in response to the risk score exceeding a predetermined threshold (Col. 2 lines 35-40 if some virtual machines on the physical host machine unexpectedly consume a lot of resources (e.g., in a manner not consistent with their normal resource usage profiles or with prior resource usage history), the resource usage of other virtual machines on the same physical host machine may be limited or throttled; Col. 10 lines 65-66 burst to the 70% of the CPU cycles provided by the CPU 302B has been received; Col. 3 lines 34-37 a burst period (e.g., a period during which the virtual machine can “burst”, or a period during which the user of the virtual machine anticipates increased resource usage by the virtual machine); Col. 4 line 2 disabling bursts beyond a threshold level). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Vohra and Memon with the teachings of Gupta to improve resource utilization and prevent overutilization (see Gupta Col. 2 lines 1-4 Generally described, aspects of the present disclosure relate to methods, systems, and processes for improving resource utilization (e.g., reducing over-utilization) by virtual machines).
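To make the disputed claim 23 limitation concrete: the mapping turns on a risk count that grows whenever a node's CPU usage runs anomalously high against its own history, with all user-side applications on that node throttled only after the accumulated score crosses a predetermined threshold. The sketch below is illustrative only; the class name, the 1.5x anomaly multiplier, the window size, and both thresholds are assumptions, not taken from Gupta or from the claims.

```python
# Illustrative sketch only: hypothetical names and thresholds, not from Gupta or the claims.
from collections import deque
from statistics import mean

ANOMALY_MULTIPLIER = 1.5   # assumed: "relatively larger" = 1.5x the historical mean
RISK_THRESHOLD = 5         # assumed predetermined threshold on the risk score
THROTTLE_FACTOR = 0.5      # assumed reduction applied to merged resources

class NodeRiskMonitor:
    def __init__(self, history_len=100):
        self.cpu_history = deque(maxlen=history_len)  # historical CPU usage pattern
        self.risk_score = 0

    def observe(self, cpu_usage: float) -> bool:
        """Record a CPU sample; return True once the node should be throttled."""
        baseline = mean(self.cpu_history) if self.cpu_history else cpu_usage
        self.cpu_history.append(cpu_usage)
        if cpu_usage > ANOMALY_MULTIPLIER * baseline:
            self.risk_score += 1          # increase the risk count on each anomaly
        return self.risk_score > RISK_THRESHOLD

def throttle_user_side_apps(allocations: dict) -> dict:
    """Reduce merged resources for all user-side applications on the node."""
    return {app: share * THROTTLE_FACTOR for app, share in allocations.items()}
```

The essential point for the §103 mapping is the two-stage structure: per-observation risk accumulation first, node-wide reduction of merged resources only after the score exceeds the threshold.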
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Vohra in view of Koehler et al. (US 11194620 B2 hereinafter Koehler), and further in view of Tembey et al. (US 20190327144 A1 hereinafter Tembey). As per claim 24, Vohra teaches a computer-implemented method, comprising: identifying, by a computer, one or more hardware storage resource failures within a hyper-converged infrastructure (HCI) system ([0288] In this example cloud storage architecture of the cloud-based storage system (1002), recovery from data loss may be implemented in multiple ways. As one example, within the non-durable cloud storage layer (1004), one or more of the cloud computing instances (424a-424n) may fail, or portions of storage (414, 426 . . . 422, 430) for one or more of the cloud computing instances (424a-424n) may fail; [0213] The cloud computing instances (424a, 424b, 424n) with local storage (414, 418, 422) may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage (414, 418, 422) must be embodied as solid-state storage (e.g., SSDs); [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above); identifying, by the computer, one or more system-side applications already running within the HCI system that are needed to perform data recovery operations within the HCI system in response to the one or more hardware storage resource failures ([0288] In this example cloud storage architecture of the cloud-based storage system (1002), recovery from data loss may be implemented in multiple ways. As one example, within the non-durable cloud storage layer (1004), one or more of the cloud computing instances (424a-424n) may fail, or portions of storage (414, 426 . . .
422, 430) for one or more of the cloud computing instances (424a-424n) may fail; [0213] The cloud computing instances (424a, 424b, 424n) with local storage (414, 418, 422) may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage (414, 418, 422) must be embodied as solid-state storage (e.g., SSDs); [0292] loss of one or more cloud computing instances, such as the loss of cloud computing instance (424a); [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0144] As such, one or more modules of computer program instructions that are executing within the cloud-based storage system 318 (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances); and allocating, by the computer, the predetermined amount of resources to the identified one or more system-side applications for reducing a time for performing the data recovery operations ([0144] As such, one or more modules of computer program instructions that are executing within the cloud-based storage system 318 (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances; [0157] Readers will appreciate that the various components depicted in FIG.
3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above; [0279] In some implementations, the cloud-based storage system (1002) may be designed to recover from a data loss by scaling up the cloud architecture used to initially store the data to recover data more quickly than if the original cloud architecture were used; [0147] Consider an example in which 1000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system 318 have written to the cloud-based storage system 318. In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created); and determining that the data recovery operations have been completed within the HCI system; and wherein the one or more system-side applications perform the data recovery operations using at least the predetermined amount of resources allocated to the identified one or more system-side applications ([0147] Consider an example in which 1000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system 318 have written to the cloud-based storage system 318. In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created, where each cloud computing instance is responsible for retrieving, from the cloud-based object storage 348, distinct 1/100,000th chunks of the valid data that users of the cloud-based storage system 318 have written to the cloud-based storage system 318 and locally storing the distinct chunk of the dataset that it retrieved. In such an example, because each of the 100,000 cloud computing instances can retrieve data from the cloud-based object storage 348 in parallel, the caching layer may be restored 100 times faster as compared to an embodiment where the monitoring module only create 1000 replacement cloud computing instances. In such an example, over time the data that is stored locally in the 100,000 could be consolidated into 1,000 cloud computing instances and the remaining 99,000 cloud computing instances could be terminated; [0157] Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. 
Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways; [0208] The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above;). Vohra fails to teach reducing, by the computer, an amount of resources allocated to other applications, that are running within the HCI system concurrently with the identified one or more system-side applications, by a predetermined amount; in response to determining that the data recovery operations have been completed within the HCI system, the predetermined amount of resources allocated to the identified one or more system-side applications that performed the data recovery operations within the HCI system are reallocated to the other applications from which the resources were retrieved; wherein the other applications have a lower priority than the identified one or more system-side applications. However, Koehler teaches reducing, by the computer, an amount of resources allocated to other applications, that are running within the HCI system concurrently with the identified one or more system-side applications, by a predetermined amount; wherein the other applications have a lower priority than the identified one or more system-side applications (Col. 14 lines 21-23 migration will commence by invoking execution of task “ts11” and task “ts23”. As a task runs, it uses it allotted tokens; Col. 19 lines 14-18 Some hyperconverged systems implement certain aspects of virtualization. For example, in a hypervisor-assisted virtualization environment, certain of the autonomous entities of a distributed system can be implemented as virtual machines; Col. 15 lines 14-31 task “ts11” and task “ts23” have been allocated the two processing tokens “p01” and “p02”, respectively, and as such have a “running” status. When one or more changes to in-process migration tasks are detected (step 506.sub.1), the contents (e.g., set of migration task attributes) of the data structure are updated in accordance with detected changes (step 508.sub.1). Indications of such changes can originate from various sources. For example, a set of migration task changes 524 might include a change originating from a worker process that has indicated that a certain in-process migration task is complete. As another example, migration task changes 524 might include one or more changes associated with a reallocation of processing tokens performed by a token-based scheduler. As shown in an updated set of select migration task attributes 522.sub.2, migration task changes 524 might result in updates to migration task attributes in the data structure that indicate that task “ts23” is “done” and processing token “p02” is reallocated to task “ts12”; Col. 16 lines 45-51 all in-process migration tasks associated with VM “vm2” have the highest priority (e.g., “priority=2”) and processing token “p02” is reallocated to task “tu2x” of VM “vm2” and has a status of “running”. 
Furthermore, the previous owner of processing token “p02” (e.g., task “ts12”) is “halted” and prioritized at the second highest priority (e.g., “priority=1”)). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Vohra with the teachings of Koehler to reduce resource utilization (see Koehler Col. 7 lines 66-67 reduce demands for computer memory and data storage). Vohra and Koehler fail to teach in response to determining that the data recovery operations have been completed within the HCI system, the predetermined amount of resources allocated to the identified one or more system-side applications that performed the data recovery operations within the HCI system are reallocated to the other applications from which the resources were retrieved. However, Tembey teaches in response to determining that the data recovery operations have been completed within the HCI system, the predetermined amount of resources allocated to the identified one or more system-side applications that performed the data recovery operations within the HCI system are reallocated to the other applications from which the resources were retrieved ([0080] the controller invoker 350 may direct one of the configuration controllers 304 to obtain a utilization of computing resources, networking resources, storage resources, etc., allocated to the workload domain 129. The example controller invoker 350 may determine that the workload domain 129 can be contracted based on the workload domain 129 having non-utilized resources; [0081] For example, the controller invoker 350 may determine that a new workload domain is being generated based on the first template 316 that has a higher priority than an existing workload domain. The example controller invoker 350 may direct one of the configuration controllers 304 to back up the lower priority workload domain and release resources allocated to the lower priority workload domain back to one or both physical racks 102, 104 for re-allocation to the higher priority workload domain; [0020] Examples described herein can be used in connection with different types of SDDCs. In some examples, techniques described herein are useful for managing network resources that are provided in SDDCs based on Hyper-Converged Infrastructure (HCI).). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Vohra and Koehler with the teachings of Tembey to improve performance (see Tembey [0027] optimize the resources for improved performance).
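Read across Vohra, Koehler, and Tembey, the claim 24 mapping amounts to a borrow-and-return resource lifecycle: a predetermined amount is taken from lower-priority applications while data recovery runs, then handed back to the same donors on completion. A minimal sketch of that lifecycle follows; the function names and the 20% predetermined fraction are hypothetical, not drawn from the cited references.

```python
# Illustrative sketch: a hypothetical borrow-and-return resource lifecycle.
PREDETERMINED_FRACTION = 0.2  # assumed predetermined amount: 20% of each donor's allocation

def start_recovery(allocations, recovery_apps, donor_apps):
    """Shift a predetermined fraction from lower-priority donors to recovery apps."""
    loans = {}
    for app in donor_apps:
        loan = allocations[app] * PREDETERMINED_FRACTION
        allocations[app] -= loan
        loans[app] = loan
    boost = sum(loans.values()) / len(recovery_apps)
    for app in recovery_apps:
        allocations[app] += boost
    return loans  # remember which donor lent what

def finish_recovery(allocations, recovery_apps, loans):
    """On completion, return the borrowed resources to the original donors."""
    repay = sum(loans.values()) / len(recovery_apps)
    for app in recovery_apps:
        allocations[app] -= repay
    for app, loan in loans.items():
        allocations[app] += loan
```

Recording who lent what is the piece Tembey is cited for: on completion, the resources flow back specifically to the applications from which they were retrieved, not to a general pool.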
Claims 25 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US20160357451A1 hereinafter Chen) in view of Singh. As per claim 25, Chen teaches a computer-implemented method, comprising: receiving, by a computer, per-node telemetry data from nodes within a hyper-converged infrastructure (HCI) system, the nodes corresponding to user-side applications ([0011] Preferably, the storage service is a monitoring service, for monitoring performance metrics of each service container; [0024] If a service container is for providing an application service, it is defined as an application container; [0003] In addition, a more and more popular technology for storage architecture is hyper-converged storage; Abstract A storage system having nodes with light weight containers; Abstract a number of service containers, which are used for providing specific services to clients); receiving, from security monitoring software, risk scores for the user-side applications; identifying behavior patterns for the user-side applications based on the per-node telemetry data; updating, by the computer, at least some of the risk scores for the user-side applications based on the behavior patterns ([0011] Preferably, the storage service is a monitoring service, for monitoring performance metrics of each service container in the node. The storage service is a traffic modeling service, for creating a traffic model of at least one performance metric in the node and generating prediction of the performance metric(s). The performance metric may be CPU load, IOPS (Input/output Per Second), throughput, or latency of the storage system, cache hit ratio, or throughput of a network the storage system applied to. The storage service may be an anomaly detecting service, for detecting unusual patterns of the performance metric obtained by the monitoring service; [0012] if a detected anomaly exceeds a threshold value, all of the service containers in that node are removed; [0027] The storage service provided by the first storage container 301a is an anomaly detecting service. It can detect unusual patterns of the performance metric obtained by the monitoring service. Please see FIG. 4. Detected unusual patterns of CPU load are plotted by dashed broken lines. Anomaly usually implies a malfunction of software, hardware, or even malicious usage.); determining, by the computer, that one of the user-side applications on a first of the nodes within the HCI system has an associated risk score that exceeds a predetermined risk threshold ([0012] if a detected anomaly exceeds a threshold value, all of the service containers in that node are removed; [0003] In addition, a more and more popular technology for storage architecture is hyper-converged storage).
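The Chen functionality just cited, monitoring, traffic modeling, and anomaly detection, reduces to a predicted-versus-observed comparison over per-node performance metrics (CPU load, IOPS, throughput, latency). A toy sketch of that idea, assuming a moving-average "traffic model" and made-up window and tolerance values rather than anything disclosed by Chen:

```python
# Toy sketch of Chen-style anomaly detection: all names and constants are assumptions.
from collections import deque

class MetricAnomalyDetector:
    def __init__(self, window=20, tolerance=2.0):
        self.samples = deque(maxlen=window)  # recent metric samples (e.g., CPU load, IOPS)
        self.tolerance = tolerance           # allowed deviation from the prediction

    def is_anomalous(self, value: float) -> bool:
        """Compare the new sample against a moving-average prediction."""
        if len(self.samples) < self.samples.maxlen:
            self.samples.append(value)
            return False                     # still learning the traffic model
        prediction = sum(self.samples) / len(self.samples)
        deviation = abs(value - prediction)
        stddev = (sum((s - prediction) ** 2 for s in self.samples) / len(self.samples)) ** 0.5
        self.samples.append(value)
        return stddev > 0 and deviation > self.tolerance * stddev
```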
Chen fails to teach security monitoring software that intercepts and parses data output and input by the user-side applications; in response to the determination that the risk score associated with the one of the user-side applications exceeds the predetermined risk threshold, running the one of the user-side applications with a reduced amount of resources allocated to the one of the user-side applications to reduce an amount of activity that the one of the user-side applications is capable of performing, and capping, by the computer, the amount of the resources allocated to the one of the user-side applications at a predetermined resource threshold cap; in response to determining that the associated risk score for the one of the user-side applications no longer exceeds the predetermined risk threshold, removing, by the computer, the predetermined resource threshold cap for the one of the user-side applications and allocating additional resources to the one of the user-side applications, wherein the one of the user-side applications runs using the allocated additional resources in response to the allocation of the additional resources. However, Singh teaches security monitoring software that intercepts and parses data output and input by the user-side applications (Section 2.2 paragraph 1 Most modern processors have a Performance Monitoring Unit on-chip to monitor micro-architectural events of running applications. Each logical core has a dedicated set of 4 to 8 configurable registers that can count the number of times a particular event occurs in a given duration. These registers are called Hardware Performance Counters (HPCs) and can be used to monitor a wide range of events like CPU-cycles, cache accesses, context-switches, and page faults); in response to the determination that the risk score associated with the one of the user-side applications exceeds the predetermined risk threshold, running the one of the user-side applications with a reduced amount of resources allocated to the one of the user-side applications to reduce an amount of activity that the one of the user-side applications is capable of performing, and capping, by the computer, the amount of the resources allocated to the one of the user-side applications at a predetermined resource threshold cap (pg. 3 left column paragraph 2 LEASH, similarly uses HPCs to compute a threat index for a thread. Threads which depict a micro-architectural attack like behavior are given a high threat index and are throttled by reducing their CPU time; pg. 3 4. The LEASH Framework paragraph 1 Micro-architectural attacks depend considerably on CPU resources. If the attack programs are starved of the CPU, the success drops considerably; Fig. 1a caption LEASH stymies micro-architectural attacks by detecting malicious behavior in programs and reducing its CPU-share, thereby reducing the leakage from the shared resource; pg. 5 left column paragraph 3 In our evaluation platform, γ = 0.1 which means that, for every rise in threat index values, the weight drops by 10% until it reaches wMIN; pg.
1 right column paragraph 1 In a typical micro-architectural attack, the attacker runs a program called the spy that contends with a victim program; Section 2.3 paragraph 2 When multiple threads compete for CPU time, the scheduler allocate timeslices in proportion to a metric called weight); in response to determining that the associated risk score for the one of the user-side applications no longer exceeds the predetermined risk threshold, removing, by the computer, the predetermined resource threshold cap for the one of the user-side applications and allocating additional resources to the one of the user-side applications, wherein the one of the user-side applications runs using the allocated additional resources in response to the allocation of the additional resources (pg. 4 left column paragraph 1 LEASH uses HPCs to detect such anomalous behavior and penalizes such threads by decreasing their weight, which in turn reduces their timeslice (Equation 1). If the thread stops exhibiting the anomalous behavior, its weight is gradually increased, thus regaining its regular timeslice; pg. 5 left column paragraph 3 In our evaluation platform, γ = 0.1 which means that, for every rise in threat index values, the weight drops by 10% until it reaches wMIN. Similarly, when a thread is recovering, the threat index value is negative and hence every fall in threat index value increases its weight by 10% until its weight is restored. The adaptable design of LEASH efficiently brings down the cost of a false penalization. Once a benign thread, which is erroneously flagged, is unflagged, it regains its CPU share; pg. 1 right column paragraph 1 In a typical micro-architectural attack, the attacker runs a program called the spy that contends with a victim program; Section 2.3 paragraph 2 When multiple threads compete for CPU time, the scheduler allocate timeslices in proportion to a metric called weight; Section 3 paragraph 2 LEASH reacts to a flagged program by throttling its CPU-share, thus preventing the attack from completing. Contemporary works, on the other hand, would either migrate the program to another CPU or terminate it. The advantage we achieve with the feedback loop of LEASH is that falsely flagged threads can recover and regain their CPU-share). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Chen with the teachings of Singh to prevent attacks (see Singh pg. 2 left column paragraph 2 If the spy gets insufficient time to execute on the CPU, then the information leakage is reduced, stymieing the attack). As per claim 26, Chen and Singh teach the computer-implemented method of Claim 25. Singh teaches comprising: in response to the determination that the risk score associated with the one of the user-side applications exceeds the predetermined risk threshold, identifying the one of the user-side applications as risky using metadata (Section 4 paragraph 2 Due to the continuous probing, the number of memory accesses by the receiver increases to be significantly higher compared to a regular thread. LEASH uses HPCs to detect such anomalous behavior and penalizes such threads by decreasing their weight; Section 3 paragraph 2 LEASH reacts to a flagged program by throttling its CPU-share, thus preventing the attack from completing).
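The Singh (LEASH) passages the examiner relies on describe a concrete feedback rule: with γ = 0.1, each rise in a thread's threat index cuts its scheduler weight by 10% down to a floor wMIN, and each fall restores 10% until the regular weight returns. A compact sketch of that update rule, with the floor and base weight values assumed for illustration and the multiplicative form chosen as one plausible reading of the cited text:

```python
# Sketch of a LEASH-style weight update (gamma from the cited paper; bounds assumed).
GAMMA = 0.1        # per the cited passage: 10% weight change per unit change in threat index
W_MIN = 0.1        # assumed floor on the scheduler weight (wMIN)
W_BASE = 1.0       # assumed regular weight of an unflagged thread

def update_weight(weight: float, threat_index_delta: int) -> float:
    """Throttle on a rising threat index; restore CPU share as the index falls."""
    if threat_index_delta > 0:       # anomalous, attack-like behavior observed
        weight *= (1.0 - GAMMA) ** threat_index_delta
    elif threat_index_delta < 0:     # thread is recovering; regain its timeslice
        weight *= (1.0 + GAMMA) ** (-threat_index_delta)
    return max(W_MIN, min(W_BASE, weight))
```

The symmetry of this rule is what the office action leans on for the "removing the cap" limitation of claim 25: restoration of the CPU share happens automatically once the threat index falls back below the threshold.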
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to HSING CHUN LIN whose telephone number is (571)272-8522. The examiner can normally be reached Mon - Fri 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /H.L./Examiner, Art Unit 2195 /Aimee Li/Supervisory Patent Examiner, Art Unit 2195

Prosecution Timeline

Oct 20, 2021
Application Filed
Oct 24, 2023
Non-Final Rejection — §103, §112
Jan 29, 2024
Response Filed
May 17, 2024
Final Rejection — §103, §112
Jul 02, 2024
Examiner Interview Summary
Jul 10, 2024
Response after Non-Final Action
Aug 22, 2024
Response after Non-Final Action
Aug 29, 2024
Request for Continued Examination
Sep 03, 2024
Response after Non-Final Action
Dec 13, 2024
Non-Final Rejection — §103, §112
Feb 24, 2025
Interview Requested
Mar 03, 2025
Applicant Interview (Telephonic)
Mar 03, 2025
Examiner Interview Summary
Mar 06, 2025
Response Filed
Jun 25, 2025
Final Rejection — §103, §112
Aug 18, 2025
Interview Requested
Aug 26, 2025
Applicant Interview (Telephonic)
Aug 28, 2025
Examiner Interview Summary
Aug 29, 2025
Response after Non-Final Action
Sep 23, 2025
Request for Continued Examination
Sep 26, 2025
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554523
REDUCING DEPLOYMENT TIME FOR CONTAINER CLONES IN COMPUTING ENVIRONMENTS
2y 5m to grant Granted Feb 17, 2026
Patent 12547458
PLATFORM FRAMEWORK ORCHESTRATION AND DISCOVERY
2y 5m to grant Granted Feb 10, 2026
Patent 12468573
ADAPTIVE RESOURCE PROVISIONING FOR A MULTI-TENANT DISTRIBUTED EVENT DATA STORE
2y 5m to grant Granted Nov 11, 2025
Patent 12461785
GRAPHIC-BLOCKCHAIN-ORIENTATED SHARDING STORAGE APPARATUS AND METHOD THEREOF
2y 5m to grant Granted Nov 04, 2025
Patent 12443425
ISOLATED ACCELERATOR MANAGEMENT INTERMEDIARIES FOR VIRTUALIZATION HOSTS
2y 5m to grant Granted Oct 14, 2025
Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
59%
Grant Probability
99%
With Interview (+79.8%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
