Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
Claims 1-2, 4-13 and 15-20 are pending.
Response to Arguments
4. Applicant's arguments filed on 10/23/2025 have been fully considered but they are not persuasive.
5. Applicant argues that Eberlein does not disclose detecting a change event, wherein detecting the change event comprises determining that an average latency for deploying the computing service at the second provider site is better than a threshold latency value for a predetermined amount of time.
In response:
Eberlein expressly identifies latency as a metric used in determining whether to execute a transfer process (col. 1, lines 59-67; col. 2, lines 1-14; col. 9, lines 30-60): the set of metrics includes the latency of accesses, alongside other metrics such as access counts and percentages. The disclosure further explains that these metrics are calculated periodically or as moving averages, which inherently requires evaluation over a predetermined period of time (e.g., every 10 minutes over the last 2 hours; see col. 14, lines 10-40). A moving average is, by definition, an average latency measured across a defined time window, directly corresponding to Applicant's "average latency for a predetermined amount of time."

Eberlein teaches selectively executing a transfer process based on a comparison of metrics against thresholds. Specifically, Eberlein discloses executing the transfer process when "a difference between the percentage of remote accesses and the percentage of local accesses exceeds a first threshold" (see col. 8, lines 10-24; col. 1, lines 59-67; col. 2, lines 1-14). This directly maps to the claimed decision logic, in which a metric (e.g., access percentage or latency) is evaluated relative to a threshold to determine whether to perform a data transfer. Further, the decision to execute the "transfer process" is triggered by metrics crossing thresholds: col. 11, line 10 - col. 12, line 8 discloses Logic 1 and Logic 2, where a transfer is triggered if the access percentages from one location exceed those of another by a threshold (e.g., 10%), and Eberlein explicitly states that "it should be understood that the thresholds… are provided as examples and can be adjusted to an appropriate value" (col. 11, lines 60-67; col. 12, lines 1-42).

Importantly, Eberlein makes clear that the threshold itself is not static, but rather forms part of the metric evaluation system. As disclosed in col. 12, lines 1-25, the threshold includes a dampening value that changes over time, such that the effective threshold is the sum of a fixed threshold and a dynamic dampening factor. This demonstrates that the threshold is integrally tied to the system metrics and is part of the decision-making mechanism, not merely an external constant. The examiner therefore reasonably interprets the threshold as being part of the metric framework, particularly with respect to time-varying characteristics such as average latency or access behavior.

Moreover, col. 15, lines 38-49 describes monitoring the efficiency of the proactive transfer process, where the index of the efficiency includes "the network latency per request (if the moving average network latency increases over time, re-execute the analysis…)." The inverse of this logic is inherently disclosed: if the system is configured to "re-execute the analysis" when the moving average of the latency increases (i.e., gets worse), this is a threshold-based decision using average latency over time. A person of ordinary skill in the art would readily understand that the same logic framework can be applied to determine whether latency has become better than the threshold, triggering a beneficial transfer. The disclosed system is bi-directional: it moves data to improve access, so evaluating whether a new site offers sufficiently better (lower) latency to justify a move is a natural and obvious application of the disclosed threshold system to the disclosed latency metric.

Contrary to Applicant's assertion, Eberlein is not silent regarding latency thresholds or temporal evaluation. Instead, it explicitly discloses latency as a monitored metric, evaluated using moving averages over predetermined time windows and compared against functional thresholds to trigger or suppress transfer actions.
These disclosures collectively teach "detecting a change event, wherein detecting the change event comprises determining that an average latency for deploying the computing service at the second provider site is better than a threshold latency value for a predetermined amount of time," as properly interpreted.
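For illustration only, the moving-average latency evaluation discussed above (a periodic average over a fixed window compared against a threshold) can be sketched as follows. All names and parameter values are hypothetical; this is not code from Eberlein or from the application:

```python
from collections import deque

class LatencyMonitor:
    """Illustrative sketch: maintain a moving average of latency samples
    over a fixed window and compare it against a threshold. 'Better'
    latency means lower latency."""

    def __init__(self, window_size, threshold_ms):
        # e.g., window_size=12 samples taken every 10 minutes covers 2 hours
        self.samples = deque(maxlen=window_size)
        self.threshold_ms = threshold_ms

    def record(self, latency_ms):
        # Oldest sample is discarded automatically once the window is full.
        self.samples.append(latency_ms)

    def moving_average(self):
        return sum(self.samples) / len(self.samples)

    def better_than_threshold(self):
        # Trigger only when the full window's average is below the threshold,
        # i.e., latency has been better than the threshold for the whole window.
        return (len(self.samples) == self.samples.maxlen
                and self.moving_average() < self.threshold_ms)
```

For example, with `window_size=12` and ten-minute sampling, `better_than_threshold()` returns true only after the average over the last two hours has fallen below the configured threshold.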
Claim Rejections - 35 USC § 103
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 1-2, 4-8, 10, 12-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Akolkar et al. (US 20140164166 A1), hereinafter Akolkar, in view of Dailianas et al. (US 20200314174 A1), hereinafter Dailianas, and further in view of Eberlein et al. (US 11206305 B1), hereinafter Eberlein.
Regarding claim 1, Akolkar discloses a method, comprising:
receiving service information for a computing service (para. [0005] building a knowledge base of cloud-based service providers includes receiving information from a cloud-based service provider. [0012] providing information technology (IT) in a cloud-based services marketplace; the AS 104 may comprise a datacenter that supports cloud-based services, e.g., data provisioning or other IT-related services);
receiving, through a user interface, optimization criteria (para. [0016] the AS 104 includes a dynamic conversational user interface for interacting with service providers and customers in the network 100. [0032] messages exchanged in steps 304 and 306 are sent and received over a dynamic, conversational user interface. The conversational user interface allows the customer to specify his or her resiliency requirements in a conversational manner, using natural language. [0031] The resiliency requirements may be defined in terms of one or more metrics, domains, functions, or sources of failure. The metrics may include, for example, typical service-level agreement key performance indicators such as: the service's availability, reliability, integrity, maintainability, or confidentiality. [0038] the ranking may be based on a calculated metric that assigns weights to various customer model criteria (e.g., cost, budget, etc.), number and/or type of resiliency requirements met, deployment timeline, and/or other criteria);
determining, based on the service information and the optimization criteria, a plurality of provider site candidates for the computing service, including at least a first provider site of a provider network (para. [0006] providing a cloud-based service includes receiving information from a customer of the cloud-based service over a conversational interface, the information identifying a requirement of the customer related to a resiliency of the service, generating a first model that represents the requirement of the customer, receiving information from a cloud-based service provider, wherein the information specifies at least one resiliency attribute of the cloud-based service provider, generating a second model that represents the at least one resiliency attribute, wherein the second model is indexed within an ontology-based organizational framework that indexes a plurality of models associated with a plurality of cloud-based service providers, matching the first model to the second model when the at least one resiliency attribute indicates that the cloud-based service provider is capable of satisfying the requirement of the customer, and forwarding information about the cloud-based service provider to the customer [0012] model a customer's resiliency needs and various service providers' abilities to provide resiliency and then match the customer with the service providers who can potentially meet the customer's resiliency needs. [0031] The resiliency requirements may be defined in terms of one or more metrics, domains, functions, or sources of failure. The metrics may include, for example, typical service-level agreement key performance indicators such as: the service's availability, reliability, integrity, maintainability, or confidentiality);
Akolkar discloses detecting a change event (para. [0039] in the event that the customer suffers a service failure, a failover may be identified from the model of the customer's resiliency needs; for instance, the failover may be a current service to which the customer explicitly subscribes as a failover). Akolkar may not explicitly disclose deploying the computing service at the first provider site, storing the optimization criteria, automatically determining, based on the stored optimization criteria, a second provider site of the provider network, and deploying the computing service at the second provider site.
However, Dailianas discloses deploying the computing service at the first provider site (para. [0105]-[0107] the provider element manager determines whether the consumer budget is sufficient to pay the price for the requested provider services (decision block 506). If it is determined that there is sufficient budget, the provider element manager deploys the consumer at the provider, which proceeds to process its workload (step 508). For example, CPU and memory resources that have been purchased may be allocated to a container by the underlying scheduler of the container system);
storing the optimization criteria (para. [0100], [0109] the supply chain model databases 246 are maintained by element managers (such as element managers 234, 236, 238, 240, 242, 244 shown in FIG. 2), which handle the service objects corresponding to the respective elements that they manage. An element manager is initialized by the platform manager 250, and subsequently the element manager proceeds to populate the supply chain model databases 246 with respective service objects it is responsible for. Once the supply chain model databases 246 have been updated, the element manager continues to update the dynamic attributes of its respective service objects (such as the “used” and “available” attributes). For example, a server manager 238 that is responsible for managing HBA resources will initialize the supply chain model databases 246 with corresponding simple service objects relating to the HBA. The server manager 238 will then monitor and update the “used” and “available” attributes of this simple service object by periodically accessing the HBA instrumentation);
automatically determining, based on the stored optimization criteria, a second provider site of the provider network (para. [0110] the consumer service period, the provider element manager notifies the consumer element manager (step 518), which may proceed to shop for a new provider offering lowest cost services to meet the consumer's needs (step 520). The consumer element manager determines whether the price of the new provider found is lower than the price of the old provider (where the consumer resides at the time));
detecting a change event (para. [0078], [0111] the platform manager 250 may discover loss or gain of network I/O pathways, congestion or under-utilization of an I/O pathway, low or excessive latency of an I/O pathway, or packet losses along an I/O pathway. Otherwise, the platform manager 250 evaluates whether there have been any major storage changes (decision block 318). For example, the platform manager 250 may discover storage I/O congestion, or alternate I/O pathways that would provide better (i.e., lower) access latency).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Akolkar to include deploying the computing service at the first provider site, storing the optimization criteria, automatically determining, based on the stored optimization criteria, a second provider site of the provider network, and deploying the computing service at the second provider site, using the teaching of Dailianas. One of ordinary skill in the art would have been motivated to do so in order to set proactive automation policies to optimize or improve performance and resource utilization, detect and resolve operational problems and performance bottlenecks, allocate priorities and usage charges to different applications, and plan capacity expansions.
Both Akolkar and Dailianas disclose detecting a change event as recited above. Akolkar in view of Dailianas may not explicitly disclose wherein detecting the change event comprises determining that an average latency for deploying the computing service at the second provider site is better than a threshold latency value for a predetermined amount of time.
However, Eberlein discloses wherein detecting the change event comprises determining that an average latency for deploying the computing service at the second provider site is better than a threshold latency value for a predetermined amount of time ([AB] monitoring, by a LML plug-in to a first service executed within a first datacenter, accesses to provide access data representative of the accesses to a data record stored in the first datacenter, the accesses including local accesses executed by the first service and remote accesses executed by a second service executed within a second datacenter, receiving, by a LML instance executed within the first datacenter, the access data from the LML plug-in to the first service, determining, by the LML instance, a set of metrics for the data record based on the local accesses and the remote accesses in a first time period, and selectively executing a transfer process based on the set of metrics to copy the data record to the second datacenter. (col. 1, lines 35-41) implementations of the present disclosure are directed to minimizing latency in datacenters; more particularly, implementations of the present disclosure are directed to dynamically moving data between datacenters using a latency minimization layer (LML) within instances of a service across datacenters. (col. 14, lines 10-40) the network latency is also monitored and considered in the sets of metrics to avoid unnecessary moves of the data records. The network latencies between the datacenters can be computed from requests for access to the locally stored data records, for example, using the measurement of the instance-to-instance communications depicted in FIG. 4. A moving average of the network latencies can be computed periodically (e.g., every 10 minutes) taking the data over a predetermined period of time (e.g., the last 2 hours).
As described herein, the local instances (e.g., the service instance and/or the LML instance) can read the network latencies and use the network latencies as a reference to find the closest location to request a data record. If the access requests of a data record from one location always go to the same or another location, then this other location must be the location that has the lowest network latency. (col. 8, lines 10-24) the first LML instance 222a and/or the second LML instance 222b selectively execute a transfer process based on the sets of metrics to move one or more data records to another server system. For example, the first LML instance 222a can selectively transfer a data record to the server system 202b. In this manner, and as described in further detail herein, the latency of accessing the data record can be shortened. The evaluation made on selectively executing the transfer processes can be synchronized to the calculation of the sets of the metrics or can be made after particular times of calculations. For example, the calculation of the sets of metrics can be performed periodically, and whether to transfer one or more data records can be determined in response to calculation of the sets of metrics; see also the response to arguments above);
deploying the computing service at the second provider site ([AB] receiving, by a LML instance executed within the first datacenter, the access data from the LML plug-in to the first service, determining, by the LML instance, a set of metrics for the data record based on the local accesses and the remote accesses in a first time period, and selectively executing a transfer process based on the set of metrics to copy the data record to the second datacenter).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Akolkar in view of Dailianas to include wherein detecting the change event comprises determining that an average latency for deploying the computing service at the second provider site is better than a threshold latency value for a predetermined amount of time, using the teaching of Eberlein. One of ordinary skill in the art would have been motivated to do so in order to provide cloud computing as a model of service delivery enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service.
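For illustration only, the threshold-based transfer decision attributed to Eberlein above (Logic 1 and Logic 2, with an effective threshold equal to a fixed threshold plus a dynamic dampening factor) can be sketched as follows. All names and values are hypothetical examples, not code from any cited reference:

```python
def should_transfer(remote_pct, local_pct, base_threshold=10.0, dampening=0.0):
    """Illustrative sketch: trigger a transfer when the percentage of remote
    accesses exceeds the percentage of local accesses by more than the
    effective threshold (fixed base threshold plus a dampening value that
    may change over time)."""
    effective_threshold = base_threshold + dampening
    return (remote_pct - local_pct) > effective_threshold
```

With the example 10% base threshold, a 60%/40% remote/local split triggers a transfer, while raising the dampening value suppresses it, reflecting that the effective threshold is dynamic rather than an external constant.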
Regarding claim 2, claim 1 is incorporated and Akolkar further discloses wherein the change event comprises a service failure (para. [0039] in the event that the customer suffers a service failure, a failover may be identified from the model of the customer's resiliency needs. For instance, the failover may be a current service to which the customer explicitly subscribes as a failover).
Regarding claim 4, claim 1 is incorporated and Akolkar further discloses wherein determining a plurality of provider site candidates comprises determining optimization data corresponding to the optimization criteria (para. [0012] model a customer's resiliency needs and various service providers' abilities to provide resiliency and then match the customer with the service providers who can potentially meet the customer's resiliency needs. [0032] messages exchanged in steps 304 and 306 are sent and received over a dynamic, conversational user interface. The conversational user interface allows the customer to specify his or her resiliency requirements in a conversational manner, using natural language. [0031] the resiliency requirements may be defined in terms of one or more metrics, domains, functions, or sources of failure. The metrics may include, for example, typical service-level agreement key performance indicators such as: the service's availability, reliability, integrity, maintainability, or confidentiality. [0038] the ranking may be based on a calculated metric that assigns weights to various customer model criteria (e.g., cost, budget, etc.)).
Akolkar may not explicitly disclose wherein automatically determining the second provider site occurs after deploying the computing service at the first provider site, further comprising: determining updated information comprising at least one of updated service information and updated optimization data for the computing service after deployment of the computing service at the first provider site, wherein the updated service information includes network performance information for the provider network; wherein the automatically determining the second provider site is based on the updated information and the stored optimization criteria.
However, Dailianas discloses wherein automatically determining the second provider site occurs after deploying the computing service at the first provider site, further comprising: determining updated information comprising at least one of updated service information and updated optimization data for the computing service after deployment of the computing service at the first provider site, wherein the updated service information includes network performance information for the provider network (para. [0084] resource pricing may also be based one or both of capacity or performance characteristics. For example, the server 214 or a cloud provider may offer multiple types of processors or CPUs, each with respective clock rates and other characteristics, at different prices. Similarly, for example, storage I/O resources in the storage system 216 and network I/O resources in the network 218 or supplied by a cloud provider may be priced according to their bandwidth and latency characteristics. This manner of pricing can take into account that, as noted above, I/O pathways internal to a server (i.e., interconnections of containers co-located with a single server, e.g., the containers 120 and 122 as shown in FIG. 1) typically offer higher bandwidth and lower latency than I/O pathways between containers located at different and distinct servers (e.g., the containers 120 and 124 as shown in FIG. 1). Thus, for example, one or more of the components and resources associated with internal I/O pathways (or the aggregate of such components and resources) may be priced lower than components and resources (alone or in the aggregate) for pathways traversing switches and/or involving multiple servers. Alternatively, for example, components and resources associated with such internal I/O pathways may be priced higher to account for an expected increase in performance and thus value to the acquiring entity. 
[0109] after the provider element manager deploys the consumer at the provider, the provider element manager or the consumer element manager monitors consumer resource usage and adjusts allocation of resources to optimize or improve the use of the consumer's budget (step 516). For example, the provider element manager may find that the consumer is using only 20% of one service it bought, while using 90% of another service it bought. In that case, the provider element manager may reduce the allocation of the first service and use the corresponding released budget to increase the allocation of the second resource);
wherein the automatically determining the second provider site is based on the updated information and the stored optimization criteria (para. [0129] The server element manager optimizes or improves the resources allocated to containers, as described above (step 516), such that containers acquire a share of the storage I/O resources that is commensurate with and optimally reflects their budget. The server element manager then periodically estimates both the average storage I/O capacity used and the average available I/O capacity, and updates the respective attributes of the storage I/O objects in the above-described supply chain model databases 246 with this usage data. [0111] internal I/O pathways (including at either the server 102 or the server 104) may offer higher bandwidth and lower latency, and thus result in improved performance. Therefore, such internal I/O pathways may be priced lower than I/O pathways involving, for example, multiple servers 102 and 104 and network 160. The cost of running a workload on a resource provider, in other words, may be adjusted for performance advantages. For example, in one approach, the cost of running a workload on a resource provider begins with the nominal price charged by the provider, but is adjusted upward if the resource provider delivers performance below a quality metric (e.g., average performance) or downward if the resource provider delivers performance above the quality metric. In another approach, the performance metric is considered separately from cost but the resource budget is adjusted upward if performance is to be considered a factor in the procurement decision. In this way, performance can be weighted separately as a decision-making factor and the larger budget will permit selection of a higher-cost but better-performing resource provider. As used herein, the term “utilization value” reflects both cost and performance using either approach).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Akolkar to include wherein automatically determining the second provider site occurs after deploying the computing service at the first provider site, further comprising: determining updated information comprising at least one of updated service information and updated optimization data for the computing service after deployment of the computing service at the first provider site, wherein the updated service information includes network performance information for the provider network; wherein the automatically determining the second provider site is based on the updated information and the stored optimization criteria, using the teaching of Dailianas. One of ordinary skill in the art would have been motivated to do so in order to set proactive automation policies to optimize or improve performance and resource utilization, detect and resolve operational problems and performance bottlenecks, allocate priorities and usage charges to different applications, and plan capacity expansions.
Regarding claim 5, claim 4 is incorporated and Akolkar may not explicitly disclose determining second updated information for a second computing service deployed on the first provider site; automatically determining, based on the optimization criteria and the second updated information, a third provider site of the provider network; deploying the second computing service to the third provider site.
However, Dailianas discloses determining second updated information for a second computing service deployed on the first provider site (para. [0105]-[0107] the provider element manager determines whether the consumer budget is sufficient to pay the price for the requested provider services (decision block 506). If it is determined that there is sufficient budget, the provider element manager deploys the consumer at the provider, which proceeds to process its workload (step 508). For example, CPU and memory resources that have been purchased may be allocated to a container by the underlying scheduler of the container system, which may include the use of a traditional operating systems scheduling algorithm. The server element manager configures the scheduler parameters to accomplish fairly accurate allocation of the CPU and memory. Memory may be allocated by specifying an amount of memory to be provided. The container system can allocate physical memory, based on these specifications, or support virtual memory mechanisms that permit over 100% utilization of physical memory. Additionally, the CPU may be allocated by configuring reservations and shares parameters of the scheduler. For example, reservations may be used to allocate a reserved CPU slice, using a time-shared round-robin scheduler, while shares allocate the remaining CPU bandwidth through a Weighted Fair Queuing scheduler. CPU reservations and shares may be viewed as separate services, and may be individually priced according to supply and demand. For example, a low-priority application may be unable to buy reservations, and may thus need to settle for shares, which may be priced lower. A high-priority, mission-critical application, on the other hand, may have sufficient budget to afford sufficient reservations to support its needs);
automatically determining, based on the optimization criteria and the second updated information, a third provider site of the provider network (para. [0013] computing a second utilization value for running the workload on a third provider based at least in part on the determined cost for hosting the template on the third provider, the determined cost of moving the workload to the third provider, and a second determined remaining budget capacity, wherein the third provider is another cloud-based service provider);
deploying the second computing service to the third provider site (para. [0017]-[0018] (f) computing a utilization value for hosting the workload on the third provider based at least in part on the determined cost for hosting the workload on the third provider and the determined cost of moving the workload to the third provider; and (g) moving the workload to the third provider if the utilization value for running the workload on the third provider exceeds a utilization value of continuing to run the workload on the selected one of the first or second provider).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Akolkar to include determining second updated information for a second computing service deployed on the first provider site; automatically determining, based on the optimization criteria and the second updated information, a third provider site of the provider network; and deploying the second computing service to the third provider site, using the teaching of Dailianas. One of ordinary skill in the art would have been motivated to do so in order to set proactive automation policies to optimize or improve performance and resource utilization, detect and resolve operational problems and performance bottlenecks, allocate priorities and usage charges to different applications, and plan capacity expansions.
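For illustration only, the move decision attributed to Dailianas above (move the workload if the utilization value on the new provider, accounting for the cost of moving, exceeds the utilization value of staying) can be sketched as follows. The simplified utilization-value formula and all names are hypothetical, not taken from the reference:

```python
def utilization_value(hosting_cost, budget):
    """Illustrative simplification: utilization value modeled as the
    remaining budget capacity after paying the hosting cost."""
    return budget - hosting_cost

def move_workload(current_cost, new_cost, moving_cost, budget):
    """Move only if the utilization value on the new provider, net of the
    one-time cost of moving, exceeds the value of continuing to run on
    the current provider."""
    stay = utilization_value(current_cost, budget)
    move = utilization_value(new_cost + moving_cost, budget)
    return move > stay
```

Under this sketch, a cheaper provider is selected only when its cost advantage outweighs the cost of the move itself.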
Regarding claim 6, claim 4 is incorporated and Akolkar may not explicitly disclose wherein determining the updated information and automatically determining the second provider site both occur periodically after deployment of the computing service at the first provider site.
However, Dailianas discloses wherein determining the updated information and automatically determining the second provider site both occur periodically after deployment of the computing service at the first provider site (para. [0031] cloud resources are represented as “templates,” which represent a package of cloud-based resource offerings. A template may reflect the needs of a consumer but also specifies a set of resources offered by one or more cloud providers for a known price. The template thereby permits cloud services to be generalized across providers. Each template is associated with a cost for each cloud provider that offers services corresponding to the template, either separately (on demand) or as a bundle (which may be discounted relative to the separate services). For example, a template may specify a bundle of storage and CPUs offered for a fixed period of time. In various embodiments, templates are reviewed and assessed periodically as provider offerings change and as consumer utilization patterns change to favor different collections of resources. [0100] Once the supply chain model databases 246 have been updated, the element manager continues to update the dynamic attributes of its respective service objects (such as the “used” and “available” attributes). For example, a server manager 238 that is responsible for managing HBA resources will initialize the supply chain model databases 246 with corresponding simple service objects relating to the HBA. The server manager 238 will then monitor and update the “used” and “available” attributes of this simple service object by periodically accessing the HBA instrumentation. [0166] It will be understood that the principles discussed herein apply not only to initial placement of applications or workloads with one or more providers, but also a recurring, periodic or continuous monitoring of available providers. 
For example, once a certain demand has been accounted for through deployment or migration to a cloud provider, the principles disclosed herein can be employed to continuously explore and/or shop for alternative providers that may provide one or more benefits over the initially selected provider. This may include bringing an application or workload back to an on-premises or private provider).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Akolkar and include wherein determining the updated information and automatically determining the second provider site both occur periodically after deployment of the computing service at the first provider site using the teaching of Dailianas. One of ordinary skill in the art would have been motivated to do so in order to set proactive automation policies to optimize or improve performance and resource utilization, detect and resolve operational problems and performance bottlenecks, allocate priorities and usage charges to different applications, and plan capacity expansions.
Regarding claim 7, claim 1 is incorporated and Akolkar may not explicitly disclose wherein automatically determining the second provider site is performed in response to detecting the change event.
However, Dailianas discloses wherein automatically determining the second provider site is performed in response to detecting the change event (para. [0110] the consumer service period, the provider element manager notifies the consumer element manager (step 518), which may proceed to shop for a new provider offering the lowest cost services to meet the consumer's needs (step 520). The consumer element manager determines whether the price of the new provider found is lower than the price of the old provider (where the consumer resides at the time)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Akolkar and include wherein automatically determining the second provider site is performed in response to detecting the change event using the teaching of Dailianas. One of ordinary skill in the art would have been motivated to do so in order to set proactive automation policies to optimize or improve performance and resource utilization, detect and resolve operational problems and performance bottlenecks, allocate priorities and usage charges to different applications, and plan capacity expansions.
Regarding claim 8, claim 4 is incorporated and Akolkar may not explicitly disclose wherein determining updated information for a computing service comprises detecting the updated customer service information based on operating metrics of the computing service deployed at the first provider site.
However, Dailianas discloses wherein determining updated information for a computing service comprises detecting the updated customer service information based on operating metrics of the computing service deployed at the first provider site (para. [0100]-[0101] once the supply chain model databases 246 have been updated, the element manager continues to update the dynamic attributes of its respective service objects (such as the “used” and “available” attributes). For example, a server manager 238 that is responsible for managing HBA resources will initialize the supply chain model databases 246 with corresponding simple service objects relating to the HBA. The server manager 238 will then monitor and update the “used” and “available” attributes of this simple service object by periodically accessing the HBA instrumentation. [0101] the supply chain economy matches consumers and providers of resources or services by using pricing and budgeting. Demand for services is matched to supply through a shopping model. A consumer element manager (such as one of element managers 234, 236, 238, 240, 242, 244 shown in FIG. 2), desiring services from a provider element manager, queries the supply chain model databases 246 in search of the best priced provider or providers of the desired services. The query specifies requirements and the service or services the element manager is requesting).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Akolkar and include wherein determining updated information for a computing service comprises detecting the updated customer service information based on operating metrics of the computing service deployed at the first provider site using the teaching of Dailianas. One of ordinary skill in the art would have been motivated to do so in order to set proactive automation policies to optimize or improve performance and resource utilization, detect and resolve operational problems and performance bottlenecks, allocate priorities and usage charges to different applications, and plan capacity expansions.
Regarding claim 10, claim 3 is incorporated and Akolkar may not explicitly disclose wherein determining that an improvement metric for deploying the computing service at the second provider site exceeds the threshold for the predetermined amount of time comprises evaluating network performance information of the provider network and computing availability at the second provider site.
However, Dailianas discloses wherein determining that an improvement metric for deploying the computing service at the second provider site exceeds the threshold for the predetermined amount of time comprises evaluating network performance information of the provider network and computing availability at the second provider site (para. [0011] (g) computing a utilization value for running the workload on the second provider based at least in part on the determined cost of the optimal template on the second provider, the determined cost of moving the workload to the second provider, and whether any template resources exceeding the workload resource requirement can be deployed by the consumer manager in the computer system. [0112] the container 120 may be moved to server 104 so that the I/O pathway becomes more (or entirely) local to server 104, thus benefiting from higher expected bandwidth capacity and lower latency)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Akolkar and include wherein determining that an improvement metric for deploying the computing service at the second provider site exceeds the threshold for the predetermined amount of time comprises evaluating network performance information of the provider network and computing availability at the second provider site using the teaching of Dailianas. One of ordinary skill in the art would have been motivated to do so in order to set proactive automation policies to optimize or improve performance and resource utilization, detect and resolve operational problems and performance bottlenecks, allocate priorities and usage charges to different applications, and plan capacity expansions.
Regarding independent claim 12, the claim corresponds to independent claim 1 and is therefore rejected for similar reasoning. Akolkar further discloses at least one processor; and memory, operatively connected to the at least one processor and storing instructions that are executed by the at least one processor (see Figs. 1 and 4).
Regarding claim 13, claim 12 is incorporated. Claim 13 corresponds to claim 2 and is therefore rejected for similar reasoning.
Regarding claim 15, claim 12 is incorporated. Claim 15 corresponds to claim 4 and is therefore rejected for similar reasoning.
Regarding claim 16, claim 15 is incorporated. Claim 16 corresponds to claim 5 and is therefore rejected for similar reasoning.
Regarding independent claim 17, the claim corresponds to independent claim 1 and is therefore rejected for similar reasoning.
Regarding claim 18, claim 17 is incorporated. Claim 18 corresponds to claim 6 and is therefore rejected for similar reasoning.
Regarding claim 19, claim 17 is incorporated. Claim 19 corresponds to claim 8 and is therefore rejected for similar reasoning.
9. Claims 9, 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Akolkar et al. in view of Dailianas et al., in view of Eberlein, and further in view of Chheda et al. (US 10243819 B1), hereinafter referred to as Chheda.
Regarding claim 9, claim 1 is incorporated and Akolkar, Dailianas and Eberlein may not explicitly disclose determining a type of application for the computing service; retrieving a template corresponding to the type of application; customizing the user interface based on the template, wherein the user interface prompts a user to input at least some of the service information and at least some of the optimization criteria.
However, Chheda discloses determining a type of application for the computing service; retrieving a template corresponding to the type of application; customizing the user interface based on the template, wherein the user interface prompts a user to input at least some of the service information and at least some of the optimization criteria ((col. 16 lines 35-45), the template management component may provide a user interface to allow customers to select resources, configuration values, interconnections, and other parameters. The template management component may, based on the inputs, generate a template corresponding to the requested parameters. By allowing a customer to select the parameters, customers can customize aspects of a template at runtime when the stack is constructed. For example, the customer may determine a database size, instance type, and webserver port numbers when a stack is created. A customer may also use a parameterized template to create multiple stacks that may differ in a controlled way. For example, the customer's instance types may differ between geographic regions. (col. 18 lines 18-33, col. 19 lines 38-48), the API may facilitate requests for generating recommendations and templates. For example, the API can be called with information such as a resource identifier, resource configuration, and applications. After the API is called, in one embodiment the resource analysis service 180 may take actions such as: invoke a detection function to generate a baseline of available metrics pertaining to the resource analysis and individual customer resources to determine if there are any metrics that indicate behavior outside of one or more trends; access activity logs for the customer's resources; retrieve configuration of the customer's resources; retrieve connection states for the customer's resources; call available APIs that can provide metrics for the customer's resources. (col. 17 lines 22-53), the resource analysis service 180 may provide the ability for customers of the provider network to create templates based on recommendations generated by the resource advisor component. Customers may use resource advisor component recommendations to improve their resource configurations and then create templates using the template management component. Additionally, the customer may modify the created templates as described above).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Akolkar, Dailianas and Eberlein and include determining a type of application for the computing service; retrieving a template corresponding to the type of application; customizing the user interface based on the template, wherein the user interface prompts a user to input at least some of the service information and at least some of the optimization criteria using the teaching of Chheda. One of ordinary skill in the art would have been motivated to do so in order to improve overall system performance and close security gaps. Large-scale computing resources are provided for customers, thus allowing computing resources to be efficiently and securely shared among multiple customers.
Regarding claim 11, claim 10 is incorporated and Akolkar further discloses utilizing the additional network performance information to determine whether to utilize the different provider in providing network connectivity to the second provider site (para. [0006] providing a cloud-based service includes receiving information from a customer of the cloud-based service over a conversational interface, the information identifying a requirement of the customer related to a resiliency of the service, generating a first model that represents the requirement of the customer, receiving information from a cloud-based service provider, wherein the information specifies at least one resiliency attribute of the cloud-based service provider, generating a second model that represents the at least one resiliency attribute, wherein the second model is indexed within an ontology-based organizational framework that indexes a plurality of models associated with a plurality of cloud-based service providers, matching the first model to the second model when the at least one resiliency attribute indicates that the cloud-based service provider is capable of satisfying the requirement of the customer, and forwarding information about the cloud-based service provider to the customer. [0012] model a customer's resiliency needs and various service providers' abilities to provide resiliency and then match the customer with the service providers who can potentially meet the customer's resiliency needs).
Akolkar in view of Dailianas may not explicitly disclose providing an application programming interface (API); and receiving additional network performance information from a different provider through the API. However, Chheda discloses providing an application programming interface (API); and receiving additional network performance information from a different provider through the API ((col. 13 lines 36-65, col. 14 lines 1-11 and 36-46), a service, such as resource analysis service 180, may be configured to provide real-time or accumulated and/or archived monitoring of a customer's resources. The monitored resources may include instances of various types, such as reserved instances and spot instances as discussed above. The monitored resources may also include other computing resources provided by the service provider, such as storage services and database services. The resource analysis service 180 may provide metrics, such as CPU utilization, data transfers, and disk usage activity. The resource analysis service 180 may be made accessible via an API… a placement calculation may also be used when selecting a prepared resource to transfer to a client account. A client requests a virtual machine having an operating system. The provisioning server 514 may determine that the request may be satisfied with a staged volume in a slot 504. A placement decision may be made that determines which infrastructure may be desirable to share and which infrastructure is undesirable to share. Using the placement decision, a staged volume that satisfies at least some of the placement decision characteristics may be selected from a pool of available resources. For example, a pool of staged volumes may be used in a cluster computing setup. When a new volume is requested, a provisioning server 514 may determine that a placement near other existing volumes is desirable for latency concerns).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Akolkar in view of Dailianas and include providing an application programming interface (API); and receiving additional network performance information from a different provider through the API using the teaching of Chheda. One of ordinary skill in the art would have been motivated to do so in order to improve overall system performance and close security gaps. Large-scale computing resources are provided for customers, thus allowing computing resources to be efficiently and securely shared among multiple customers.
Regarding claim 20, claim 17 is incorporated. Claim 20 corresponds to claim 9 and is therefore rejected for similar reasoning.
Conclusion
10. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kidest Mendaye whose telephone number is (571)272-2603. The examiner can normally be reached on Monday through Friday 7:00 am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ario Etienne can be reached on (571) 272-4001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
01/30/2026
/KIDEST MENDAYE/
Examiner, Art Unit 2457
/ARIO ETIENNE/Supervisory Patent Examiner, Art Unit 2457