Prosecution Insights
Last updated: April 19, 2026
Application No. 18/217,096

SYSTEMS AND METHODS FOR MODELING AND ANALYSIS OF INFRASTRUCTURE SERVICES PROVIDED BY CLOUD SERVICES PROVIDER SYSTEMS

Status: Non-Final OA (§101, §103)
Filed: Jun 30, 2023
Examiner: ROTARU, OCTAVIAN
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Stripe, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 28% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 2m
Grant Probability With Interview: 67%

Examiner Intelligence

Career Allow Rate: 28% (116 granted / 409 resolved; -23.6% vs TC avg)
Interview Lift: +38.9% among resolved cases with an interview
Avg Prosecution: 4y 2m typical timeline
Currently Pending: 48
Total Applications: 457 across all art units
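The lift figure above is simple cohort arithmetic. A minimal sketch, assuming the dashboard compares allowance rates between resolved cases with and without an examiner interview; the cohort counts below are hypothetical illustrations, not the examiner's actual split:

```python
# Hypothetical sketch of an "interview lift" metric: the allowance rate
# among resolved cases with an interview minus the rate among resolved
# cases without one. Cohort counts are illustrative only.

def allowance_rate(granted: int, resolved: int) -> float:
    """Fraction of resolved applications that were allowed."""
    return granted / resolved if resolved else 0.0

# Overall career rate from the figures shown above (116 / 409).
career = allowance_rate(granted=116, resolved=409)

# Illustrative split of the 409 resolved cases into two cohorts.
with_interview = allowance_rate(granted=40, resolved=65)
without_interview = allowance_rate(granted=76, resolved=344)

lift = with_interview - without_interview
print(f"career rate:    {career:.1%}")
print(f"interview lift: {lift:+.1%}")
```

The point of the sketch is only that "lift" is a difference of two cohort rates, so a small with-interview cohort can swing it substantially.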

Statute-Specific Performance

§101: 39.2% (-0.8% vs TC avg)
§103: 10.9% (-29.1% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 29.9% (-10.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 409 resolved cases.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application is a continuation of U.S. Application No. 16/905,205, filed on 06/18/2020. See MPEP §201.07. In accordance with MPEP §609.02 A.2 and MPEP §2001.06(b) (last paragraph), the Examiner has reviewed and considered the prior art cited in the Parent Application. Also in accordance with MPEP §2001.06(b) (last paragraph), all documents cited or considered "of record" in the Parent Application are now considered cited or "of record" in this application. Additionally, Applicant(s) are reminded that a listing of the information cited or "of record" in the Parent Application need not be resubmitted in this application unless Applicant(s) desire the information to be printed on a patent issuing from this application. See MPEP §609.02 A.2. Finally, Applicant(s) are reminded that the prosecution history of the Parent Application is relevant in this application. See, e.g., Microsoft Corp. v. Multi-Tech Sys., Inc., 357 F.3d 1340, 1350, 69 USPQ2d 1815, 1823 (Fed. Cir. 2004) (holding that statements made in prosecution of one patent are relevant to the scope of all sibling patents).

DETAILED ACTION

The following NON-FINAL Office action is in response to Applicant's request for continued examination filed on 09/10/2025.

Status of Claims

Claims 1-6, 14-18, and 20 have been amended by Applicant. Claims 1-20 are currently pending and have been rejected as follows.
Continued Examination under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/10/2025 has been entered.

Priority

The Examiner notes that Applicant claims priority from Application 16/905,205, filed 06/18/2020, which in turn claims priority from Provisional Application 62/864,095, filed 06/20/2019.

Response to Arguments / Amendments

Applicant's 09/10/2025 amendment necessitated new grounds of rejection in this action.

Response to Applicant's rebuttal of the 35 USC 112(b) rejection

The 112(b) rejection in the previous action is withdrawn in view of Applicant's amendment.

Response to Applicant's rebuttal of the 35 USC 101 rejection

Step 2A prong one: Remarks 09/10/2025 p.16-p.18 argue that independent Claims 1, 14, and 20, as read in light of the Original Specification, do not recite mathematical concepts, mental processes, or methods of organizing human activity, but rather specify an application executing on an electronic device, rendering specific user interfaces that transform raw data into actionable configurations for deploying computing resources, and that, when the operations and technical mechanisms are considered as a combination, they do not represent well-understood, routine, or conventional operations. The Examiner has fully considered the Step 2A prong one argument but respectfully disagrees, finding it unpersuasive, first noting that Applicant has amended the last two limitations of independent Claims 1, 14, and 20 to replace the term "report" with "data package".
Yet replacing the term "report" with "data package", as a claim-drafting effort, does not necessarily render the claims less abstract and eligible because, according to MPEP 2106 II, ¶1, "Evaluating eligibility based on the BRI also ensures that patent eligibility under 35 U.S.C. 101 does not depend simply on the draftsman's art". Here, when read in light of Original Specification ¶[0076], 5th sentence, the "report" is disclosed to comprise a "data package", which is broad enough to encompass a mere text-based document. Thus, when tested per MPEP 2106.04(a)(2) III, the Examiner finds that the aid of "a commerce platform system" in "generating" a document or "data package" having the adjust[ments] of "costs" for respective "usage information" can be interpreted as a computer aid performing an analogous mental process of evaluation and judgment that would otherwise have been performed by a skilled artisan, and further displayed or observed by the skilled artisan on the canvas or user interface of the computer tool, in a manner not meaningfully different from what was long achieved by a pen-and-paper report. The use or aid of such computer interfaces is far from any patent-eligible transformation alleged by Applicant at Remarks 09/10/2025 p.18 ¶2. This is because MPEP 2106.04(a)(2) III C is clear that (1) performing a mental process on a generic computer, (2) performing a mental process in a computer environment, and (3) using a computer as a tool to perform a mental process all do not preclude the claims from reciting, describing, or setting forth the abstract exception. Add to this finding the fact that, no matter which of the "report" or "data package" terminology is used, its contents still refer to the abstract concepts of "costs" and "usage information" or consumption [i.e., at the 6th limitation of each of independent Claims 1, 14, and 20], and it becomes increasingly clear that the character of the claims as a whole remains undeniably abstract.
In fact, the preponderant recitation of the "commerce platform system" vis-à-vis the "provider system" throughout Claims 1-10 and 12-20 can be argued to be not only a computer-aided tool but also not meaningfully different from the electronic clearinghouse in Dealertrack. MPEP 2106.04(a)(2) II ¶6, 4th sentence, is clear that computer interactions do not preclude the claims from reciting, describing, or setting forth the abstract exception. The fundamental, economic, and/or commercial character of the claims is further corroborated by Applicant at Remarks 09/10/2025 p.17 ¶1, citing Original Specification ¶[0023]-¶[0024], stating that the claims propose a solution to address the variability of cost and utilization or consumption of resources. To this, the Examiner further adds that the term "fundamental" is not used in the sense of necessarily being old or well-known, but rather as referring to building blocks of the modern economy, as explicitly stated by MPEP 2106.04(a)(2) II A. Accordingly, the Examiner stresses that the Step 2A prong one test here is not whether the operations are well-understood or conventional, as raised by Applicant at Remarks 09/10/2025 p.18 ¶2, but rather whether said operations correspond to computer-aided abstract steps, per MPEP 2106.04(a)(2) III C (1)-(3), and/or building blocks of the modern economy, per MPEP 2106.04(a)(2) II A.
Based on such guidelines, the Examiner finds that the recitations of "the first data" [that] "comprises information indicative of one or more costs of cloud services provider resource usage by the commerce platform system over a period of time" and "the second data" [that] "comprises information indicative of execution of services of the commerce platform system over the period of time", as amended in independent Claims 1, 14, and 20, still represent such building blocks of cost and consumption for the modern economy, notwithstanding their limited application to the technological environment of "cloud services". Thus, the abstract character is clearly evident. Further, the fact that such fundamental economic practices are argued by Applicant at Remarks 09/10/2025 p.17 ¶1 to pertain to accessing several sources to generate a model of how various services utilize resources within the distributed cloud-based system based on the data, using the model to attribute data related to system performance, and generating the data packages having the analysis, based on the modeling/analysis/reporting, to return to a prior, more efficient state, does not necessarily render the claims less abstract and eligible. In FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 U.S.P.Q.2d 1293 (Fed. Cir. 2016), as cited by MPEP 2106.04, the Federal Circuit found that an analogous accessing, compiling, and combining of disparate information sources, to make it possible to generate a full picture of a user's activity, identity, frequency of activity, and the like in a computer environment, did not differentiate the process from ordinary mental processes, whose implicit exclusion from 101 undergirds the information-based category of abstract ideas. Elec. Power, 830 F.3d 1350, 2016 WL 4073318, at *4.
It then follows that the access, combination, and compilation of the purported several sources, to provide an adjust[ed] or alter[ed] [picture of] "cloud services provider costs for one or more of the cloud services provider usage information and commerce platform execution information over the period of time", would similarly not preclude the claims from describing or setting forth the abstract idea. Also, the Examiner again stresses that the alleged improvement in alter[ing] "cloud services provider costs" [attributed] "for one or more of the cloud services provider usage information and commerce platform execution information over the period of time" does not necessarily render the claims patent eligible, because MPEP 2106.04 I ¶3 is clear that claims directed to narrow laws that have limited applications are still patent ineligible. Further, as stated by MPEP 2106.04(d)(1), "improvement in the judicial exception itself is not an improvement in technology". Similarly, MPEP 2106.04 I cites Myriad, 569 U.S. at 591, 106 USPQ2d at 1979, to underline that even a "groundbreaking, innovative, or even brilliant discovery" [akin to what is argued here at Remarks 09/10/2025 p.16 ¶1-p.18] "does not by itself satisfy the §101 inquiry", as corroborated by SAP Am., Inc. v. InvestPic, LLC, No. 2017-2081, 2018 BL 275354 (Fed. Cir. Aug. 2, 2018): "even if one assumes that the techniques claimed are groundbreaking, innovative, or even brilliant" [akin to what is argued here at Remarks 09/10/2025 p.16 ¶1-p.18], "those features are not enough for eligibility because their innovation is innovation in ineligible subject matter" [here, improving the abstract "Certain Methods of Organizing Human Activity"]. "An advance of that nature is ineligible for patenting".
Simply said, as in SAP, supra, "no matter how much of [such] an advance in the field" the claims would recite, the advance would still "lie entirely in the realm of abstract ideas", with no plausibly alleged innovation in a non-abstract application realm. This finding is further corroborated by Versata Dev. Grp., Inc. v. SAP Am., Inc., 115 USPQ2d 1681 (Fed. Cir. 2015), underlining the difference between an improvement to an entrepreneurial goal or objective versus an improvement to actual technology. See MPEP 2106.04. As to the last limitation of "configuring deployment of the services of the commerce platform system on the hardware computing resources of the cloud services provider system based on the data package" in Claims 1, 14, and 20, as argued by Applicant at Remarks 09/10/2025 p.16 ¶1 and p.18 ¶2, such feature will be more granularly tested at the subsequent Step 2A prong two below. For now, given the preponderance of legal evidence above, it is clear that the claims' character as a whole still recites, describes, or, at a minimum, sets forth the computer-aided mental processes and/or the associated fundamental or commercial economic practices of the abstract "Certain Methods of Organizing Human Activity" grouping. This concludes Step 2A prong one.

Step 2A prong two: Remarks 09/10/2025 p.20 ¶2 argues that the amended claims provide a method performed by a commerce platform system to receive data from remote cloud systems and internal services, analyze the data, generate a directed graph as a particular kind of model of the services of the commerce platform system, generate data packages having analysis results, compare relevant deployment factors over different periods of time to detect when changes in a factor exceed one or more thresholds, and then configure a service deployment to a prior state based on such detection.
Thus, Applicant argues that the communications, analysis, modeling, detection, and configuration provide direct and tangible improvements for the distributed cloud-based services deployed by the commerce platform system, as well as for resource utilization at the cloud services provider system. The Examiner has fully considered Applicant's Step 2A prong two argument but respectfully disagrees, finding it unpersuasive, by submitting that the capabilities to receive and analyze data to generate a directed graph as a particular kind of model of the services of the commerce platform system, to generate data packages (i.e., a text-based document in light of Spec. ¶[0076], 5th sentence) having analysis results, and to compare relevant deployment factors over different periods of time to detect when changes in a factor exceed one or more thresholds still fall within the confines of the abstract exception, because they are not meaningfully different from the abstract collecting of information, analyzing it, and displaying certain [or particular kinds of] results of the collection and analysis of Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016), as cited by MPEP 2106.04(a)(2) III. Thus, far from an improvement in actual technology or the computer itself, the argued limitations would at most improve the abstract exception itself, which was tested above per MPEP 2106.04(d)(1) and MPEP 2106.04 I and shown not to render the claims patent eligible.
As to the last limitation of "configuring deployment of the services of the commerce platform system on the hardware computing resources of the cloud services provider system based on the data package", as initially argued by Applicant at Remarks 09/10/2025 p.17 ¶1 and p.18 ¶2, and now argued at Remarks 09/10/2025 p.20 ¶2, 2nd sentence, the Examiner points to MPEP 2106.05(f)(1) and notes that such limitation does not provide any technological details as to how the "configuring deployment of the services of the commerce platform system" is achieved. In the absence of such technological details, the Examiner finds that such "configuring deployment of the services of the commerce platform system" represents a general application of the abstract idea with respect to a "commerce platform", which according to MPEP 2106.05(f)(3) represents mere instructions to apply the abstract idea, which does not integrate it into a practical application. Further, as corroborated by MPEP 2106.04, the 2019 PEG Advanced Module Slide 20, and the USPTO Memorandum, Recent Subject Matter Eligibility Decisions: McRO, Inc. dba Planet Blue v. Bandai Namco Games America Inc. and BASCOM Global Internet Services v. AT&T Mobility LLC, November 2, 2016, p.2 ¶5-¶6, the asserted particular solution still needs to be a technological solution. As clarified by MPEP 2106.05(a) ¶5, "An important consideration in determining whether a claim is directed to an improvement in technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome". Here, despite Remarks 09/10/2025 p.20 ¶2 alleging the contrary, the current claims do the latter as opposed to the former.
Specifically, when tested per MPEP 2106.05(a), the amended limitations as identified and tested above comprise additional, computer-based elements that do not provide the requisite degree of technological improvement to integrate any purported abstract idea into a practical application [Step 2A prong two]. Such narrowing of the abstract cloud-services cost and service analysis to a mere general statement of "configuring deployment of the services of the commerce platform system on the hardware computing resources of the cloud services provider system based on the data package" could also be viewed through the prism of narrowing the abstract idea to a field of use or technological environment, which again would not integrate it into a practical application when tested per MPEP 2106.05(h)(vi), because narrowing the combination of collecting information, analyzing it, and displaying certain results of the collection and analysis to data related to a particular technological environment does not integrate the abstract idea into a practical application. Therefore, the additional computer-based elements do not integrate the abstract idea into a practical application. This concludes Step 2A prong two. Based on the preponderance of evidence above, the Examiner submits that the claims still recite, or at least describe or set forth, the abstract exception, with no additional computer-based elements capable, either alone or in combination, of integrating the abstract exception into a practical application and, for these same reasons, also incapable of providing significantly more. Accordingly, even as amended, the argued claims are believed to remain patent ineligible.
----------------------------------------

Response to Prior Art Arguments

Applicant's 09/10/2025 amendment necessitated the new grounds of rejection in this Office action.

Prior art Argument #1: Remarks 09/10/2025 p.21 ¶4 argues that Chen et al., US 20170083585 A1, does not teach "a super sink node representing the commerce platform system" at Claim 1. The Examiner relies on Charles Brian O'Kelley, US 20070185779 A1, Fig. 3 and ¶[0016]: the transaction involves the purchase of a commodity or service; the source node may represent a seller of the product or commodity or a provider of the service; and the particular sink node may represent a potential buyer of the product or commodity or a potential consumer of the service. ¶[0018]: the path between the source node and the particular sink node may pass through one or more interior nodes of the graph. If the transaction involves the purchase of a product, commodity, or service, the source node may represent a seller of the product or commodity or a provider of the service, each of the interior nodes may represent an intermediary to facilitate the transaction, and the particular sink node may represent a potential buyer of the product or commodity or a potential consumer of the service. If the transaction involves the sale of a product, commodity, or service, the source node represents a buyer of the product or commodity or a consumer of the service, each of the one or more interior nodes represents an intermediary to facilitate the transaction, and the particular sink node represents a potential seller of the product or commodity or a potential provider of the service.
For example, at Fig. 3 and ¶[0056], 4th-5th sentences: one of the nodes of graph 300, designated as a source node, represents the end seller. One or more of the nodes of graph 300, each designated as a sink node, represents a potential end buyer. Disposed between a sink node and the source node may be interior nodes (or "int. node"), each representative of an intermediary. ¶[0067], 2nd sentence: at each branching point among the nodes connected to and in the tier below the given common branching point, and such auctions within the overall hierarchy are sometimes referred to hereinbelow more concisely as being auctions at a branching point and tier (or, similarly, at a tier and branching point), where the tier is intended to refer to the tier of the bidding nodes connected to and below the common branching point. Thus, the prior art teaches or at least suggests Applicant's contested feature.

Prior art Argument #2: Remarks 09/10/2025 p.21 ¶5-p.22 ¶1 argues that Chen, US 20170083585 A1, does not teach "nodes between the source node and the super sink node representing interconnected services of the commerce platform system and the hardware computing resources of the cloud services provider system" as recited at Claim 1. Chen et al., US 20170083585 A1, teaches "nodes" "representing interconnected services of the commerce platform system and the hardware computing resources of the cloud services provider system" (Chen ¶[0202], 1st-3rd sentences: to provide an alternative to an entirely on-premises environment for system 108, one or more of the components of a data intake and query system instead may be provided as a cloud-based service. In this context, a cloud-based service refers to a service hosted by one or more computing resources accessible to end users over a network, by using a web browser or other application on a client device to interface with the remote computing resources.
For example, a service provider may provide a cloud-based data intake and query system by managing computing resources configured to implement various aspects of the system (e.g., forwarders, indexers, search heads, etc.) and by providing access to the system to end users via a network. To this end, at ¶[0301], a topology map interface enables users to select edges displayed in a topology map and specify an action to be applied to all nodes connected [or interconnected] by the selected edges. For example, a user may select an edge connecting a 1st node representing a server [or hardware] instance and a 2nd node representing a storage [or service] volume attached [or interconnected] to the server [or hardware] instance, and further select an option to back up [as a service] the connected resources. In response, the cloud computing application may send a command to a cloud computing service to back up both the server [or hardware] instance and the storage [or service] volume. As another example, an interface enables users to select a particular node and apply an action to any other node connected [or interconnected] to the particular node by an edge. For example, a user may select a particular node of a topology map representing a subnet, where the particular node is connected to server instances by a plurality of edges. The user may further specify an action (startup, shutdown, backup, etc.) that may then be applied to all of the resources connected to the selected node.) While Chen ¶[0301], 5th-6th sentences, recites a particular node of a topology map representing a subnet, where the particular node is connected to a plurality of server instances by a plurality of edges, and the user may further specify an action (e.g., startup, shutdown, backup, etc.) that may then be applied to all of the resources connected to the selected node, Chen does not go so far as to explicitly recite a "source node" and "super sink node" as claimed.
O'Kelley, however, in the analogous art of modeling services of business entities across a network, teaches or at least suggests a source node and super sink node (O'Kelley ¶[0016]: the transaction involves the purchase of a commodity or service; the source node may represent a seller of the product or commodity or a provider of the service; and the particular sink node may represent a potential buyer of the product or commodity or a potential consumer of the service. ¶[0018]: the path between the source node and the particular sink node may pass through one or more interior nodes of the graph. If the transaction involves the purchase of a product, commodity, or service, the source node may represent a seller of the product or commodity or a provider of the service, each of the interior nodes may represent an intermediary to facilitate the transaction, and the particular sink node may represent a potential buyer of the product or commodity or a potential consumer of the service. If the transaction involves the sale of a product, commodity, or service, the source node represents a buyer of the product or commodity or a consumer of the service, each of the one or more interior nodes represents an intermediary to facilitate the transaction, and the particular sink node represents a potential seller of the product or commodity or a potential provider of the service. For example, at Fig. 3 and ¶[0056], 4th-5th sentences: one of the nodes of graph 300, designated as a source node, represents the end seller. One or more of the nodes of graph 300, each designated as a sink node, represents a potential end buyer. Disposed between a sink node and the source node may be interior nodes (or "int. node"), each representative of an intermediary.
¶[0067], 2nd sentence: at each branching point among the nodes connected to and in the tier below the given common branching point, and such auctions within the overall hierarchy are sometimes referred to hereinbelow more concisely as being auctions at a branching point and tier (or, similarly, at a tier and branching point), where the tier is intended to refer to the tier of the bidding nodes connected to and below the common branching point.) Thus, the prior art teaches or at least suggests Applicant's contested feature.

Prior art Argument #3: Remarks 09/10/2025 p.22 ¶2-p.23 ¶1 argues that Chen, US 20170083585 A1, does not teach "edges representing technical dependencies between the nodes, wherein flows along the edges model costs of commerce platform system service usage at the cloud services provider system based at least in part on the extracted cloud services provider resource usage information and the commerce platform execution information" recited in Claim 1. The Examiner has fully considered Applicant's prior art argument #3 but respectfully disagrees, finding it unpersuasive, because Chen, US 20170083585 A1, teaches or suggests "edges representing technical dependencies between the nodes" (Chen ¶[0278], 1st sentence: as illustrated in Fig. 20, the topology map display includes a set of interconnected nodes and edges representing a collection of cloud computing resources. Chen ¶[0284]: similar to displaying information in response to selection of a topology map node, a topology map interface may receive input selecting edges in the map and display information about the selected edge(s). For example, if a particular edge connects a 1st node representing a 1st server instance to another node representing a subnet, information about network traffic [or dependencies] transferred to and from the server instance may be displayed.
Other examples of [dependency] information that may be displayed about a particular edge include information about the origin and/or destination of network traffic, network traffic statistics (ratio of accept, deny, etc.), or any other information related to the relationship between the connected nodes), and "wherein flows along the edges model one or more costs of commerce platform system service usage at the cloud services provider system based at least in part on the extracted cloud services provider resource usage information and the commerce platform execution information" (Chen ¶[0284], 1st-3rd sentences: similar to displaying information in response to the selection of a topology map node, a topology map interface may be configured to receive input selecting one or more edges in the map and to display information about the selected edge(s). For example, if a particular edge connects a first node representing a first server instance to another node representing a subnet, information about network traffic transferred to and from the server instance may be displayed. Examples of information displayed about a particular edge include any other information related to the relationship between the connected nodes, such as at ¶[0288]: a topology map interface may display cost information associated with selected topology map elements. For example, a user may select a node representing a server instance and, in response, a topology map interface may display cost information for the server instance, such as the total cost incurred by the server instance, an estimated current bill amount, an average cost for the server instance per month, etc. ¶[0289]: a topology map interface may also display cost-efficiency information for selected map elements. For example, cloud computing services offer various types of the same computing resource based on different payment models.
For example, a cloud service provider may offer three different types of server instances, such as on-demand, reserved, and spot instances, the cost benefits of which depend on how the server instances are used. Based on a determined type of server instance and performance information associated with the instance, a topology map interface may display information indicating whether the type of server instance being used is the most cost-effective of the available types of server instances. Although the examples above illustrate the display of cost information for server instances, similar information may be displayed for selected storage volumes, network interfaces, or any other cloud computing resources. ¶[0316], 2nd-3rd sentences: a user may desire to view an animated, time-lapse display of a topology map in synchronization with other visualizations that provide performance metrics, cost and/or billing information, or other information related to the depicted resources across the displayed points in time. Examples of other data visualizations that may be displayed in conjunction with an animated topology map include line charts (e.g., displaying CPU utilization levels, network traffic levels, and/or cost information over time). ¶[0302]: a cloud computing application may display information related to portions of a topology map that represent underutilized resources and/or resources used in an inefficient manner from a cost perspective. ¶[0280], 1st sentence: a topology map may be displayed using particular graphical elements based on data related to the cost, utilization, performance, operating state, or other metrics related to each resource. For example, at ¶[0283], last sentence: if a device is in a particular operating state (e.g., if a server instance is currently shut down), additional information may be displayed, such as… cost information associated with the instance, etc. See similarly ¶[0294].
Further, see ¶[0308] and ¶[0316]: a cloud computing management application may enable display of an animated topology map (e.g., a time-lapse display) synchronized with other data visualizations. For example, a user may desire to view an animated, time-lapse display of a topology map in synchronization with one or more other visualizations that provide performance metrics, cost and/or billing information, or other information related to the depicted resources across the displayed points in time. Examples of other data visualizations that may be displayed in conjunction with an animated topology map include line charts (e.g., displaying CPU utilization levels, network traffic levels, and/or cost information over time). In this example, a topology mapping module may enable display of a response-time line chart alongside the topology diagram that enables a user to more easily determine how the number of instances affects response time. Specifically, at ¶[0412]: the animated topology map includes nodes representing the cloud computing resources and edges representing relationships among the cloud computing resources, wherein at least one node of the plurality of nodes is displayed using a particular graphical element based on cost data associated with the at least one node. ¶[0305], 1st-2nd and 4th sentences: Fig. 26 illustrates a portion of interface 2600 for selecting one or more topology map elements and providing input to export data related to the selected elements. Interface 2600 includes a set of selected nodes and edges 2604 corresponding to a set of server instances and further includes an export panel 2604. A user may use export panel 2604 to specify particular data fields to export (e.g., cost and/or billing information, resource identifiers, etc.).) Accordingly, there is a preponderance of evidence showing that Chen teaches or at least suggests the contested limitation of prior art argument #3.
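For orientation, the claimed model as argued is a directed graph with a single source (the cloud services provider), interior nodes for interconnected services and hardware resources, and a "super sink" (the commerce platform system), with costs modeled as flows along the edges. A minimal sketch of that structure, under the assumption that cost attribution behaves as a conservative flow; all node names and dollar figures are hypothetical illustrations drawn from neither Chen nor O'Kelley:

```python
# Minimal sketch of a directed cost-attribution graph of the kind the
# claims describe: a source node (cloud provider), interior nodes
# (services / hardware resources), and a single "super sink"
# (the commerce platform). Edge flows carry cost downstream.
# All node names and cost figures are hypothetical illustrations.
from collections import defaultdict

# edges: (from_node, to_node) -> cost flow over the period
edges = {
    ("cloud_provider", "vm_pool"): 120.0,
    ("cloud_provider", "db_cluster"): 80.0,
    ("vm_pool", "payments_service"): 90.0,
    ("vm_pool", "reporting_service"): 30.0,
    ("db_cluster", "payments_service"): 80.0,
    ("payments_service", "commerce_platform"): 170.0,
    ("reporting_service", "commerce_platform"): 30.0,
}

def flow_balance(edges):
    """Net flow per node: inflow minus outflow. The source is the only
    net producer of cost; the super sink is the only net consumer."""
    net = defaultdict(float)
    for (src, dst), cost in edges.items():
        net[src] -= cost
        net[dst] += cost
    return dict(net)

balance = flow_balance(edges)
# Interior nodes are balanced; total cost surfaces at the super sink.
print(balance["commerce_platform"])  # 200.0
print(balance["cloud_provider"])     # -200.0
```

The design point is that, in such a model, every interior node nets to zero, so the entire provider-side cost is attributable at the super sink and can be decomposed along the incoming edges.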
--------------------------------------------------------------------------------
Objections
Claims 2 and 15 are objected to for informally reciting:
- “receiving, instructions for a reconfiguration based at least in part on the the resource deployment adjustments”; [bolded emphasis added] instead of
- “receiving, instructions for a reconfiguration based at least in part on [[the]] the resource deployment adjustments”;
Correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea; here, an abstract idea) without significantly more. The claims recite, describe, or set forth the abstract idea of “modeling and analyzing a commerce platform infrastructure”, as summarized in the preamble of independent Claims 1, 14, 20, which falls under the abstract “Certain Method of Organizing Human Activities” grouping implemented through equally abstract computer-aided “Mental Processes”3.
First, when tested per MPEP 2106.04(a)(2) II, the claims describe or set forth the abstract “Certain Method of Organizing Human Activities” grouping, namely fundamental economic practices, tested per MPEP 2106.04(a)(2) II A, and/or commercial interactions, tested per MPEP 2106.04(a)(2) II B, represented as “services” of business relations [MPEP 2106.04(a)(2) II B], reflected by a “services provider system” vis-à-vis “resource usage information” and “costs for” “the cloud services provider usage information” (independent Claims 1, 14, 20), an “increase” in such “cost” (dependent Claims 5, 9, 19), “increased usage” (dependent Claim 10), “consumption” of “services” (dependent Claim 7), “cost is a size of one or more tables within the database” (dependent Claim 8), “cost of the cloud service consumed between the two nodes” (dependent Claim 12), and “attributing costs to individual products, software development groups, customers of the commerce platform system, or a combination thereof” (dependent Claim 13). Examiner also points to MPEP 2106.04(a)(2) II A ¶2 to stress that the term fundamental is not used in the sense of necessarily being old or well-known4. In a similar vein, Examiner points to MPEP 2106.04 I mid-¶3 to stress that narrow forms of abstract ideas that have limited applications were still held ineligible. It then follows that here, limiting “services” to “cloud services” and “on the hardware computing resources of the cloud services provider”, and limiting “services provider” to “cloud services provider”, does not necessarily preclude the claims from reciting, describing, or setting forth such fundamental economic practices, business relations, commercial interactions, and management of such interactions. In fact, MPEP 2106.04(a)(2) II, ¶6, 4th sentence, is clear that certain activity between a person and a computer may still fall within the "certain methods of organizing human activity" grouping.
It then follows that here, “rendering, through the GUI on a user device, the cloud services provider cost information” at dependent Claims 3, 16 may still fall within the certain methods of organizing human activity grouping. It also appears that here, the claims set forth mitigative forms of organizing human activities, such as “configuring the commerce platform system service of the commerce platforms system to a state associated with the prior period of time” in response to “detecting an increase of a cloud system cost attributable to a commerce platform system service update that exceeds a predefined threshold” at dependent Claims 5, 19; similarly, “configuring usage of the specific service area of the cloud service provider to a previous usage configuration of the specific service area of the cloud services provider” in response to “detecting that an increased usage of a specific service area of the cloud services provider by the commerce platform is inconsistent with an anticipated increased usage of the cloud services provider resources by the commerce platform system service” at dependent Claim 10; and “wherein the scorecard generates an alert as a result of an increase in a cost of a commerce platform system service” at dependent Claim 9. Speaking of such mitigative actions, it can also be argued that such Certain Methods of Organizing Human Activities can be implemented by computer-aided observation, evaluation, and judgment of the equally abstract Mental Processes5 [MPEP 2106.04(a)(2) III] to expand upon the use of pen and paper, especially relevant here given the recitation of the “directed graph” of dependent Claims 11-13, or even through the use of computer aids, tested per MPEP 2106.04(a)(2) III C. For example, as tested per MPEP 2106.04(a)(2) III C #1, #3, the high level of generality of the expression “by commerce platform system” could be viewed as a generic computer or tool to aid in performing the mental processes of observation, evaluation, and judgment.
In a similar vein, the “cloud services” environment and the associated “services on the hardware computing resources of the cloud services provider”, as tested per MPEP 2106.04(a)(2) III #2, could be argued to be a computer environment upon which the abstract mental observation, evaluation, and judgment are performed. None of these preclude the claims from reciting, describing, or setting forth the abstract idea. As one example, MPEP 2106.04(a)(2) III A, 5th bullet, cites Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016) to state that a claim to collecting information, analyzing it, and displaying certain results of the collection and analysis recites the mental processes. - Here, such collection of information is set forth by: “receiving”, “a first data generated by a cloud services provider system, wherein the first data comprises information indicative of one or more costs of cloud services provider resource usage by the commerce platform system over a period of time”; “receiving”, “a second data for one or more systems of the commerce platform system executed by hardware computing resources of the cloud services provider system, wherein the second data comprises information indicative of execution of services of the commerce platform system over the period of time” at independent Claims 1, 14, 20; and “receiving, instructions for a reconfiguration based at least in part on the resource deployment adjustments” at dependent Claims 2, 15.
- Here, such analysis or evaluation and judgment is set forth by: “generating”, “a directed graph comprising a super sink node representing the commerce platform system; nodes between the source node and the super sink node representing interconnected services of the commerce platform system and the hardware computing resources of the cloud services provider system; and edges representing technical dependencies between the nodes, wherein flows along the edges model one or more costs of commerce platform system service usage at the cloud services provider system based at least in part on the extracted cloud services provider resource usage information and the commerce platform execution information”; “performing”, “analysis of the directed graph that models costs of commerce platform service usage at the cloud services provider to attribute costs of the cloud services provider resource usage information to execution of the services of the commerce platform system at the cloud services provider”; “deployment adjustments that alters cloud services provider costs for one or more of the cloud services provider usage information and commerce platform execution information over the period of time” at independent Claims 1, 14, 20; “comparing cloud service provider system costs from a period of time from which the data was generated with the cloud service provider system costs from a prior period of time from which a prior data package was generated; detecting an increase of a cloud system cost attributable to a commerce platform system service update that exceeds a predefined threshold” at dependent Claims 5, 19; “detecting that an increased usage of a specific service area of the cloud services provider by the commerce platform is inconsistent with an anticipated increased usage of the cloud services provider resources by the commerce platform system service” at dependent Claim 10; “wherein the analysis of the directed graph comprises solving a maximum flow analysis for the
flow graph having one or more adjusted nodes and/or edges of the flow graph” at dependent Claim 11; and “performing a maximum flow analysis using the directed flow graph having the one or more adjusted nodes and/or edges of the flow graph to determine an adjusted usage of the service of the commerce platform system across the directed graph, wherein the directed graph is decomposed into a plurality of spanning trees during the maximum flow analysis for attributing costs to individual products, software development groups, customers of the commerce platform system, or a combination thereof” at dependent Claim 13. - Here, such displaying of certain results of collection and analysis is set forth by: “rendering”, “the cloud services provider cost information” at dependent Claims 3, 16; “render the data as a scorecard detailing cloud spend for at least one of a commerce platform system service or a team” at dependent Claims 4, 17; “wherein the scorecard comprises a monthly snapshot of consumption of cloud services provider resources by a set of commerce platform system services associated with a commerce platform system developer and/or a team” at dependent Claims 7, 18; “wherein the scorecard generates an alert as a result of an increase in a cost of a commerce platform system service” at dependent Claim 9; and the results of the analysis of the graph at dependent Claims 11-13. Thus, there is a preponderance of legal evidence showing that the claims’ character is abstract.
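For readers unfamiliar with the claimed technique, the “maximum flow analysis” of Claims 11-13 names a standard graph algorithm. The following minimal sketch attributes provider costs across a source node, intermediate service nodes, and a super sink; the node names, dollar figures, and the choice of the Edmonds-Karp variant are illustrative assumptions for this sketch, not the applicant’s actual implementation.

```python
from collections import defaultdict, deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow. capacity[u][v] is the edge capacity;
    here capacities model dollars of cloud spend that can be routed
    from the provider (source) toward the platform (super sink)."""
    flow = defaultdict(lambda: defaultdict(int))
    nodes = set(capacity) | {v for u in capacity for v in capacity[u]}
    total = 0
    while True:
        # BFS for an augmenting path with positive residual capacity.
        parent, queue = {source: None}, deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and capacity.get(u, {}).get(v, 0) - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:          # no augmenting path left: done
            return total, flow
        # Walk back from sink to source, find the bottleneck, augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(capacity.get(u, {}).get(v, 0) - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += push
            flow[v][u] -= push          # residual edge allows later rerouting
        total += push

# Hypothetical graph: provider -> resource types -> platform services -> sink.
# Real edges would also carry labels (timestamp, cloud-service type, consumer,
# identifier) per the Claim 12 language; only cost capacities are modeled here.
capacity = {
    "provider": {"compute": 70, "storage": 30},
    "compute":  {"api": 50, "checkout": 20},
    "storage":  {"checkout": 30},
    "api":      {"platform": 50},
    "checkout": {"platform": 50},
}
total, flow = max_flow(capacity, "provider", "platform")
per_service = {s: flow[s]["platform"] for s in ("api", "checkout")}
```

With these assumed figures, the analysis routes the full $100 of spend and attributes $50 to each service; adjusting a node or edge and re-solving, as Claims 11 and 13 recite, would show how a deployment change shifts the attribution.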
--------------------------------------------------------------------------------
This judicial exception is not integrated into a practical application because, per Step 2A prong two, no individual or combination of the additional, computer-based elements integrates the abstract idea into a practical application. For example, when tested per MPEP 2106.05(f)(2)(i), the “computer processing system” of independent Claim 14, and the “memory” instruct[ed] “processor” of independent Claim 20, are found to merely apply the aforementioned abstract, business processes as identified above, as mere invocation of tools, which, according to MPEP 2106.05(f)(2)(i), does not integrate the abstract idea into a practical application. The same analysis and results would apply to “the commerce platform system” of independent Claims 1, 14, 20, which, if not already considered as an aid of the abstract idea identified above, would at most represent, as tested per MPEP 2106.05(f)(2)(i), an example of merely applying the above abstract, business processes, as mere invocation of a tool, which, again, would not integrate the abstract idea into a practical application. In fact, MPEP 2106.05(f)(2) ¶1, 2nd sentence, is clear that use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application.
It would then follow that the capabilities of the “commerce platform system” in “receiving” “data” as recited at independent Claims 1, 14, and recitations in the passive voice of “wherein the data is generated by a service execution tracking system of the commerce platform” at dependent Claim 6, would not integrate the abstract idea into a practical application. Additionally or alternatively, it could also be argued that recitation of “wherein the data is generated by a service execution tracking system of the commerce platform system” at dependent Claim 6, and any computerized capabilities of “detecting an increase of a cloud system cost” and “usage of a specific service area” as recited throughout dependent Claims 5, 10, 19, would represent, as tested per MPEP 2106.05(f)(2)(iii), a process for monitoring audit log data executed on a general-purpose computer6, which again would represent, in the arguendo, mere invocation of computers or machinery to perform an abstract process, which, as tested per MPEP 2106.05(f)(2)(iii), would not integrate the abstract idea into a practical application. In a similar vein, the capabilities of the “graphical user interface (GUI)” to “render” “the cloud services provider cost information” and “data” at dependent Claims 3, 4, 16, 17, if not already an aid of the abstract idea identified above, could be argued, as tested per MPEP 2106.05(f)(2)(v), to be a mere requirement to use software to tailor information and provide it to a user on a generic computer7, which again would represent an example of invoking computers or machinery as a tool to perform an existing process, without integrating the abstract idea into a practical application.
As for the general degree of automation, recited as “automatically configuring the commerce platform system service of the commerce platforms system to a state associated with the prior period of time” at dependent Claims 5, 19, and “automatically configuring usage of the specific service area of the cloud service provider to a previous usage configuration of the specific service area of the cloud services provider” at dependent Claim 10, the Examiner points to MPEP 2106.05(f)(2)(iii), finding that a process for monitoring audit log data that is executed on a general-purpose computer, where the increased speed in the process comes solely from the capabilities of the general-purpose computer, does not integrate the abstract idea into a practical application. Alternatively, the computerized environment of any alleged additional elements, as tested per MPEP 2106.05(h), would represent a mere narrowing of the abstract exception to a field of use or technological environment, which would not integrate the abstract idea into a practical application. For example, MPEP 2106.05(h) vi. cites the same Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016), to state that narrowing the combination of collecting information, analyzing it, and displaying certain results of the collection and analysis to data related to a field of use or technological environment does not integrate the abstract idea into a practical application. It would then follow that here, narrowing the combination of collecting information, analyzing it, and displaying certain results of the collection and analysis, as identified and mapped above, to data related to a cloud-based environment of computerized functions or elements, including “services on the hardware computing resources of the cloud services provider system”, would similarly not integrate the abstract idea into a practical application.
--------------------------------------------------------------------------------
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as shown above, the additional computer-based elements merely apply the already recited abstract idea [see MPEP 2106.05(f)] and/or provide a narrowing of the abstract idea to a field of use or technological environment [MPEP 2106.05(h)]. Specifically, Examiner points to MPEP 2106.05(d) II, carries over the findings tested per MPEP 2106.05(f) and (h), and submits that the additional computer-based elements also do not provide significantly more. Examiner submits that the above tests, showing the applying of the abstract idea [MPEP 2106.05(f)] and the narrowing of the abstract idea to a field of use or technological environment [MPEP 2106.05(h)], suffice to show that the additional computer-based elements also do not provide significantly more, without having to rely on the conventionality test [MPEP 2106.05(d)]. Yet assuming arguendo that further evidence would be required to demonstrate conventionality of the additional, computer-based elements, Examiner would further point to MPEP 2106.05(d) to demonstrate that said additional elements remain well-understood, routine, and conventional. In that case, Examiner would rely as evidence on Applicant’s own Specification, publications, and/or case law. Per MPEP 2106.05(d)(I)(2), Examiner points to Applicant’s own Specification as follows:
* Original Specification ¶ [0013] 2nd-3rd sentences: It will be apparent to one of ordinary skill in the art having the benefit of this disclosure, that the embodiments described herein may be practiced without these specific details.
In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the embodiments described.
* Original Specification ¶ [0014] 2nd sentence: “These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art”.
* Original Specification ¶ [0069], reciting at a high level of generality: “data packages may further be used to configure systems of the cloud service provider, such as when a detected cloud spend increase from a prior report exceeds a threshold (E.g. increase of X, increase of Y%, etc.)”.
* Original Specification ¶ [0073], reciting at a high level of generality: “Figure 5 is one embodiment of a computer system that may be used to support the systems and operations discussed herein. It will be apparent to those of ordinary skill in the art, however that other alternative systems of various system architectures may also be used”.
* Original Specification ¶ [0077], reciting at a high level of generality: “It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory or read only memory and executed by processor”.
* Original Specification ¶ [0078] last two sentences: “Conventional methods may be used to implement such a handheld device. The implementation of embodiments for such a device would be apparent to one of ordinary skill in the art given the disclosure as provided herein”.
* Original Specification ¶ [0081], reciting at a high level of generality: “It will be appreciated by those of ordinary skill in the art that any configuration of the system may be used for various purposes according to the particular implementation.
The control logic or software implementing the described embodiments can be stored in main memory, mass storage device, or other storage medium locally or remotely accessible to processor”.
* Original Specification ¶ [0082], reciting at a high level of generality: “It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled”.
* Original Specification ¶ [0083], reciting at a high level of generality: “for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the described embodiments to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles and practical applications of the various embodiments, to thereby enable others skilled in the art to best utilize the various embodiments with various modifications as may be suited to the particular use contemplated”.
The conventionality of maximum flow analysis is further corroborated by:
* US 2015/0199724 A1 at ¶ [0038] last 3 sentences, ¶ [0063].
Additionally, per MPEP 2106.05(d)(II), the additional computer-based elements can also be viewed as performing the well-understood, routine, or conventional functions of:
* receiving or transmitting data over a network8 [here, the “commerce platform system” in “receiving” “first data” that “comprises information indicative of one or more costs of cloud services provider resource usage by the commerce platform system over a period of time” and “receiving” “a second data” that “comprises information indicative of execution of services of the commerce platform system over the period of time”, as recited at independent Claims 1, 14];
* sorting information9 / electronically extracting data10 [here, analyzing “by the commerce platform” to “extract cloud services provider resource usage information and commerce platform execution information of the services over the period of time” at independent Claims 1, 14];
* arranging a hierarchy of groups11 [here, “wherein the directed graph comprises a source node associated with the cloud services provider, a super sink node associated with the commerce platform system, and a plurality of intermediate nodes that represent infrastructure of the commerce platform system and are associated with the services of the commerce platform, and wherein edges between any two nodes in the directed graph are directed and are labeled with a timestamp, a type of cloud service, a consumer of the cloud service, an identifier of the cloud service, and a cost of the cloud service consumed between the two nodes” at dependent Claim 12];
* gathering statistics12, electronic recordkeeping13, and recording a customer’s order14 [here, the “commerce platform system” in “receiving” “first data” that “comprises information indicative of one or more costs of cloud services provider resource usage by the commerce platform system over a period of time” and “receiving” “a second data” that “comprises information indicative of execution of services of the commerce platform system
over the period of time”, as recited at independent Claims 1, 14; and “the directed graph comprises a source node associated with the cloud services provider, a super sink node associated with the commerce platform system, and a plurality of intermediate nodes that represent infrastructure of the commerce platform system and are associated with the services of the commerce platform, and wherein edges between any two nodes in the directed graph are directed and are labeled with a timestamp, a type of cloud service, a consumer of the cloud service, an identifier of the cloud service, and a cost of the cloud service consumed between the two nodes” at Claim 12].
All of these fail to provide anything significantly more than what is already well-understood, routine, and conventional in light of MPEP 2106.05(d). In conclusion, Claims 1-20, although directed to statutory categories (here, “method” or process at Claims 1-13, “non-transitory computer-readable storage medium” or computer product at Claims 14-19, and “system” or machine at Claim 20), still recite, or at least set forth, the abstract idea (Step 2A prong one), with their additional, computer-based elements not integrating the abstract idea into a practical application (Step 2A prong two) or providing significantly more (Step 2B). Claims 1-20 are thus not patent eligible.
--------------------------------------------------------------------------------
Rejections under 35 U.S.C. § 103
The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6, 7, 14-18, and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Chen et al., US 2017/0083585 A1, hereinafter Chen, in view of Charles Brian O’Kelley, US 2007/0185779 A1, hereinafter O’Kelley.
Claims 1, 14, 20
Chen teaches or suggests: “A method for modeling and analyzing a commerce platform infrastructure provided by cloud services provider systems to a commerce platform system, the method comprising”: / “A non-transitory computer readable storage medium having instructions stored thereon, which when executed by a computer processing system, cause the computer processing system to perform operations for modeling and analyzing a commerce platform system infrastructure provided by cloud services provider systems to a commerce platform system, the operations comprising”: / “A system for modeling and analyzing a commerce platform system infrastructure provided by cloud services provider systems to a commerce platform system, the system comprising: a memory that stores one or more instructions; and a processor coupled with the memory to execute the one or more instructions to perform operations, comprising” (Chen ¶ [0100]-¶ [0102], ¶ [0434]-¶ [0449]):
- “receiving, by the commerce platform system, a first data generated by a cloud services provider system, wherein the first data comprises information indicative of one or more costs of cloud services provider resource usage by the commerce platform system over a period of time”; (Chen ¶ [0260] 1st sentence: a cloud computing management application 1810 includes the ability to collect data related to a collection of cloud computing resources from one or more cloud computing services and/or other sources. ¶ [0261] 3rd, 5th sentences: the providers generate activity logs and other data that record such resource activity. In an embodiment, performance data further include cost information that indicates financial costs related to the use of resources over periods of time.
For example, at ¶ [0289] 3rd sentence: the cloud service provider may offer three or more different types of server instances, such as on-demand, reserved, and spot instances, the cost benefits of which depend on how the server instances are used.)
- “receiving, by the commerce platform system, a second data for one or more systems of the commerce platform system executed by hardware computing resources of the cloud service provider system, wherein the second data comprises information indicative of execution of services of the commerce platform system over the period of time”; (Chen ¶ [0260] 1st-2nd sentences: cloud computing management application 1810 includes the ability to collect data related to a collection of cloud computing resources from one or more cloud computing services and/or other sources. The collected data generally may include any data that provides information about the operating status, performance characteristics, relationships with other resources, cost data, or any other attributes of the resources. Chen ¶ [0244] 3rd-5th sentences: cloud computing service providers group resources into various geographic regions… and view server instances associated with each region. Another example interface may be provided that displays a list [or report] of storage volumes currently in [execution or] use, including references to one or more server instances associated with each of the storage volumes. Yet another separate interface may be provided that displays a list [or report] of configured virtual private clouds, and so forth. ¶ [0246] 2nd sentence: the collected data generally may comprise any available information related to the computing resources, including performance data, relationship data, state data, log data, etc., and may originate [executed] from cloud computing service providers.
¶ [0260] 3rd sentence: the data related to the cloud computing resources may originate from one or more cloud computing service providers (e.g., including various types of log data generated by the services). ¶ [0261] 4th-6th sentences: For example, cloud computing service providers generate activity logs and other data that record each time resources are created, modified, or deleted, and further may include information about a user associated with each action, information about when each action occurred, information about resource failures, etc. Performance data may further include log data [or report] indicating performance characteristics of one or more resources, including CPU [or hardware] utilization [or execution] for server instances, volume IO counts for storage volumes, etc. In an embodiment, performance data may further include information related to the use of one or more resources over one or more periods of time. For example, at ¶ [0200] 2nd sentence: the screen in Fig. 9D displays a listing of recent tasks and events and a listing of recent log entries for a selected time range above a performance-metric graph for average CPU core utilization for the selected time range. ¶ [0261] 5th sentence: performance data further include log data [or report] indicating performance characteristics of resources, including CPU utilization [or execution] for server instances, volume IO counts for storage volumes, etc. ¶ [0264] 4th-5th sentences: In the same log file or in a different file, the cloud computing service may record information about performance of various resources, including CPU utilization [or execution] for server instances, reads [execution] and writes [execution] for storage volumes, etc. As indicated above, such data generally comprise performance data, which indicate state and/or performance information about resources, and relationship data, which may indicate information about relationships among various resources.
For additional details see ¶ [0280] 2nd sentence, ¶ [0287] 2nd sentence, ¶ [0316] 3rd sentence.)
- “analyzing, by the commerce platform system, the first data and the second data to extract cloud services provider resource usage information and commerce platform execution information of the services on the hardware computing resources of the cloud services provider system over the period of time” (Chen teaches several examples, as follows: Chen ¶ [0243] 3rd-4th sentences: By using computing resources provided by a cloud computing service provider, an organization may avoid some of the upfront investments in hardware and maintenance costs that may otherwise be incurred if the organization purchased computing hardware for itself. Further, scaling an organization's computing resource needs often may be more easily accomplished using a cloud-based computing service as an organization's compute and/or storage needs increase or decrease. ¶ [0287] 1st-2nd sentences: interface 2300 displays panel 2304 providing detailed information about selected map element 2302. Relative to information panel 2204 in Fig. 22, side panel display 2304 includes a more detailed set of information for a selected resource, including information about relationships to other resources, a line chart indicating a CPU utilization % over time, and an activity count for a particular time period. Chen ¶ [0308]: cloud computing management application 1810 enables display of animated topology maps, which provide visualizations of how a collection of cloud computing resources and relationships among the resources change over time. Examples of such time-based topology map displays include display of topology maps at specified points in time, animated topology maps displaying evolution of a collection of resources over a period of time, and comparison topology maps displaying differences between a topology map at two or more particular points in time.
¶ [0310] a topology map generation module 1814 enables display of animated topology maps, also referred to herein as time-lapse displays, which display a series of topology maps over one or more periods of time. A time-lapse display of a topology diagram may, for example, result in a movie-like display that enables users to better understand how a collection of cloud computing resources and relationships among the resources evolve over time); - “generating, by the commerce platform system, a directed graph comprising: nodes services of the commerce platform system and the hardware computing resources of the cloud services provider system;” (Chen ¶ [0202] 1st-3rd sentences: to provide an alternative to an entirely on-premises environment for system 108, one or more of the components of a data intake and query system instead may be provided as a cloud-based service. In this context, a cloud-based service refers to a service hosted by one or more computing resources accessible to end users over a network, by using a web browser or other application on a client device to interface with the remote computing resources. For example, a service provider may provide a cloud-based data intake and query system by managing computing resources configured to implement various aspects of the system (e.g., forwarders, indexers, search heads, etc.) and by providing access to the system to end users via a network. To this end at ¶ [0301] a topology map interface enables users to select edges displayed in a topology map and specify an action to be applied to all nodes connected [or interconnected] by the selected edges. For example, a user may select an edge connecting a 1st node representing a server [or hardware] instance and a 2nd node representing a storage [or service] volume attached [or interconnected] to the server [or hardware] instance, and further select an option to backup [as a service] the connected resources. 
In response, the cloud computing application may send a command to a cloud computing service to backup both the server [or hardware] instance and the storage [or service] volume. As another example, an interface enables users to select a particular node and apply an action to any other node connected [or interconnected] to the particular node by an edge. For example, a user may select a particular node of a topology map representing a subnet, where the particular node is connected to server instances by a plurality of edges. The user may further specify an action (startup, shutdown, backup, etc.) that may then be applied to all of the resources connected to the selected node) “and” “edges representing technical dependencies between the nodes” (Chen ¶ [0278] 1st sentence: As illustrated in Fig.20, the topology map display includes a set of interconnected nodes and edges representing a collection of cloud computing resources. Chen ¶ [0284] similar to displaying info in response to selection of a topology map node, a topology map interface may receive input selecting edges in the map and display information about the selected edge(s). For example, if a particular edge connects a 1st node representing a 1st server instance to another node representing a subnet, information about network traffic [or dependencies] transferred to and from the server instance may be displayed. 
Other examples of [dependencies] information that may be displayed about a particular edge include: information about the origin and/or destination of network traffic, network traffic statistics (ratio of accept, deny, etc.), or any other information related to the relationship between the connected nodes) “wherein flows along the edges model one or more costs of commerce platform system service usage at the cloud services provider system based at least in part on the extracted cloud services provider resource usage information and the commerce platform execution information” (Chen ¶ [0284] 1st-3rd sentences: similar to displaying information in response to the selection of a topology map node, a topology map interface may be configured to receive input selecting one or more edges in the map and to display information about the selected edge(s). For example, if a particular edge connects a first node representing a first server instance to another node representing a subnet, information about network traffic transferred to and from the server instance may be displayed. Examples of information displayed about a particular edge include any other information related to the relationship between the connected nodes, such as at ¶ [0288] a topology map interface may display cost information associated with selected topology map elements. For example, a user may select a node representing a server instance and, in response, a topology map interface may display cost info for the server instance such as, total cost incurred by the server instance, an estimated current bill amount, an average cost for the server instance per month, etc. ¶ [0289] a topology map interface may also display cost efficiency information for selected map elements. For example, cloud computing services offer various types of the same computing resource based on different payment models. 
For example, a cloud service provider may offer 3 different types of server instances such as on-demand instances, reserved and spot instances, the cost benefits of which depend on how the server instances are used. Based on a determined type of server instance and performance information associated with the instance, a topology map interface may display information indicating whether the type of server instance being used is the most cost effective of the available types of server instances. Although the examples above illustrate display of cost information for server instances, similar info may be displayed for selected storage volumes, network interfaces, or any other cloud computing resources. ¶ [0316] 2nd-3rd sentences: a user may desire to view an animated, time-lapse display of a topology map in synchronization with other visualizations that provide performance metrics, cost and/or billing info, or other information related to the depicted resources across the displayed points in time. Examples of other data visualizations that may be displayed in conjunction with an animated topology map include line charts (e.g., displaying CPU utilization levels, network traffic levels, and/or cost information over time). ¶ [0302] a cloud computing application may display info related to portions of a topology map that represent underutilized resources and/or resources used in an inefficient manner from a cost perspective. ¶ [0280] 1st sentence: a topology map may be displayed using particular graphical elements based on data related to the cost utilization, performance, operating state, or other metrics related to each resource. For example, at ¶ [0283] last sentence: If a device is in a particular operating state (e.g., if a server instance is currently shut down), additional info may be displayed such as… cost information associated with the instance, etc. Similarly ¶ [0294]. 
Further see ¶ [0308], ¶ [0316] a cloud computing management application may enable display of an animated topology map (e.g. a time-lapse display) synchronized with other data visualizations. For example, a user may desire to view an animated, time-lapse display of a topology map in synchronization with one or more other visualizations that provide performance metrics, cost and/or billing information, or other information related to the depicted resources across the displayed points in time. Examples of other data visualizations that may be displayed in conjunction with an animated topology map include line charts (e.g., displaying CPU utilization levels, network traffic levels, and/or cost information over time). In this example, a topology mapping module may enable display of a response time line chart to the topology diagram that enables a user to more easily determine how the number of instances affects response time. Specifically, at ¶ [0412] the animated topology map includes nodes representing the cloud computing resources, and edges representing relationships among the cloud computing resources; and wherein at least one node of the plurality of nodes is displayed using a particular graphical element based on cost data associated with the at least one node. ¶ [0305] 1st-2nd, 4th sentences: Fig.26 illustrates a portion of interface 2600 for selecting one or more topology map elements and providing input to export data related to the selected elements. Interface 2600 includes a set of selected nodes and edges 2604 corresponding to a set of server instances and further includes an export panel 2604. A user may use export panel 2604 to specify particular data fields to export (e.g., cost and/or billing information, resource identifiers, etc.)) 
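As a purely illustrative aside (no part of the record; all instance types, prices, and function names below are hypothetical), the cost-efficiency comparison Chen describes at ¶ [0289] — determining whether the pricing model of a server instance is the most cost effective for how the instance is actually used — could be sketched as:

```python
# Hypothetical sketch: pick the most cost-effective pricing model for a
# server instance given its observed monthly utilization. Rates invented.
HOURLY_RATES = {
    "on-demand": 0.35,  # billed per hour actually used, no commitment
    "reserved": 0.22,   # discounted rate, but billed for every hour
    "spot": 0.11,       # deeply discounted, subject to interruption
}

def monthly_cost(model: str, hours_used: int, hours_in_month: int = 720) -> float:
    """Estimate a month's bill under a given pricing model."""
    if model == "reserved":
        # Reserved capacity is billed whether or not the instance runs.
        return HOURLY_RATES[model] * hours_in_month
    return HOURLY_RATES[model] * hours_used

def most_cost_effective(hours_used: int) -> str:
    """Return the cheapest model for the observed utilization."""
    return min(HOURLY_RATES, key=lambda m: monthly_cost(m, hours_used))
```

With these invented rates, spot is cheapest whenever interruption is tolerable, and the reserved rate overtakes on-demand only once utilization exceeds roughly 452 hours/month (0.22 × 720 / 0.35) — consistent with Chen's point that "the cost benefits of which depend on how the server instances are used."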
- “performing, by the commerce platform system, an analysis of the directed graph to attribute the one or more costs of the cloud services provider resource usage information to execution of the services of the commerce platform system at the cloud services provider system”; (Chen ¶ [0316] 2nd-3rd sentences: a user may desire to view an animated, time-lapse display of a topology map in synchronization with other visualizations that provide performance metrics, cost and/or billing info, or other information related to the depicted resources across the displayed points in time. Examples of other data visualizations that may be displayed in conjunction with an animated topology map include line charts (e.g., displaying CPU utilization levels, network traffic levels, and/or cost information over time). ¶ [0302] a cloud computing application may display info related to portions of a topology map that represent underutilized resources and/or resources used in an inefficient manner from a cost perspective. ¶ [0280] various elements of a displayed topology map may be displayed using particular graphical elements based on data related to the performance, operating state, cost utilization, or other metrics related to each resource. As one example, a topology map generation module 1814 displays a topology map where nodes representing server instances currently above a particular CPU utilization level are displayed using one type of graphical element (e.g., a flashing red icon), whereas other nodes representing server instances that are currently below the particular CPU utilization level are displayed using a different graphical element (e.g., a static gray icon). For example, at ¶ [0283] last sentence: If a device is in a particular operating state (e.g., if a server instance is currently shut down), additional info may be displayed such as… cost information associated with the instance, etc.) 
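For illustration only (the service and resource names, shares, and dollar figures are hypothetical and do not come from the claims or the cited references), the claimed graph-based cost attribution can be sketched as a weighted directed graph in which each resource's billed cost is apportioned to the services that used it:

```python
# Hypothetical sketch: attribute per-resource cloud costs to platform
# services over a directed graph. Edge weight = fraction of the resource's
# measured usage consumed by that service.
from collections import defaultdict

# edges: service -> {resource: usage_share}; shares per resource sum to 1
edges = {
    "payments-api": {"server-1": 0.6, "db-volume": 0.5},
    "billing-worker": {"server-1": 0.4, "db-volume": 0.5},
}

# per-resource costs billed by the provider over the period (USD)
resource_cost = {"server-1": 100.0, "db-volume": 40.0}

def attribute_costs(edges, resource_cost):
    """Push each resource's billed cost back along incoming edges to services."""
    service_cost = defaultdict(float)
    for service, uses in edges.items():
        for resource, share in uses.items():
            service_cost[service] += share * resource_cost[resource]
    return dict(service_cost)
```

Because the usage shares for each resource sum to one, the attributed totals conserve the overall bill: every dollar charged to a resource is assigned to exactly one consuming service.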
- “generating, by the commerce platform system, a data package having resource deployment adjustments that alters cloud services provider costs for one or more of the cloud services provider usage information and commerce platform execution information over the period of time”; (Chen ¶ [0243] an organization developing a web-based mobile application may pay to use a number of cloud-based server instances to host and execute application code, purchase storage volumes and database servers to store and process application data, and so forth, typically on a pay as you go or similar payment model. By using computing resources provided by a cloud computing service provider, an organization can avoid some of the upfront investments in hardware and maintenance costs that may otherwise be incurred if the organization purchased computing hardware for itself. For example, at ¶ [0289] 3rd-4th sentences: a cloud service provider may offer 3 or more different types of server instances such as on-demand, reserved and spot instances, the cost benefits of which depend on how the server instances are used. In an embodiment, based on a determined type of server instance and performance information associated with the instance, a topology map interface may display information indicating whether the type of server instance used is the most cost effective of the available types of server instances. See ¶ [0319] for a different example where, at Fig.29B, 2 additional nodes 2906 are displayed in topology map 2902B relative to topology map [or data package] 2902A, representing 2 new server instances that were created in the intervening time period. For example, the new server instances may be a part of an auto scaling group of server instances intended to increase or decrease [or adjust] in number depending on demand. As illustrated by the updated separate visualization 2904B, the addition of the new server instances corresponded with a decrease in aggregate response time of the collection of resources. 
The synchronization of additional visualizations with a time-lapse display of a topology map may provide even greater insight into the cause and effects of certain changes within the topology of a collection of cloud computing resources) “and” - “configuring deployment of the services of the commerce platform system on the hardware computing resources of the cloud services provider system based on the data package” (Chen ¶ [0302] a cloud computing application may display info related to portions of a topology map that represent underutilized resources and/or resources used in an inefficient manner from a cost perspective. For example, cloud computing application 1810 may include cloud computing best practices or guidelines that indicate info related to efficient use of particular types of cloud computing resources. In response to detecting that specified guideline warning conditions are met (e.g., in response to detecting that a server instance of a particular type is being over utilized), alerts or other displays may be presented to the user. For example, at ¶ [0295] 3rd sentence: In the context of a server instance, example triggers include causing the server instance to… shut down in response to detecting that CPU utilization drops below a certain level. See also Chen ¶ [0319] for a different example where, at Fig.29B, 2 additional nodes 2906 are displayed in topology map 2902B relative to topology map [or package] 2902A, representing 2 new server instances that were created in the intervening time period. For example, the new server instances may be a part of an auto scaling [or configuration] group of server instances intended to increase or decrease in number depending on demand. As illustrated by the updated separate visualization 2904B, the addition [or deployment] of the new server instances corresponded with a decrease in aggregate response time of the collection of resources. 
The synchronization of additional visualizations with a time-lapse display of a topology map may provide even greater insight into the cause and effects of certain changes within the topology of a collection of cloud computing resources) * While * Chen recites at ¶ [0288]-¶ [0289] that a user may select a node representing a server instance and, in response, a topology map interface may display cost information for the server instance such as, for example, a total cost incurred by the server instance, an estimated current bill amount, an average cost for the server instance per month, etc. [where] a cloud service provider may offer three or more different types of server instances such as on-demand instances, reserved instances, and spot instances, the cost benefits of which depend on how the server instances are used, and Chen goes so far as to state at ¶ [0301] 1st sentence that: In an embodiment, a topology map interface may enable users to select one or more edges displayed in a topology map and to specify an action to be applied to all of the nodes connected by the selected edges. * Nevertheless * Chen does not explicitly label such nodes as: - “a source node representing the cloud services provider system”; - “a super sink node representing the commerce platform system” as claimed. * However * O’Kelley, in the analogous art of modeling services of business entities across a network, as presented by O’Kelley at ¶ [0038] and Figs.1,3, teaches or at least suggests: - “a source node representing the cloud services provider system”; - “a super sink node representing the commerce platform system”; (O’Kelley ¶ [0016] The transaction may involve the purchase of a commodity or service; the source node may represent a seller of the product or commodity or a provider of the service; and the particular sink node may represent a potential buyer of the product or commodity or a potential consumer of the service. 
¶ [0018] The path between the source node and the particular sink node may pass through interior nodes of the graph. If the transaction involves the purchase of a product, commodity or service, the source node may represent a seller of the product or commodity or a provider of the service, each of the interior nodes may represent an intermediary to facilitate the transaction, and the particular sink node may represent a potential buyer of the product or commodity or a potential consumer of the service. If the transaction involves the sale of a product, commodity or service, the source node represents a buyer of the product or commodity or a consumer of the service, each of the interior nodes represents an intermediary to facilitate the transaction, and the particular sink node represents a potential seller of the product or commodity or a potential provider of the service. For example, at Fig.3 and ¶ [0056] 4th-5th sentences: One of the nodes of graph 300, designated as a source node, represents the end seller. One or more of the nodes of graph 300, each designated as a sink node, represents a potential end buyer. Disposed between a sink node and the source node may be interior nodes (or “int.node”), each representative of an intermediary. ¶ [0067] 2nd sentence: at each branching point among the nodes connected to and in the tier below the given common branching point, and such auctions within the overall hierarchy are sometimes referred to hereinbelow more concisely as being auctions at a branching point and tier (or, similarly, at a tier and branching point), where the tier is intended to refer to the tier of the bidding nodes connected to and below the common branching point). 
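As an illustrative aside (this is the standard flow-network construction, sketched with invented names and figures; it is not asserted to be the applicant's or either reference's implementation), a "source node" and "super sink node" of the kind claimed simply bracket the graph: a single source feeds every resource with its billed cost, and every service drains into one sink, so the entire bill flows from one node to one node:

```python
# Hypothetical sketch of the source / super-sink construction: the provider
# is a single SOURCE node feeding each resource its billed cost, and every
# platform service drains into one SUPER_SINK node.
def build_flow_graph(resource_cost, usage_share):
    """usage_share: (resource, service) -> fraction of the resource used."""
    flow = {}  # (u, v) -> cost flowing along directed edge u -> v
    for resource, cost in resource_cost.items():
        flow[("SOURCE", resource)] = cost  # provider bills the resource
    for (resource, service), share in usage_share.items():
        amount = share * resource_cost[resource]
        flow[(resource, service)] = amount
        # aggregate each service's attributed cost into the super sink
        flow[(service, "SUPER_SINK")] = flow.get((service, "SUPER_SINK"), 0.0) + amount
    return flow

resource_cost = {"server-1": 100.0}
usage_share = {("server-1", "payments-api"): 0.6,
               ("server-1", "billing-worker"): 0.4}
flow = build_flow_graph(resource_cost, usage_share)
```

When the usage shares for each resource sum to one, flow is conserved: the total leaving the source equals the total entering the super sink, which is what makes single-point cost attribution possible.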
It would have been obvious to one skilled in the art, before the effective filing date of the claimed invention, to have modified Chen’s “method” / “non-transitory medium” / “system” to have included O’Kelley’s teachings or suggestions to provide an exchange environment that would have more effectively forged relationships, accessed more products, services, and/or commodities, and allowed participants to buy and sell more efficiently, while at the same time having allowed sellers to control performance globally and maximize their return-on-investment, and buyers to transact with the highest bidder and maximize revenue. (O’Kelley ¶ [0031] in view of MPEP 2143 F and/or G). Further, the claimed invention could have also been viewed as a mere combination of old elements in a similar field of endeavor that models services of business entities across a network. In such a combination each element would have merely performed the same analytical and processing function as it did separately. Thus, one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Chen in view of O’Kelley, the to-be-combined elements would have fit together like pieces of a puzzle in a logical, complementary, technologically feasible and/or economically desirable manner. Thus, it would have been reasoned that the results of the combination would have been predictable (MPEP 2143 A). Claims 2,15 Chen / O’Kelley teaches all the limitations in claims 1,14 above. Chen further teaches “further comprising”: - “receiving, instructions for a reconfiguration based at least in part on the resource deployment adjustments”; (Chen ¶ [0319] 1st-2nd sentences: at Fig.29B, two additional nodes 2906 are displayed in topology map 2902B relative to topology map 2902A, representing 2 new server instances that were created in the intervening time period. 
For example, the new server instances are part of an auto scaling [reconfiguration] group of server instances intended to increase or decrease in number depending on demand) “and” - “executing a reconfiguration at the cloud service provider based on the instructions, the reconfiguration comprising a change to the cloud services provider usage information and/or the commerce platform execution information” (Chen ¶ [0319] 3rd-4th sentences: the addition of the new server instances corresponded with a decrease in the aggregate response time of the collection of resources. The synchronization of additional visualizations with a time-lapse display of a topology map may provide even greater insight into the cause and effects of certain changes within the topology of a collection of cloud computing resources) Claims 3,16 Chen / O’Kelley teaches all the limitations in parent claims 1,14 above. Chen teaches: “wherein generating the data package comprises”: - “generating a graphical user interface (GUI) of the cloud services provider cost information attributed to the commerce platform service usage”; (Chen ¶ [0281] Interacting with topology map displays. ¶ [0282] in addition to the display of topology map elements representing a collection of cloud computing resources, a graphical user interface displaying a topology map may be configured to enable user interaction with the resources represented in the topology map. For example, at ¶ [0288] 1st sentence: a topology map interface may be configured to display cost information associated with one or more selected topology map elements) “and” - “rendering, through the GUI on a user device, the cloud services provider cost information” (Chen ¶ [0288] a topology map interface may display cost information associated with one or more selected topology map elements. 
For example, a user may select a node representing a server instance and, in response, a topology map interface may display cost information for the server instance such as, for example, a total cost incurred by the server instance, an estimated current bill amount, an average cost for the server instance per month, etc. ¶ [0289] a topology map interface may also display cost efficiency information for selected map elements. For example, many cloud computing services offer various types of the same computing resource based on different payment models. For example, a cloud service provider may offer three or more different types of server instances such as on-demand, reserved and spot instances, the cost benefits of which depend on how the server instances are used. In an embodiment, based on a determined type of server instance and performance information associated with the instance, a topology map interface may display information indicating whether the type of server instance being used is the most cost effective of the available types of server instances. Although the examples above illustrate display of cost information for server instances, similar information may be displayed for selected storage volumes, network interfaces, or any other cloud computing resources). Claims 4,17 Chen / O’Kelley teaches all the limitations in parent claims 3,16 above. Chen teaches “wherein generating the GUI comprises”: - “generating data that comprises the data package indicating the cloud service provider system costs attributed to the commerce platform service usage”; (Chen ¶ [0288] a topology map interface may be configured to display cost information associated with one or more selected topology map elements. 
For example, a user may select a node representing a server instance and, in response, a topology map interface may display cost information for the server instance such as, for example, a total cost incurred by the server instance, an estimated current bill amount, an average cost for the server instance per month, etc. ¶ [0289] In an embodiment, a topology map interface may also be configured to display cost efficiency information for selected map elements. For example, many cloud computing services offer various types of the same computing resource based on different payment models. For example, a cloud service provider may offer three or more different types of server instances such as on-demand, reserved and spot instances, the cost benefits of which depend on how the server instances are used. In an embodiment, based on a determined type of server instance and performance information associated with the instance, a topology map interface may display information indicating whether the type of server instance being used is the most cost effective of the available types of server instances. Although the examples above illustrate display of cost information for server instances, similar information may be displayed for selected storage volumes, network interfaces, or any other cloud computing resources) “and” - “generating the GUI to render the data as a scorecard detailing cloud spend for at least one of a commerce platform system service or a team” (Chen ¶ [0287] 2nd sentence: Relative to information panel 2204 in Fig.22, a side panel display 2304 includes a more detailed set of information for a selected resource, including information about relationships to other resources, a line chart indicating a CPU utilization percentage [tally, or scorecard] over time, and activity count [tally, or scorecard] for a particular time period. 
Similarly ¶ [0254] 3rd sentence: interface 1900, for example, comprises a dashboard which displays configuration metrics 1902 (e.g., providing information about a number of configuration changes over time), server instance metrics 1904 (e.g., providing information about a total number of running, stopped, and/or reserved server instances), storage metrics 1906 (e.g., providing information about a total number of volumes in use, a total amount of storage space used, etc.), among other indicators. ¶ [0290] last sentence: aggregate information may include metrics derived from information associated with the selected resources (e.g., an average response time for a set of selected server instances, a total [tally or scorecard] cost incurred by a set of selected resources, a total number of configuration changes made with respect to the selected resources, etc.)) Claim 6. Chen / O’Kelley teaches all the limitations in parent claim 4 above. Chen further teaches: “wherein the data is generated by a service execution tracking system of the commerce platform system” (Chen ¶ [0227] The operation described above illustrates the source of operational latency: streaming mode has low latency (immediate results) and usually has relatively low bandwidth (fewer results can be returned per unit of time) while the concurrently running reporting mode has high latency (it has to perform a lot more processing before returning any results) and usually has relatively high bandwidth (more results can be processed per unit of time). Then at ¶ [0308] cloud computing management application 1810 enables display of animated topology maps which provide visualizations of how a collection of cloud computing resources and relationships among the resources change over time. 
Examples of such time-based topology map displays include, but are not limited to, display of topology maps at specified points in time, animated topology maps displaying an evolution of a collection of resources over a period of time, and comparison topology maps displaying differences between a topology map at two or more particular points in time. For additional details see ¶ [0311]-¶ [0316] with emphasis on ¶ [0315] 2nd sentence: as the playback of a time-lapse progresses, an indication of a date associated with each of the displayed frames of the time-lapse may be displayed in association with the topology map so that a user can better track when the associated events in the time-lapse actually occurred). Claims 7,18. Chen / O’Kelley teaches all the limitations in parent claims 4,17 above. Chen further teaches: “wherein the scorecard comprises a monthly snapshot of consumption of cloud services provider resources by a set of commerce platform system services associated with a commerce platform system developer and/or a team” (Chen ¶ [0288] 2nd sentence: the topology map interface may display cost information for the server instance such as, for example, a total cost incurred by the server instance, an estimated current bill amount, an average cost for the server instance per month, etc. ¶ [0286] 4th sentence: the performance metrics may correspond to a particular time period (e.g., for the past week or past month) or display information for the entire lifespan of the resource. ¶ [0314] In one embodiment, during playback of a topology map time-lapse, a user may provide input to mark two different points in time of the playback (e.g., if a time-lapse corresponds to the changes of a topology map over a month-long time period, a user may select two particular points in time during the month). 
Based on the marked points in time, the user may further provide input to generate a comparison topology map display that displays differences between the topology map at the marked points in time (e.g., indicating nodes and/or edges that are added, removed, and/or modified)). Claims 5,10,19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen / O’Kelley as applied to claims 1, 4, 14 above, and further in view of Iyer et al, US 20160358249 A1, hereinafter Iyer. As per Claims 5,19, Chen / O’Kelley teaches all the limitations in parent claims 4,14 above. Chen recognizes at ¶ [0314] In one embodiment, during playback of a topology map time-lapse, a user may provide input to mark two different points in time of the playback (e.g., if a time-lapse corresponds to the changes of a topology map over a month-long time period, a user may select two particular points in time during the month). Based on the marked points in time, the user may further provide input to generate a comparison topology map display that displays differences between the topology map at the marked points in time (e.g., indicating nodes and/or edges that are added, removed, and/or modified). ¶ [0315] In an embodiment, during playback of the topology map time-lapse, an interface displaying a topology map time-lapse may display an indication of a time associated with each portion of the playback. For example, as the playback of a time-lapse progresses, an indication of a date associated with each of the displayed “frames” of the time-lapse may be displayed in association with the topology map so that a user can better track when the associated events in the time-lapse actually occurred. 
¶ [0316] In one embodiment, a cloud computing management application may enable display of an animated topology map (e.g., a time-lapse display) synchronized with other data visualizations. For example, a user may desire to view an animated, time-lapse display of a topology map in synchronization with one or more other visualizations that provide performance metrics, cost and/or billing information, or other information related to the depicted resources across the displayed points in time. Examples of other data visualizations that may be displayed in conjunction with an animated topology map include line charts (e.g., displaying CPU utilization levels, network traffic levels, and/or cost information over time). In this example, a topology mapping module may enable display of a response time line chart to the topology diagram that enables a user to more easily determine how the number of instances affects response time. * However * Chen / O’Kelley as a combination does not explicitly recite “further comprising”: - “comparing the cloud service provider system costs from a period of time from which the data was generated with the cloud service provider system costs from a prior period of time from which a prior data package was generated”; - “detecting an increase of a cloud system cost attributable to a commerce platform system service update that exceeds a predefined threshold”; “and” - “automatically configuring the commerce platform system service of the commerce platforms system to a state associated with the prior period of time” as explicitly recited. 
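Purely to illustrate the three limitations quoted above (the service names, costs, and the 20% threshold are invented; this is not drawn from the claims or the references), the period-over-period comparison, threshold detection, and automatic reversion could be sketched together as:

```python
# Hypothetical sketch of the claimed check: compare each service's
# attributed cost against the prior period; if the increase attributable
# to the latest update exceeds a threshold, revert to the prior state.
def check_and_rollback(current_cost, prior_cost, threshold=0.20):
    """Return the action per service: 'keep' or 'rollback'.

    threshold: maximum tolerated fractional cost increase (20% here).
    """
    actions = {}
    for service, cost in current_cost.items():
        prior = prior_cost.get(service)
        if prior and (cost - prior) / prior > threshold:
            # Cost jump exceeds the predefined threshold: configure the
            # service back to its state from the prior period.
            actions[service] = "rollback"
        else:
            actions[service] = "keep"
    return actions
```

For example, a service whose attributed cost rose from $100 to $130 (a 30% increase) would be flagged for rollback under a 20% threshold, while a service rising from $58 to $60 would be kept.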
* Nevertheless * Iyer, in analogous management of cloud provider resources, teaches or suggests:
- “comparing the cloud service provider system costs from a period of time from which the data was generated with the cloud service provider system costs from a prior period of time from which a prior data package was generated”; (Iyer ¶ [0006] last sentence: examining the spot price for m3.xlarge nodes in the us-east-1 region from 28 Apr-6 May 2015, one may see that Spot Instances are offered at nearly a 90% discount from the on-demand price ($0.280) of the same instance type. ¶ [0027]: Since use of Spot Instances is highly dependent on bidding an acceptable and stable bid, a user interface may be presented that asks users to set a percentage of the regular on-demand price the user is willing to pay, as well as a timeout on that bid. For example, a 90% bid level for m1.xlarge in the us-east availability zone translates to a maximum bid around 31.5/hour as of June 2015);
- “detecting an increase of a cloud system cost attributable to a commerce platform system service update that exceeds a predefined threshold”; (Iyer ¶ [0007] 3rd-4th sentences: if demand for the Spot Instance increases and the Spot Price exceeds the bid price offered by the user, the Spot Instance will be terminated. One way to reduce the probability of this is to use higher bid prices. For example, at ¶ [0028] 4th sentence: a user may bid at about just above 100%); “and”
- “automatically configuring the commerce platform system service of the commerce platforms system to a state associated with the prior period of time” (Iyer ¶ [0028] 4th-5th sentences: a user may bid at about just above 100%. This generally achieves cost reduction, while occasionally falling back to on-demand instances, such as those at ¶ [0006] last sentence from 28 Apr-6 May 2015. ¶ [0021] 3rd-4th sentences: Auto-scaling clusters use Spot Instances to add more compute power when required, and scale down cluster size when load recedes.
Automatic addition of compute capacity provides an opportunity to use Spot Instances for auto-scaling at significantly lower costs compared to On-Demand Instances. Similarly, ¶ [0025] 4th sentence: Depending on the workload, the cluster may then automatically auto-scale, adding more nodes. Similar auto-scaling at ¶ [0027] 3rd sentence, ¶ [0029] 1st sentence).

It would have been obvious to one skilled in the art, before the effective filing date of the claimed invention, to have modified the Chen / O’Kelley “method”/“non-transitory medium” to have included Iyer’s teachings to have more efficiently handled big data as necessitated by physical constraints and contemporary market forces under powerful big data systems such as Amazon Web Services, as disclosed by Chen at ¶ [0004], O’Kelley ¶ [0036], and corroborated by Iyer ¶ [0003], ¶ [0006], capable of powerfully processing petabytes of data while running on upwards of thousands of machines, while also frugally adapting for underutilized periods of time for better utilization of resources and cost reduction (Iyer ¶ [0004]-¶ [0006] and MPEP 2143 G, F). The predictability of such modification would have been further justified by the broad level of skill of one of ordinary skill in the art articulated by Iyer at ¶ [0018] 2nd sentence, ¶ [0057] 2nd sentence.

Further, the claimed invention could have also been viewed as a mere combination of old elements in a similar service provider based field of endeavor. In such a combination, each element merely would have performed the same benchmarking, contractual, and econometric functions as it did separately. Thus, one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements evidenced by Chen / O’Kelley in view of Iyer, the to-be-combined elements would have fit together like pieces of a puzzle in a logical, complementary, technologically feasible and/or economically desirable manner.
Thus, it would have been reasoned that the results of the combination would have been predictable (MPEP 2143 A).

Claim 10. Chen / O’Kelley teaches all the limitations in parent claim 1 above. Chen / O’Kelley does not explicitly recite as claimed: “further comprising”:
- “detecting that an increased usage of a specific service area of the cloud services provider by the commerce platform is inconsistent with an anticipated increased usage of the cloud services provider resources by the commerce platform system service”; “and”
- “automatically configuring usage of the specific service area of the cloud service provider to a previous usage configuration of the specific service area of the cloud services provider”

Iyer, however, in analogous management of providers’ resources, teaches or suggests:
- “detecting that an increased usage of a specific service area of the cloud services provider by the commerce platform is inconsistent with an anticipated increased usage of the cloud services provider resources by the commerce platform system service”; (Iyer ¶ [0021] 2nd sentence: the workload in a Hadoop cluster may not be uniform, and thus there may be unexpected [or inconsistent] spikes [or increases]. ¶ [0007] 3rd-4th sentences: if demand for the Spot Instance increases and the Spot Price exceeds the bid price offered by the user, the Spot Instance will be terminated. One way to reduce the probability of this is to use higher bid prices. For example, at ¶ [0028] 4th sentence: a user may bid at about just above 100%) “and”
- “automatically configuring usage of the specific service area of the cloud service provider to a previous usage configuration of the specific service area of the cloud services provider” (Iyer ¶ [0028] 4th-5th sentences: bidding at about just above 100% achieves cost reduction, while occasionally falling back [previous] to on-demand instances, such as those at ¶ [0006] last sentence from 28 Apr-6 May 2015.
Specifically, ¶ [0021] 3rd-4th sentences: Auto-scaling clusters use Spot Instances to add more compute power when required, and scale down cluster size when load recedes. Automatic addition of compute capacity provides an opportunity to use Spot Instances for auto-scaling at significantly lower costs compared to On-Demand Instances). Rationales to have modified Chen / O’Kelley with Iyer are above and reincorporated.

----------------------------------------------------------------------------------------------------

Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Chen / O’Kelley as applied to claims 7 and 4 above, and further in view of Fliess, US 20120060142 A1, hereinafter Fliess.

As per Claim 8, Chen / O’Kelley teaches all the limitations in parent claim 7 above. Chen / O’Kelley does not recite: “wherein the commerce platform system services manages a database of commerce platform system service data and the cost is a size of one or more tables within the database”

Fliess, however, in analogous analysis of cost profiles of cloud providers, teaches/suggests:
- “wherein the commerce platform system services manages a database of commerce platform system service data and the cost is a size of one or more tables within the database” (Fliess ¶ [0106] 3rd-4th sentences: As stated supra, the function of COP is to determine how to minimize the cost of input application code. This cost of an application is based on several factors such as database size. Specifically, ¶ [0116] 4th-8th sentences: Using amortized analysis, some of the operations will require greater than constant cost. Thus, no constant payment will be sufficient to cover the worst case cost of an operation, in and of itself.
With proper selection of payment, however, this is not a problem as the expensive operations only occur when there is sufficient payment in the pool to cover their costs. For example, in capacity planning, it is often necessary to create a table before its size is known. In this case, a possible strategy is to double the size of the table when it is full).

It would have been obvious to one skilled in the art, before the effective filing date of the claimed invention, to have modified Chen / O’Kelley’s method to have included Fliess’ teachings in order to have allowed the cloud service provider to budget capacity in an effective manner as necessitated by market forces such as a predicted usage of 10% and reduction in cost of 10% (Fliess ¶ [0118] and MPEP 2143 G, F), while at the same time having avoided slowdowns by improving coding quality through selection of highly efficient algorithms as recommended by the cost-oriented profiler COP (Fliess ¶ [0108], [0129], [0132], [0163]-[0173] in view of MPEP 2143 G). The predictability of such modification is further corroborated by the broad level of skill of one of ordinary skill in the art as demonstrated by Fliess at ¶ [0047], [0067], [0071], without overextending storage but rather with reasonably effective storage capabilities as per Fliess ¶ [0173].

Further, the claimed invention could have also been viewed as a mere combination of old elements in a similar field of endeavor of analyzing cloud infrastructure. In such a combination, each element would have merely performed the same analytical, econometric, and organizational function as it did separately. Thus, one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements evidenced by Chen / O’Kelley in further view of Fliess above, the to-be-combined elements would have fit together like pieces of a puzzle, in a complementary, technologically feasible and/or economically desirable manner.
Thus, it would have been reasoned that the combination results would have been predictable (MPEP 2143 A).

Claim 9. Chen / O’Kelley teaches all the limitations in parent claim 4 above. Furthermore, Chen / O’Kelley recognizes at ¶ [0289] 4th sentence: based on the determined type of server instance and performance information associated with the instance, a topology map interface displays information indicating whether the type of server instance being used is the most cost effective of the available types of server instances.

* However * Chen / O’Kelley does not explicitly recite so as to clearly anticipate: “wherein the scorecard generates an alert as a result of an increase in a cost of a commerce platform system service”.

* Nevertheless * Fliess, in analogous analysis of cost profiles of cloud providers, teaches/suggests: “the scorecard generates an alert as a result of an increase in a cost of a commerce platform system service” (Fliess ¶ [0216] 2nd sentence: the user interface also comprises various views as described supra, such as a real time monitor and analysis views. For example, at Fig.20 and ¶ [0208] 2nd sentence: the cost report from COP comprises a real-time monitor that presents the software project currently being profiled using charts focusing on metrics such as ownership costs, CPU and network utilization. ¶ [0118] 2nd sentence: noting an example where the trace events indicate that certain optimizations reduce cost by 10% and expected usage predicts that utilization will increase by 10% the following month. ¶ [0111]: When a costly bottleneck is localized, the COP recommends a cost saving optimization for that algorithm, tailored to the application's business case. In one embodiment, optimization includes finding a bottleneck (a critical part of the code that is the primary consumer of the needed resource), sometimes known as a hot spot. An example of a bottleneck is using the service bus as opposed to polling a queue at a rate calculated according to the peak time of day.
The well-known Pareto principle can be applied to resource optimization by observing that 80% of resources are typically used by 20% of operations. The COP approximates that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context). More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data, i.e., the setup, initialization time, and constant factors of the more complex algorithm can outweigh the benefit). Rationales to modify/combine Chen / O’Kelley / Fliess are above and reincorporated.

----------------------------------------------------------------------------------------------------

Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Chen / O’Kelley as applied to claim 1 above, and further in view of Lauderdale, US 20120158817 A1, hereinafter Lauderdale.

As per Claim 11, Chen / O’Kelley teaches all the limitations in claim 1 above. Chen further teaches “wherein the directed graph is a flow graph the resource deployment adjustments applied to one or more nodes and/or edges of the flow graph,” (Chen ¶ [0298] 3rd sentence: if a user selects a plurality of nodes corresponding to server instances, a full set of options may be presented related to actions that can be taken with respect to the server instances (e.g., shutdown, restart, etc.). ¶ [0300]: In one embodiment, an interface displaying a topology map may be configured to receive input selecting one or more nodes and moving the nodes from one location on the topology map to another location on the topology map and, in response, causing one or more relationships between the nodes to change.
For example, a user may select a particular node representing a server instance associated with a first subnet and drag and drop the particular node at a location near a second subnet. In response, a request may be sent to an associated cloud computing service to move the server instance from the first subnet to the second subnet. Similarly ¶ [0301], ¶ [0369]. Also ¶ [0319]: two additional nodes displayed in the topology map represent two new server instances that were created in the intervening time period. For example, the new server instances may be a part of an auto scaling [as an example of deployment adjustment for the] group of server instances that are intended to increase or decrease in number depending on demand. As illustrated by the updated separate visualization 2904B, the addition of the new server instances corresponded with a decrease in the aggregate response time of the collection of resources. The synchronization of additional visualizations with a time-lapse display of a topology map may provide even greater insight into the cause and effects of certain changes within the topology of a collection of cloud computing resources)

Chen / O’Kelley does not recite “wherein the analysis of the directed graph comprises solving a maximum flow analysis for the flow graph having one or more adjusted nodes and/or edges of the flow graph” as claimed.

However, Lauderdale, in the analogous art of analyzing distributed computer architectures, teaches or suggests:
- “wherein the analysis of the directed graph comprises solving a maximum flow analysis for the flow graph having one or more adjusted nodes and/or edges of the flow graph”. (Lauderdale ¶ [0005] 1st sentence: high-end computer architectures include graph analysis applications such as maximum flow analysis).
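Lauderdale is cited only for the general notion of maximum flow analysis; neither reference discloses source code. For orientation, the textbook Edmonds-Karp formulation of that analysis over a dict-based capacity graph can be sketched as follows (node names are hypothetical and chosen only to echo the claim's provider/platform framing):

```python
# Illustrative Edmonds-Karp maximum flow (BFS augmenting paths) over a
# dict-of-dicts capacity graph. Not from Chen, O'Kelley, or Lauderdale.
from collections import deque

def max_flow(capacity, source, sink):
    # Build residual capacities, adding zero-capacity reverse edges.
    residual = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():
        for v in adj:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left
        # Recover the path, find its bottleneck, and augment along it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Hypothetical provider-to-platform flow graph through two services.
caps = {"csp": {"svc_a": 3, "svc_b": 2},
        "svc_a": {"platform": 2},
        "svc_b": {"platform": 2},
        "platform": {}}
total = max_flow(caps, "csp", "platform")
```

In this toy graph the "svc_a" path is capped at 2 and the "svc_b" path at 2, so the maximum flow from "csp" to "platform" is 4; adjusting a node or edge capacity and re-solving is what the claim's "adjusted nodes and/or edges" analysis would correspond to.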
It would have been obvious to one skilled in the art, before the effective filing date of the claimed invention, to have modified the Chen / O’Kelley “method”/“non-transitory medium” to have included the teachings of Lauderdale in order to have provided an efficient distributed computing environment that makes optimal use of the underlying hardware while providing a usable abstract model of computation for writers of application code, while maintaining coherent control, monitoring, reliability, and security (Lauderdale ¶ [0004] 2nd sentence and MPEP 2143 G) as previously taught by Chen / O’Kelley. Specifically, context-sensitive and insensitive data would have been attached to objects dynamically by creating tags, which may act as sideband comments on the objects. Tags would have been used to provide hints to the runtime, such as where a particular object should have been placed, how long a codelet would have been expected to run, or what modifications to the environment would have been preferable or beneficial for the codelet. They may also have been used to effect third party communication channels between application components that use an object, such as recording the object's placement or usage history (Lauderdale ¶ [0097] 2nd-4th sentences and MPEP 2143 G).

Further, the claimed invention could have also been viewed as a mere combination of old elements in a similar field of endeavor dealing with analyzing distributed or cloud computer architecture. In such a combination, each element would have merely performed the same analytical and graphical functions as it did separately. Thus, one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Chen / O’Kelley in further view of Lauderdale, the to-be-combined elements would have fit together like pieces of a puzzle in a logical, complementary, technologically feasible and/or economically desirable manner.
Thus, it would have been reasoned that the results of the combination would have been predictable (MPEP 2143 A).

Claim 12. Chen / O’Kelley / Lauderdale teaches all the limitations in claim 11 above. Further, Chen teaches or suggests: “wherein the directed graph comprises”
- “a source node associated with the cloud services provider” (Chen ¶ [0284] 2nd-3rd sentences: a 1st node representing a 1st server instance. Information displayed about a particular edge includes information about the origin. ¶ [0326] 3rd sentence: begins at a 1st point corresponding to a start timestamp)
- “a super sink node associated with the commerce platform system” (Chen ¶ [0284] 2nd-3rd sentences: another node representing a subnet; information about network traffic transferred to and from the server instance may be displayed. Information displayed about the particular edge includes information about the destination of network traffic, network traffic statistics, or any other information related to the relationship between the connected nodes. Chen ¶ [0326] 3rd sentence: noting another example of an end at a second point corresponding to an end timestamp of the event), “and”
- “a plurality of intermediate nodes that represent infrastructure of the commerce platform system and are associated with the services of the commerce platform” (Chen Figs. 5, 9C, 24-30 and ¶ [0257] last sentence: a number of graphically displayed nodes representing individual cloud computing resources. Chen ¶ [0294] 3rd sentence: each of the selected nodes and edges 2402 in Fig.24 represents a separate server instance and associated relationships),
“and wherein edges between any two nodes in the directed graph are directed and are labeled with a timestamp” (Chen ¶ [0278] 1st sentence: a set of interconnected nodes and edges representing a collection of cloud computing resources. ¶ [0257] last sentence: the edges represent relationships among the resources.
¶ [0301] 2nd sentence: noting an example of an edge connecting a 1st node representing a server instance and a 2nd node representing a storage volume attached to the server instance. ¶ [0314] 2nd sentence: differences between the topology map at the marked points in time indicating nodes and/or edges. ¶ [0335] last two sentences: a user selects nodes and/or edges representing server instances and the network link between the instances. In response to the user's selection of topology map elements, a circular timeline visualization may be automatically generated and displayed based on timestamped events associated with the selected elements. ¶ [0320] last sentence: edges are displayed using particular colors or graphics to indicate that the corresponding computing resources were created, deleted, and/or modified during the time period between the earlier point in time and the later point in time), “a type of cloud service” (Chen ¶ [0273] 2nd sentence: a topology map visualization of edges, each representing a relationship between 2 or more cloud computing resources. ¶ [0278] 1st sentence: as illustrated in Fig.20, the topology display includes edges representing a collection of cloud computing resources. For example, Fig.20 & ¶ [0258] last sentence: where search panel 2006 indicates 12 different virtual private clouds, 60 server instances, 24 subnets, etc., are available for display in the map. ¶ [0284] 2nd sentence: if a particular edge connects a first node representing a first server instance to another node representing the subnet, info about network traffic transferred to and from the server instance may be displayed.
¶ [0289] 3rd sentence: For example, a cloud service provider may offer three or more different types of server instances such as "on-demand" instances, reserved instances, and spot instances, the cost benefits of which depend on how the server instances are used), “a consumer of the cloud service” (Chen ¶ [0273] 2nd sentence: a topology of edges, each representing a relationship between two or more cloud computing resources. Chen ¶ [0120] 1st sentence: each data source broadly represents a distinct source of data that can be consumed by system 108. Fig.5 Customer ID. ¶ [0150] 2nd, 4th sentences: search head 210 allows the vendor's administrator to search the log data for the order number and corresponding customer ID number of the person placing the order; the customer ID field value matches across the log data from the 3 systems stored at the one or more indexers 206), “an identifier of the cloud service” (Chen ¶ [0149] 1st sentence: a user submits an order for merchandise using a vendor’s shopping application program 501 running on the user's system. ¶ [0283] 3rd sentence: a unique identifier generated by a cloud computing service for the resource. ¶ [0285] 3rd sentence: information panel 2204 includes various info about the server instance represented by the selected node including an identifier of the server instance, a type of the resource, a name or label for the server instance, an account ID associated with the server instance, etc.), “and a cost of the cloud service consumed between the two nodes” (Chen ¶ [0255] 2nd sentence: noting each edge connecting 2 nodes represents a relationship between resources corresponding to the 2 nodes. For example, ¶ [0305] 2nd-4th sentences: the selected edges 2604 correspond to server instances and further include an export panel 2604 corresponding to respective cost. ¶ [0412]: at least one node of the plurality of nodes is displayed using a particular graphical element based on cost data associated with the at least one node).
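The claim 12 structure mapped above, a directed graph with a provider source node, a commerce-platform super sink, intermediate service nodes, and edges labeled with a timestamp, service type, consumer, identifier, and cost, can be made concrete with a small sketch. The field names and sample values are hypothetical, chosen only to mirror the claim language:

```python
# Hypothetical data-structure sketch of the claim 12 directed graph:
# source node, super sink node, intermediate service nodes, and labeled edges.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Edge:
    src: str
    dst: str
    timestamp: str       # when the usage occurred (the claimed edge label)
    service_type: str    # e.g. "compute", "storage"
    consumer: str        # which service consumed the cloud service
    service_id: str      # provider-assigned identifier of the cloud service
    cost: float          # cost of the cloud service consumed between the nodes

@dataclass
class ServiceGraph:
    source: str = "cloud_services_provider"
    super_sink: str = "commerce_platform_system"
    edges: list = field(default_factory=list)

    def add_edge(self, **kwargs):
        self.edges.append(Edge(**kwargs))

    def cost_into(self, node):
        """Total cost carried by edges terminating at a given node."""
        return sum(e.cost for e in self.edges if e.dst == node)

g = ServiceGraph()
g.add_edge(src="cloud_services_provider", dst="billing_service",
           timestamp="2020-06-18T00:00:00Z", service_type="compute",
           consumer="billing_service", service_id="i-abc123", cost=1.25)
g.add_edge(src="billing_service", dst="commerce_platform_system",
           timestamp="2020-06-18T00:00:00Z", service_type="internal",
           consumer="platform", service_id="svc-001", cost=1.25)
```

Every path in such a graph runs from the provider source through intermediate platform services to the super sink, with each labeled edge carrying the attributes the claim recites.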
Claim 13. Chen / O’Kelley / Lauderdale teaches all the limitations in claim 11 above. Chen / O’Kelley does not teach “wherein the analyzing further comprises”:
- “performing a maximum flow analysis using the directed flow graph having the one or more adjusted nodes and/or edges of the flow graph to determine an adjusted usage of the service of the commerce platform system across the directed graph, wherein the directed graph is decomposed into a plurality of spanning trees during the maximum flow analysis for attributing costs to individual products, software development groups, customers of the commerce platform system, or a combination thereof” as claimed.

Lauderdale, however, in the analogous art of analyzing distributed computer architectures, teaches or suggests “wherein the analyzing further comprises”:
- “performing a maximum flow analysis using the directed flow graph having the one or more adjusted nodes and/or edges of the flow graph to determine an adjusted usage of the service of the commerce platform system across the directed graph” (Lauderdale ¶ [0004] 3rd sentence: languages with clear and reasonable semantics will be necessary so that a reasonably large subset of application developers can work productively in the new environment. In addition, compilers or interpreters that support efficient distributed execution of application code will be required, and may necessitate related development tools to provide developers with options and insight regarding the execution of application code.
¶ [0005] 1st sentence: high-end computer (HEC) architectures include graph analysis applications such as maximum flow analysis), “wherein the directed graph is decomposed into a plurality of spanning trees during the maximum flow analysis” (Lauderdale ¶ [0045] 3rd sentence: these groupings may be nested into a tree-like structure, or locale tree, that may be used to describe communication characteristics for software executing on a distributed platform) “for attributing costs to individual products, software development groups” (Lauderdale ¶ [0053] 1st-3rd sentences: to this end, complexes may utilize several layers of context, which may be arranged in a tree-like structure overlaid on top of the locale tree; Fig.5 shows an example of context layers. The leaves of the context layer tree may represent the most localized context blocks, which may correspond to the contexts that codelets see most readily and directly. Less localized context blocks can be referenced through parent pointers from the leaf contexts (although programming languages that generate code compatible with embodiments of the runtime system may obscure references through these pointers to higher-level layers). ¶ [0056] 1st-3rd sentences: to facilitate tree-like conversations between/among interacting resources, an embodiment of the invention may use keys to mark data/control exchanges with resources. A key acts as a descriptor for the state of a series of interactions with a resource, or as a selector for such a descriptor (a direct or indirect pointer to the descriptor). Each put and get operation can accept and/or generate key values, according to the resource behavior), “customers of the commerce platform system, or a combination thereof”

Rationales to have modified/combined Chen / O’Kelley / Lauderdale are above and reincorporated.
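Claim 13's attribution step can be illustrated in simplified form. The sketch below uses a path decomposition of an acyclic flow as a stand-in for the claimed spanning-tree decomposition, billing each peeled-off path's flow units to the owner (product, team, or customer) of its final service node; all names and figures are hypothetical:

```python
# Hypothetical illustration of flow-based cost attribution: decompose a flow
# assignment into source->sink paths and bill each path's units to an owner.
# Assumes an acyclic flow (decomposition of a valid DAG flow terminates).
def attribute_costs(flow_edges, source, sink, owners):
    """flow_edges: {(u, v): flow_units}; owners: node -> owning group."""
    residual = dict(flow_edges)
    totals = {}
    while True:
        # Walk a path of positive flow from the source toward the sink.
        path, node = [], source
        while node != sink:
            nxt = next((v for (u, v), f in residual.items()
                        if u == node and f > 0), None)
            if nxt is None:
                return totals  # no positive flow left out of this node
            path.append((node, nxt))
            node = nxt
        # Peel off the path's bottleneck and bill the owner of the last
        # intermediate node before the sink.
        units = min(residual[e] for e in path)
        for e in path:
            residual[e] -= units
        owner = owners[path[-1][0]]
        totals[owner] = totals.get(owner, 0) + units

# Illustrative flow: two services route provider usage to the platform.
flow = {("csp", "svc_a"): 2, ("svc_a", "platform"): 2,
        ("csp", "svc_b"): 1, ("svc_b", "platform"): 1}
owners = {"svc_a": "payments_team", "svc_b": "fraud_team"}
per_owner = attribute_costs(flow, "csp", "platform", owners)
```

Here the 2 units flowing through "svc_a" are attributed to "payments_team" and the 1 unit through "svc_b" to "fraud_team", mirroring (in miniature) attribution of costs to individual products, software development groups, or customers.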
----------------------------------------------------------------------------------------------------

Conclusion

The following art is made of record and considered pertinent to Applicant’s disclosure:
US 20210294651 A1 Cost-Savings Using Ephemeral Hosts In Infrastructure As A Service Environments
US 11100586 B1 Systems and methods for callable options values determination using deep machine learning
US 20210256066 A1 Consumption Unit Estimation Analytics for Prescribing Cloud Computing Resources Utilization
US 20210255902 A1 Cloud Computing Burst Instance Management
US 20210240539 A1 Determining and implementing a feasible resource optimization plan for public cloud consumption
US 20210232479 A1 Predictive reserved instance for hyperscaler management
US 20210208859 A1 System for managing multiple clouds and method thereof
US 20210200594 A1 Resource reservation management device and resource reservation management method
US 20210182108 A1 Optimizing distribution of heterogeneous software process workloads
US 20210112116 A1 Forecasting and reservation of transcoding resources for live streaming
US 20200314174 A1 Systems, apparatus and methods for cost and performance-based management of resources in a cloud environment
US 20200314175 A1 Systems, apparatus and methods for cost and performance-based management of resources in a cloud environment
US 20200089515 A1 dynamic application migration between cloud providers
US 20200059539 A1 cloud-native reservoir simulation
US 20190179675 A1 Prescriptive Analytics Based Committed Compute Reservation Stack for Cloud Computing Resource Scheduling
US 10237135 B1 Computing optimization
US 20180375787 A1 providing high availability for a thin-provisioned container cluster
US 20180332138 A1 dynamic weighting for cloud-based provisioning systems
US 20180321928 A1 software asset management
US 10089476 B1 Compartments
US 10067801 B1 Acquisition and maintenance of compute capacity
US 20180188899 A1 cloud-based event calendar synching and notification
US 10002026 B1 Acquisition and maintenance of dedicated, reserved, and variable compute capacity
US 20180165619 A1 activity based resource allocation modeling
US 20180137445 A1 identifying resource allocation discrepancies
US 20180082231 A1 models for visualizing resource allocation
US 20180060106 A1 multi-tiered-application distribution to resource-provider hosts by an automated resource-exchange system
US 9774489 B1 Allocating computing resources according to reserved capacity
US 9747635 B1 Reserved instance marketplace
US 20170185929 A1 resource allocation forecasting
US 20170178041 A1 completion contracts
US 20170109815 A1 ON demand auctions of cloud resources (bundles) in hybrid cloud environments
US 20170093645 A1 Displaying Interactive Topology Maps Of Cloud Computing Resources
US 20170091689 A1 continuously variable resolution of resource allocation
US 20170091678 A1 intermediate resource allocation tracking in data models
US 20170093642 A1 resource planning system for cloud computing
US 20170085447 A1 adaptive control of data collection requests sent to external data sources
US 20170085446 A1 Generating And Displaying Topology Map Time-Lapses Of Cloud Computing Resources
US 20170060569 A1 maintenance of multi-tenant software programs
US 20170004430 A1 infrastructure benchmarking based on dynamic cost modeling
US 9529863 B1 Normalizing ingested data sets based on fuzzy comparisons to known data sets
US 20160321115 A1 cost optimization of cloud computing resources
US 9448824 B1 Capacity availability aware auto scaling
US 20160253339 A1 data migration systems and methods including archive migration
US 9384511 B1 Version control for resource allocation modeling
US 9350561 B1 Visualizing the flow of resources in an allocation model
US 20160019636 A1 cloud service brokerage service store
US 20150341230 A1 advanced discovery of cloud resources
US 20150341240 A1 assessment of best fit cloud deployment infrastructures
US 20150188927 A1 cross provider security management functionality within a cloud service brokerage platform
US 20150156065 A1 policy management functionality within a cloud service brokerage platform
US 20150058486 A1 instantiating incompatible virtual compute requests in a heterogeneous cloud environment
US 20150019301 A1 system and method for cloud capability estimation for user application in black-box environments using benchmark-based approximation
US 20150012328 A1 recursive processing of object allocation rules
US 20140310418 A1 distributed load balancer
US 20140279676 A1 automated business system generation
US 20140278807 A1 cloud service optimization for cost, performance and configuration
US 20140278808 A1 implementing comparison of cloud service provider package offerings
US 20140214496 A1 dynamic profitability management for cloud service providers
US 20140108215 A1 system and methods for trading
US 20130346390 A1 Cost Monitoring and Cost-Driven Optimization of Complex Event Processing System
US 20130282537 A1 utilizing multiple versions of financial allocation rules in a budgeting process
US 20130282540 A1 Cloud computing consolidator billing systems and methods
US 20130201193 A1 system and method for visualizing trace of costs across a graph of financial allocation rules
US 20130179371 A1 scheduling computing jobs based on value
US 8484355 B1 System and method for customer provisioning in a utility computing platform
US 20130060595 A1 inventory management and budgeting system
US 20130042003 A1 smart cloud workload balancer
US 20120311106 A1 systems and methods for self-moving operating system installation in cloud-based network
US 20120311153 A1 systems and methods for detecting resource consumption events over sliding intervals in cloud-based network
US 20120311154 A1 systems and methods for triggering workload movement based on policy stack having multiple selectable inputs
US 20120311571 A1 systems and methods for tracking cloud installation information using cloud-aware kernel of operating system
US 20120304170 A1 systems and methods for introspective application reporting to facilitate virtual machine movement between cloud hosts
US 20120304191 A1 systems and methods for cloud deployment engine for selective workload migration or federation based on workload conditions
US 20120233547 A1 platform for rapid development of applications
US 20120226796 A1 systems and methods for generating optimized resource consumption periods for multiple users on combined basis
US 20120226808 A1 systems and methods for metering cloud resource consumption using multiple hierarchical subscription periods
US 8260959 B2 Network service selection
US 20120221454 A1 systems and methods for generating marketplace brokerage exchange of excess subscribed resources using dynamic subscription periods
US 20120185413 A1 Specifying Physical Attributes of a Cloud Storage Device
US 20120131591 A1 method and apparatus for clearing cloud compute demand
US 20120130781 A1 cloud service information overlay
US 20120130873 A1 systems and methods for generating multi-cloud incremental billing capture and administration
US 20120131194 A1 systems and methods for managing subscribed resource limits in cloud network using variable or instantaneous consumption tracking periods
US 20120131195 A1 systems and methods for aggregating marginal subscription offsets in set of multiple host clouds
US 20120131594 A1 systems and methods for generating dynamically configurable subscription parameters for temporary migration of predictive user workloads in cloud network
US 20120124211 A1 system and method for cloud enterprise services
US 8175863 B1 Systems and methods for analyzing performance of virtual environments
US 20120066018 A1 automatic and semi-automatic selection of service or processing providers
US 20120066020 A1 multi-tenant database management for sla profit maximization
US 20110295999 A1 methods and systems for cloud deployment analysis featuring relative cloud resource importance
US 20110219031 A1 systems and methods for sla-aware scheduling in cloud computing
US 7992152 B2 Server/client system, load distribution device, load distribution method, and load distribution program
US 20110167034 A1 system and method for metric based allocation of costs
US 20110154353 A1 Demand-Driven Workload Scheduling Optimization on Shared Computing Resources
US 20110145094 A1 cloud servicing brokering
US 20110099403 A1 server management apparatus and server management method
US 7917617 B1 Mitigating rebaselining of a virtual machine (VM)
US 20110055385 A1 enterprise-level management, control and information aspects of cloud console
US 20110022861 A1 reducing power consumption in data centers having nodes for hosting virtual machines
US 20110016214 A1 system and method of brokering cloud computing resources
US 7870044 B2 Methods, systems and computer program products for a cloud computing spot market platform
US 20100318454 A1 Function and Constraint Based Service Agreements
US 20100306382 A1 server consolidation using virtual machine resource tradeoffs
US 20100293163 A1 operational-related data computation engine
US 20100250642 A1 Adaptive Computing Using Probabilistic Measurements
US 20100169477 A1 systems and methods for dynamically provisioning cloud computing resources
US 20100125473 A1 cloud computing assessment tool
US 20100005173 A1 Method, system and computer program product for server selection, application placement and consolidation
US 20090300173 A1 Method, System and Apparatus for Managing, Modeling, Predicting, Allocating and Utilizing Resources and Bottlenecks in a Computer Network
US 20090300608 A1 methods and systems for managing subscriptions for cloud-based virtual machines
US 20090276771 A1 Globally Distributed Utility Computing Cloud
US 20090231152 A1
system for monitoring the energy efficiency of technology components US 20090216580 A1 Computer-Implemented Systems And Methods For Partial Contribution Computation In ABC/M Models US 20090201293 A1 system for providing strategies for increasing efficiency of data centers US 20090204382 A1 SYSTEM for assembling behavior models of technology components US 20090063251 A1 System And Method For Simultaneous Price Optimization And Asset Allocation To Maximize Manufacturing Profits US 20090018880 A1 Computer-Implemented Systems And Methods For Cost Flow Analysis US 20080295096 A1 dynamic placement of virtual machines for managing violations of service level agreements (slas) US 20080222638 A1 Systems and Methods for Dynamically Managing Virtual Machines US 20080184254 A1 systems, methods and apparatus for load balancing across computer nodes of heathcare imaging devices US 20080065435 A1 Computer-implemented systems and methods for reducing cost flow models US 20070271203 A1 Methods and systems for cost estimation based on templates US 20060212334 A1 On-demand compute environment US 20060167703 A1 Dynamic resource allocation platform and method for time related resources US 20050120032 A1 Systems and methods for modeling costed entities and performing a value chain analysis US 20050005012 A1 Capacity planning for server resources US 20040186762 A1 System for performing collaborative tasks US 20030236721 A1 Dynamic cost accounting US 20030172018 A1 Automatically allocating and rebalancing discretionary portfolios US 20030158724 A1 Agent system supporting building of electronic mail service system US 6578005 B1 Method and apparatus for resource allocation when schedule changes are incorporated in real time US 20030083888 A1 Method and apparatus for determining a portion of total costs of an entity US 6466980 B1 System and method for capacity shaping in an internet environment US 6249769 B1 Method, system and program product for evaluating the business requirements of an 
enterprise for generating business solution deliverables US 6208993 B1 Method for organizing directories US 5802508 A Reasoning with rules in a multiple inheritance semantic network with exceptions US 5799286 A Automated activity-based management system US 5539883 A Load balancing of network by maintaining in each computer information regarding current load on the computer and load on some other computers in the network Any inquiry concerning this communication or earlier communications from the examiner should be directed to OCTAVIAN ROTARU whose telephone number is (571)270-7950. The examiner can normally be reached on 571.270.7950 from 9AM to 6PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PATRICIA H MUNSON, can be reached at telephone number (571)270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated- interview-request-air-form. /OCTAVIAN ROTARU/ Primary Examiner, Art Unit 3624 February 2nd, 2025 1 Alice, 573 U.S. 208, 224, 110 USPQ2d at 1984, 1985 (citing Parker v. Flook, 437 U.S. 584, 593, 198 USPQ 193, 198 (1978) and Mayo, 566 U.S. at 72, 101 USPQ2d at 1966). 2 MPEP 2106.04 I, ¶3, 5th sentence: Mayo, 566 U.S. 
at 79-80, 86-87, 101 USPQ2d at 1968-69, 1971 (claims directed to "narrow laws that may have limited applications" held ineligible) 3 According to MPEP 2106.04(a): “…examiners should identify at least one abstract idea grouping, but preferably identify all groupings to the extent possible…”. 4 OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1364, 115 U.S.P.Q.2d 1090, 1092 (Fed Cir. 2015);  In re Smith, 815 F.3d 816, 818-19, 118 USPQ2d 1245, 1247 (Fed. Cir. 2016);  In re Greenstein, 774 Fed. Appx. 661, 664, 2019 USPQ2d 212400 (Fed Cir. 2019) (non-precedential) 5 MPEP 2106.04(a)(2) III 6 FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016); 7 Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1370-71, 115 USPQ2d 1636, 1642 (Fed. Cir. 2015) 8 Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362, TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016), OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) 9 Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1331, 115 USPQ2d 1681, 1699 (Fed. Cir. 2015). 10 Content Extraction and Transmission, LLC v. Wells Fargo Bank, 776 F.3d 1343, 1348, 113 USPQ2d 1354, 1358 (Fed. Cir. 2014) 11 Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1331, 115 USPQ2d 1681, 1699 (Fed. Cir. 2015). 12 OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93 13 Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014), Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 14 Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1244, 120 USPQ2d 1844, 1856 (Fed. Cir. 2016)

Prosecution Timeline

Jun 30, 2023
Application Filed
Feb 19, 2025
Non-Final Rejection — §101, §103
May 09, 2025
Interview Requested
May 15, 2025
Examiner Interview Summary
May 15, 2025
Applicant Interview (Telephonic)
May 16, 2025
Response Filed
Jun 07, 2025
Final Rejection — §101, §103
Sep 10, 2025
Request for Continued Examination
Oct 02, 2025
Response after Non-Final Action
Feb 02, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12602627
SOLVING SUPPLY NETWORKS WITH DISCRETE DECISIONS
2y 5m to grant · Granted Apr 14, 2026
Patent 12555059
System and Method of Assigning Customer Service Tickets
2y 5m to grant · Granted Feb 17, 2026
Patent 12547962
GENERATIVE DIFFUSION MACHINE LEARNING FOR RESERVOIR SIMULATION MODEL HISTORY MATCHING
2y 5m to grant · Granted Feb 10, 2026
Patent 12450534
HETEROGENEOUS GRAPH ATTENTION NETWORKS FOR SCALABLE MULTI-ROBOT SCHEDULING
2y 5m to grant · Granted Oct 21, 2025
Patent 12406213
SYSTEM AND METHOD FOR GENERATING FINANCING STRUCTURES USING CLUSTERING
2y 5m to grant · Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
28%
Grant Probability
67%
With Interview (+38.9%)
4y 2m
Median Time to Grant
High
PTA Risk
Based on 409 resolved cases by this examiner. Grant probability derived from career allow rate.
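The projection figures above follow directly from the raw counts reported on this page. A minimal sketch of the arithmetic, assuming the dashboard divides career grants by resolved cases and taking the 67% with-interview rate as given (the interviewed-cohort counts are not shown here):

```python
# Reproducing this page's derived statistics from its raw counts.
# Assumption: "Grant Probability" = career grants / career resolved cases;
# the with-interview allow rate (reported as 67%) is taken as given.

granted = 116        # career grants for this examiner
resolved = 409       # career resolved cases

allow_rate = granted / resolved               # baseline grant probability
with_interview = 0.67                         # page-reported, already rounded
interview_lift = with_interview - allow_rate  # percentage-point lift

print(f"Career allow rate: {allow_rate:.1%}")       # 28.4%, shown as 28%
print(f"Interview lift:    {interview_lift:+.1%}")  # +38.6% from rounded inputs
```

The page's +38.9% lift suggests the unrounded with-interview rate sits slightly above 67%; the small gap in this sketch comes only from rounding the inputs.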
