Prosecution Insights
Last updated: April 19, 2026
Application No. 17/745,166

Centralized Control For Computing Resource Management

Status: Final Rejection (§102 / §103)
Filed: May 16, 2022
Examiner: CHEN, ZHI
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 2 (Final)

Grant Probability: 61% (Moderate); 99% with interview
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 61% of resolved cases (152 granted / 250 resolved; +5.8% vs TC avg)
Interview Lift: +40.5% on resolved cases with interview (strong)
Avg Prosecution (typical timeline): 3y 3m
Currently Pending: 27
Total Applications: 277 (career history, across all art units)

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 6.9% (-33.1% vs TC avg)
§112: 25.2% (-14.8% vs TC avg)

Based on career data from 250 resolved cases; Tech Center averages are estimates.

Office Action

Rejection basis: §102 / §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to Applicant’s Amendment filed on 10/28/2025. Claims 1-20 are presented for examination. Claims 1-3, 5-12 and 20 have been amended. Applicant’s amendments to the claims have overcome the §112 rejections set forth in the non-Final Office Action mailed 7/30/2025.

Examiner Notes

The Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of each passage as taught by the prior art or discussed by the Examiner.

Claim Objections

Claims 1-10 are objected to because of the following informalities: “the supply and demand signaling” at line 8 of claim 1 should read: the supply signaling and the demand signaling. Claims 2-10 are objected to for failing to cure the deficiency of their respective parent claim by dependency. “the supply and demand signaling” at line 2 and in the last two lines of claim 3 should read: the supply signaling and the demand signaling. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Butler et al. (US 20220197773 A1, hereafter Butler) in view of Huang et al. (CN 109062683 A, published 12/21/2018, English translation provided by Google Patents, hereafter Huang).
Regarding claim 1, Butler discloses: A method comprising: receiving, by one or more processors, supply signaling and demand signaling, each of the supply signaling and the demand signaling indicating one or more changes in available hardware computing resource inventory in a cloud computing environment including multiple virtual machines (VMs) of different VM types, wherein each VM type is associated with a different set of computing hardware resources; updating, by the one or more processors, a centralized record of available computing hardware resource inventory in response to the supply and demand signaling (see [0164]-[0165], [0168], [0174]; “an inventory catalog subsystem 740 persists a catalog of available resources and configurations that can be added to the existing computing infrastructure, along with the times at which any of those resources are requested to be deployed/placed in the infrastructure”, “Collating inputs from the resource modeler and the inventory catalog to continuously provide an updated capacity assessment for all infrastructural resources”, “Estimating business/purchasing decisions by conducting ‘what-if’ scenarios based available inventory configurations and resource configuration updates that can inform an update to the future inventory”. A centralized record/catalog is continuously updated.
For claimed supply signaling and demand signaling, see [0139], [0177], [0194], [0404], [0497]; “allowing for customers and service providers to automatically plan for optimal capacity”, “long-term purchases of new computing hardware to bring additional capacities into the system, or short-term purchases such as renting capacities from cloud providers”, “a first portion of the requisite resource capacities should be allocated in certain resources that are already deployed in the computing infrastructure, while a second portion of the requisite resource capacities should be allocated in new resources that can be added to the computing infrastructure from the resource inventory catalog (e.g., physical resources available for purchase or logical resources available to rent)”, “ A service provider (e.g., an owner/operator of server 3050, CN 3042, and/or cloud 3044) may deploy the IoT devices in the IoT group to a particular area (e.g., a geolocation, building, etc.) in order to provide the one or more services”. The information related to new computing hardware resources added or offered by the resource/service/cloud providers can be considered as claimed supply signaling and the information related to requesting or using already deployed resources from the users/customers/tenants can be considered as claimed demand signaling. Also see [0242]-[0247] for claimed cloud computing environment; “This solution proposes a novel methodology and algorithm to solve the technical problem … incrementing the flexibility of cloud orchestrators to choose the right placement option … place workloads distributed over edge, core network, and cloud resources”. 
For claimed “wherein each VM type is associated with a different set of computing hardware resources”, see [0148]; “details on how a set of virtual machines (VMs) are being deployed on a physical server (e.g., VM sizes/configurations pinned to particular physical cores)”, i.e., at least different VM sizes having different sets of hardware resources are considered as different VM types); receiving, by the one or more processors, a first capacity planning signal from a first capacity management subsystem, and a second capacity planning signal from a second capacity management subsystem, the first capacity management subsystem assigned to perform a first capacity management action type for managing the computing hardware resource inventory across the computing environment according to the centralized record, and the second capacity management subsystem assigned to perform a second capacity management action type for managing the computing hardware resource inventory across the cloud computing environment according to the centralized record, wherein the second capacity management action type is different from the first capacity management action type (see [0091], [0343]-[0346], “all edge compute nodes involved in this collaborative video analytics pipeline must share their system load status to allow overloaded edge nodes to choose optimal peer edge nodes for offloading compute tasks and rebalancing the overall load”, “a set of actions, which are then issued to the controller. These actions can include: … inserting advanced reservations to keep enough headroom for future function invocations … the deployment option can change from a cold to warm to hot container”. Also see [0472]; “common data storage to store data for reuse by one or more functions … management of container and function memory spaces; coordination of acceleration resources available for functions; and distribution of functions between containers”.
The resources from Butler include cloud hardware resources, and thus the feature of “inserting advanced reservations to keep enough headroom for future function invocations” from [0344] would require managing/reserving the computing hardware resource inventory (note: even if it concerns a certain virtual machine/function invocation, such virtual machine/function invocation still requires managing/reserving hardware resource inventory). The description of [0472] also shows that changing the deployment of a function from a cold to a warm to a hot container would require managing the computing hardware resource inventory, such as the memory space and acceleration resources); applying, by the one or more processors, a common supply-demand matching (SDM) logic to each of the first capacity planning signal and the second capacity planning signal (see [0133]-[0136]; “a solution that provides automated capacity planning for dynamic environments”, “a ‘resource reasoning and planning module’ (RRPM) that complements existing resource managers/orchestrators by enabling continuous capacity planning, near-term scheduling decisions”, “a model-based mechanism/subsystem for expression and reasoning between different stakeholders (in space and time) based on different objectives capturing used and available capacity, dynamicity of the system, dynamicity of the workload, and dependability of a distributed edge platform” and “a method/subsystem that allows for ‘what-if’ and forward-looking planning capabilities while comprehending future and dynamic changes in resources availability and resource requirements”.
Also see [0141]; “Such automated planning involves balancing multiple objectives (e.g., focusing on maximizing total cost of ownership (TCO) and quality of service (QoS)) across multiple stakeholders (e.g., infrastructure provider, service provider, end-user)”); transmitting, by the one or more processors, a first capacity management signal to the first capacity management subsystem, the first capacity management signal indicating to perform a first action of the first capacity management action type based on the common SDM logic (see [0142] for applying the common SDM logic to the generation of capacity management signals in general; “using a resource reasoning and planning module (RRPM) 604. This architectural diagram outlines the interaction of RRPM 604 with the other components available for managing compute platforms. For example, based on various insights 602 associated with the infrastructure and workloads, the RRPM 604 outputs a capacity plan 605. The capacity plan 605 can help inform a scheduling component of an orchestrator/resource manager 606 to make spatial and temporal workload placement decisions (in the near and longer term), as well as inform business decisions (e.g., via business intelligence dashboard 608) on adding additional capacity to the infrastructure 610 to maintain an overall optimal infrastructure capacity”. Also see [0093]-[0101] for a specific example of applying the common SDM logic to balance multiple objectives to generate a corresponding action to be performed, i.e., the claimed first capacity management signal indicating to perform a first action of the first capacity management action type, like “allow overloaded edge nodes to choose optimal peer edge nodes for offloading compute tasks and rebalancing the overall load” discussed at [0091]); and transmitting, by the one or more processors, a second capacity management signal to the second capacity management subsystem, the second capacity management signal indicating to perform a second action of the second capacity management action type based on the common SDM logic (see [0142] for applying the common SDM logic to the generation of capacity management signals in general; “using a resource reasoning and planning module (RRPM) 604. This architectural diagram outlines the interaction of RRPM 604 with the other components available for managing compute platforms. For example, based on various insights 602 associated with the infrastructure and workloads, the RRPM 604 outputs a capacity plan 605. The capacity plan 605 can help inform a scheduling component of an orchestrator/resource manager 606 to make spatial and temporal workload placement decisions (in the near and longer term), as well as inform business decisions (e.g., via business intelligence dashboard 608) on adding additional capacity to the infrastructure 610 to maintain an overall optimal infrastructure capacity”. Also see [0238]-[0241] for a specific example of applying the common SDM logic to balance multiple objectives to generate a corresponding action to be performed, i.e., the claimed second capacity management signal indicating to perform a second action of the second capacity management action type, like “inserting advanced reservations to keep enough headroom for future function invocations” discussed at [0344]).

Butler does not disclose: wherein the SDM logic is configured to match supply signaling to demand signaling based at least in part on the different VM types and the associated computing hardware resources.
However, Huang discloses: A method comprising: receiving, by one or more processors, supply signaling and demand signaling, each of the supply signaling and the demand signaling indicating one or more changes in available computing hardware resource inventory in a cloud computing environment including multiple virtual machines (VMs) of different VM types, wherein each VM type is associated with a different set of computing hardware resources (see [0002]; “In cloud computing IaaS mode, cloud computing service provider is with virtual machine (Virtual machine, VM) as clothes Business provides unit, provides a user the infrastructure resources such as calculating, network, storage.Specifically, cloud computing service provider mentions For various type of virtual machine so that user carries out unrestricted choice, type of virtual machine contains the money such as different calculating, network, storage The matched combined in source;The type of virtual machine that cloud computing platform is selected according to user creates on the physical host of its data center Meet the virtual machine of user resources demand”. The cloud computing service provider is required to provide certain information or a signal to indicate what kind of hardware resources the provider can offer/add, and the user is required to provide certain information or a signal to indicate what kind of hardware resources the user requires.
Also see claim 1 for similar description, “Obtain the available resource information of host group, wherein the available resource information of the host group includes number of host and each The available resources size of the host” and “Obtain the priority of user and the host resource size of request”), applying, by the one or more processors, a common supply-demand matching (SDM) logic, wherein the SDM logic is configured to match supply signaling to demand signaling based at least in part on the different VM types and the associated computing hardware resources (see [0002]; “The matched combined in source;The type of virtual machine that cloud computing platform is selected according to user creates on the physical host of its data center Meet the virtual machine of user resources demand, and provides it to user's use”. Also see claim 2; “The remaining available resource size for traversing each host in the host group, when i-th host in the host group It is tired according to first priority when remaining available resource size is greater than or equal to the resource size of j-th of user request”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the usage of the common supply-demand logic from Butler by including the matching of the cloud service provider’s resources with the user’s requirements to create a virtual machine for the user, as taught by Huang, and thus the combination of Butler and Huang would disclose the missing limitations from Butler, since it is well-known and understood to provide sufficient resource sizes based on the user’s requirements (see [0002] from Huang).
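The limitation at issue — a common SDM logic matching supply signaling to demand signaling by VM type — can be illustrated with a minimal sketch. All names and structures below are hypothetical and for illustration only; neither the application nor the cited references discloses this exact code. Demand is processed in arrival order, which also models the first-come, first-served rule discussed for claim 4.

```python
from dataclasses import dataclass

@dataclass
class SupplySignal:
    vm_type: str   # e.g. "highmem-8": each VM type implies a fixed set of hardware resources
    count: int     # VMs' worth of inventory added

@dataclass
class DemandSignal:
    vm_type: str
    count: int

def match_supply_demand(supplies, demands):
    """Common SDM logic sketch: update a centralized record of available
    inventory from supply signals, then match demand to it by VM type,
    first-come, first-served."""
    inventory = {}  # centralized record keyed by VM type
    for s in supplies:
        inventory[s.vm_type] = inventory.get(s.vm_type, 0) + s.count
    matched, unmet = [], []
    for d in demands:  # arrival order models a FCFS matching rule
        available = inventory.get(d.vm_type, 0)
        granted = min(available, d.count)
        inventory[d.vm_type] = available - granted
        if granted:
            matched.append((d.vm_type, granted))
        if granted < d.count:
            unmet.append((d.vm_type, d.count - granted))
    return matched, unmet
```

For example, three units of "highmem-8" supply against two demands of two units each leave one unit of demand unmet; demand for an unsupplied VM type is never matched, since matching is keyed on the VM type and its associated resources.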
Regarding Claim 2, the rejection of Claim 1 is incorporated and further the combination of Butler and Huang discloses: wherein each of the first capacity management action type and the second capacity management action type is selected from the group consisting of: reserving hardware computing resource inventory for a forecasted demand; determining pool sizes for domains of the cloud computing environment; moving computing hardware resource inventory between domains of the cloud computing environment; moving projects between domains of the cloud computing environment; and moving demand for computing hardware resource inventory between domains of the cloud computing environment (see “inserting advanced reservations to keep enough headroom for future function invocations” from [0344] of Butler as the claimed limitation of “reserving hardware computing resource inventory for a forecasted demand”. See “the deployment option can change from a cold to warm to hot container” from [0345] of Butler as the claimed limitation of “moving projects between domains of the cloud computing environment”. See “If one of the edge nodes 210 a-c becomes overloaded, however, a portion of its video processing workload can be dynamically offloaded to other edge nodes 210 a-c to prevent video frames from being dropped” from [0053] of Butler as the claimed limitation of “moving demand for computing hardware resource inventory between domains of the cloud computing environment”).
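Claim 2's Markush group of capacity management action types can be sketched as a small enumeration; a subsystem would be assigned one member, and first and second subsystems would hold different members. The names below are illustrative shorthand for the claimed actions, not identifiers from the application:

```python
from enum import Enum, auto

class CapacityActionType(Enum):
    """The five action types recited in claim 2 (illustrative names)."""
    RESERVE_FOR_FORECAST = auto()   # reserving inventory for a forecasted demand
    DETERMINE_POOL_SIZES = auto()   # determining pool sizes for domains
    MOVE_INVENTORY = auto()         # moving inventory between domains
    MOVE_PROJECTS = auto()          # moving projects between domains
    MOVE_DEMAND = auto()            # moving demand between domains
```

The claim requires only that the second subsystem's action type differ from the first, e.g. RESERVE_FOR_FORECAST versus MOVE_DEMAND.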
Regarding Claim 3, the rejection of Claim 1 is incorporated and further the combination of Butler and Huang discloses: wherein one or more changes in available computing hardware resource inventory indicated by the supply and demand signaling includes at least one of central processing unit (CPU) capacity, random access memory (RAM) size, or solid state drive (SSD) size (see [0217] and [0260] from Butler; “tasks and services requests need to express their requirements and operations margins (e.g., latency boundaries in which it can operate and hence defining where it can be placed at the edge)”, “the workload model for the experimental setup is a compute-intensive OpenFoam computational fluid dynamics (CFD) simulation workload, requesting for 24 cores”. Also see [0433] from Butler; “in response to a request by a user … fulfils the requirements of the application 3105 … Requirements of the application can include latency, location, compute resources, storage resources, network capability, security conditions, and the like”), and wherein the SDM logic matches demand to supply based at least in part on the one or more changes in available computing hardware resource inventory indicated by the supply and demand signaling (see [0217]-[0222] from Butler; “matching task(s) and/or sub-task(s) to resources based on various properties. In particular, the illustrated process flow shows how a system can, given a service request, decompose it into a set of task(s) and/or sub-task(s) and match those to resources capabilities known to it”. Also see [0433] from Butler; “an instance of a specific MEC App 3136 fulfilling the requirements of the MEC App 3136 regarding the UE 3120. If no instance of the MEC App 3136 fulfilling these requirements is currently running, the multi-access edge system management may create a new instance of the application 3105 on a MEC host 3036 that fulfils the requirements of the application 3105”).
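Claim 3's signaling carries changes in CPU capacity, RAM size, and SSD size, and the SDM logic matches on those changes. A hedged sketch of such a resource-delta record (hypothetical names; not from the application or the references):

```python
from dataclasses import dataclass

@dataclass
class InventoryChange:
    """A supply or demand signal expressed as resource deltas."""
    cpu_cores: int = 0   # change in CPU capacity
    ram_gb: int = 0      # change in RAM size
    ssd_gb: int = 0      # change in SSD size

def apply_change(record, change):
    """Update the centralized record of available inventory: supply
    signals carry positive deltas, demand signals negative ones."""
    record["cpu_cores"] += change.cpu_cores
    record["ram_gb"] += change.ram_gb
    record["ssd_gb"] += change.ssd_gb
    return record

def can_satisfy(record, demand):
    """SDM check: demand matches available supply only if every
    resource dimension fits."""
    return (record["cpu_cores"] >= demand.cpu_cores
            and record["ram_gb"] >= demand.ram_gb
            and record["ssd_gb"] >= demand.ssd_gb)
```

A demand is rejected if any single dimension (CPU, RAM, or SSD) exceeds the recorded availability, which mirrors the multi-property matching the Examiner cites from Butler [0217]-[0222].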
Regarding Claim 10, the rejection of Claim 1 is incorporated and further the combination of Butler and Huang discloses: wherein the supply signaling indicates the VM type (see [0148] from Butler and [0002] from Huang; “hold details on how a set of virtual machines (VMs) are being deployed on a physical server (e.g., VM sizes/configurations pinned to particular physical cores)” and “cloud computing service provider is with virtual machine (Virtual machine, VM) as clothes Business provides unit, provides a user the infrastructure resources such as calculating, network, storage.Specifically, cloud computing service provider mentions For various type of virtual machine so that user carries out unrestricted choice, type of virtual machine contains the money such as different calculating, network, storage”) and wherein the common SDM logic is configured to match supply signaling to demand signaling based at least in part on mappings between VM types and compatible machine types (see [0002] and claim 2 from Huang; “The matched combined in source;The type of virtual machine that cloud computing platform is selected according to user creates on the physical host of its data center Meet the virtual machine of user resources demand, and provides it to user's use” and “The remaining available resource size for traversing each host in the host group, when i-th host in the host group It is tired according to first priority when remaining available resource size is greater than or equal to the resource size of j-th of user request”. The match of the resource requirements of the jth user request with the ith host is based on the remaining available resource size of the ith host; the remaining available resource size of the ith host results from the mapping or matching of the resource requirements of the previous user requests, i.e., other VM types, with the resource availability of all hosts, i.e., machine types).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Butler et al. (US 20220197773 A1, hereafter Butler) in view of Huang et al. (CN 109062683 A, published 12/21/2018, English translation provided by Google Patents, hereafter Huang) and further in view of Kasiolas et al. (US 20070088703 A1, hereafter Kasiolas). Regarding Claim 4, the rejection of Claim 1 is incorporated and further the combination of Butler and Huang discloses: wherein the common SDM rules include at least one of: packing a location to reduce fragmentation of stored data; requiring supply signaling to be matched with demand [on a first-come-first-served basis]; avoiding inventory that is held back from being counted towards currently available capacity; applying a multiplier to a location in which available resources are overcommitted to reduce a likelihood of further resources being committed; applying a cost efficiency weighting to available capacity based on machine type; or avoiding a single virtual machine (VM) from being split across multiple machines (see [0217]-[0222] from Butler; “matching task(s) and/or sub-task(s) to resources based on various properties. In particular, the illustrated process flow shows how a system can, given a service request, decompose it into a set of task(s) and/or sub-task(s) and match those to resources capabilities known to it”). The combination of Butler and Huang does not disclose: wherein the common SDM rules include at least: requiring supply signaling to be matched with demand on a first-come-first-served basis. However, Kasiolas discloses: wherein the common SDM rules include: requiring supply signaling to be matched with demand on a first-come-first-served basis (see [0061]; “One or more bids are received from participating cluster managers at step 915. The bids may identify one or more destination nodes which can receive data. One or more of the bids are selected at step 920. Bids can be accepted based on ranking, on a first-come, first served basis”.
Also see [0045]; “perhaps one or more bids are received but the bids are deemed to be unsatisfactory, the auctioning node can notify the cluster manager”, i.e., requiring matching of a resource node with the demand of the bid). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the specific rules or policies for handling resource requests from the combination of Butler and Huang by including a policy of handling requests on a first-come, first-served basis from Kasiolas, and thus the combination of Butler, Huang and Kasiolas would disclose the missing limitations from the combination of Butler and Huang, since first-come, first-served is a well-known and understood scheduling mechanism that provides a certain level of fairness to the jobs or tasks received earlier.

Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Butler et al. (US 20220197773 A1, hereafter Butler) in view of Huang et al. (CN 109062683 A, published 12/21/2018, English translation provided by Google Patents, hereafter Huang) and Kasiolas et al. (US 20070088703 A1, hereafter Kasiolas) and further in view of Orellana et al. (US 20220035669 A1, hereafter Orellana). Regarding Claim 5, the rejection of Claim 4 is incorporated and further the combination of Butler, Huang and Kasiolas discloses: wherein the supply signaling further indicates a supply lead time for new computing hardware resource inventory to become available in the cloud computing environment, wherein a record of available computing hardware resource inventory includes recording the lead time (see [0164]-[0165] from Butler; “an inventory catalog subsystem 740 persists a catalog of available resources and configurations that can be added to the existing computing infrastructure, along with the times at which any of those resources are requested to be deployed/placed in the infrastructure”.
Also see [0156] and [0257]-[0259] from Butler; “determines current and future (based on predictions) available capacities 725 for the resources and the service instances available. This will be carried out over a variety of tunable time windows (e.g., 1 s, 1 m, 1 h, and so forth)” and “Algorithm 1: derives optimal workload placement options in (near) real time based on current resource availability (e.g., by performing placement modeling using current resource and workload data)” and “Algorithm 2: derives optimal workload placement options at future time points based on future resource availability (e.g., by performing forward-looking placement modeling using predicted resource and workload data, such as the possibility of resources being freed/reserved or added/removed from inventory in the future)”). The combination of Butler, Huang and Kasiolas does not disclose: wherein the demand signaling is associated with a demand lead time indicative of when new demand will be received. However, Orellana discloses: wherein the demand signaling is associated with a demand lead time indicative of when new demand will be received (see claim 2, “obtaining the number and use time periods of computing resources requested by the computing resource requester from the resource use request”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the demand request/signaling from the combination of Butler, Huang and Kasiolas by including resource use time periods in the resource use request, as taught by Orellana, and thus the combination of Butler, Huang, Kasiolas and Orellana would disclose the missing limitations from the combination of Butler, Huang and Kasiolas, since it would provide a specified resource demand or request for better planning of resource usage via knowing the actual time period in which the request should be satisfied (see claim 2 from Orellana).
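Claims 5-6 add supply and demand lead times to the matching: imminent demand should draw only on currently available supply, while forecasted demand can draw on supply that will arrive by then. A minimal sketch of lead-time-aware matching, with hypothetical names and lead times expressed in days (nothing here is disclosed by the application or the references):

```python
from dataclasses import dataclass

@dataclass
class Supply:
    vm_type: str
    count: int
    lead_days: int   # 0 = already available; >0 = arrives that many days out

@dataclass
class Demand:
    vm_type: str
    count: int
    lead_days: int   # 0 = imminent; >0 = forecasted to arrive that far out

def match_by_lead_time(supplies, demands):
    """Match each demand only with supply of the same VM type whose
    lead time is no later than the demand's lead time."""
    pairs = []
    remaining = [[s.vm_type, s.count, s.lead_days] for s in supplies]
    for d in demands:
        need = d.count
        for s in remaining:
            if need == 0:
                break
            if s[0] == d.vm_type and s[2] <= d.lead_days and s[1] > 0:
                take = min(s[1], need)
                s[1] -= take
                need -= take
                # (vm_type, units, supply lead, demand lead)
                pairs.append((d.vm_type, take, s[2], d.lead_days))
    return pairs
```

Under this rule, imminent demand (lead 0) can never be satisfied by in-transit supply, while forecasted demand a week out can match either current or week-out supply, which is the asymmetry claim 7 then makes explicit.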
Regarding Claim 6, the rejection of Claim 5 is incorporated and further the combination of Butler, Huang, Kasiolas and Orellana discloses: wherein the common SDM logic is configured to match supply signaling to demand signaling based at least in part on the supply lead time and the demand lead time (see [0156] and [0257]-[0259] from Butler; “determines current and future (based on predictions) available capacities 725 for the resources and the service instances available. This will be carried out over a variety of tunable time windows (e.g., 1 s, 1 m, 1 h, and so forth)” and “Algorithm 1: derives optimal workload placement options in (near) real time based on current resource availability (e.g., by performing placement modeling using current resource and workload data)” and “Algorithm 2: derives optimal workload placement options at future time points based on future resource availability (e.g., by performing forward-looking placement modeling using predicted resource and workload data, such as the possibility of resources being freed/reserved or added/removed from inventory in the future)”).

Regarding Claim 7, the rejection of Claim 6 is incorporated and further the combination of Butler, Huang, Kasiolas and Orellana discloses: wherein the common SDM logic is configured to match imminent demand with currently available computing hardware resources, and to match forecasted demand with future available computing hardware resources (see [0156] and [0257]-[0259] from Butler; “determines current and future (based on predictions) available capacities 725 for the resources and the service instances available. This will be carried out over a variety of tunable time windows (e.g., 1 s, 1 m, 1 h, and so forth)” and “Algorithm 1: derives optimal workload placement options in (near) real time based on current resource availability (e.g., by performing placement modeling using current resource and workload data)” and “Algorithm 2: derives optimal workload placement options at future time points based on future resource availability (e.g., by performing forward-looking placement modeling using predicted resource and workload data, such as the possibility of resources being freed/reserved or added/removed from inventory in the future)”).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Butler et al. (US 20220197773 A1, hereafter Butler) in view of Huang et al. (CN 109062683 A, published 12/21/2018, English translation provided by Google Patents, hereafter Huang), Kasiolas et al. (US 20070088703 A1, hereafter Kasiolas) and Orellana et al. (US 20220035669 A1, hereafter Orellana) and further in view of Greenwood et al. (US 20190158422 A1, IDS-cited, hereafter Greenwood).
Regarding Claim 8, the rejection of Claim 5 is incorporated and further the combination of Butler, Huang, Kasiolas and Orellana discloses: wherein each of the supply lead time and the demand lead time is selected from a plurality of lead time categories, wherein the lead time categories include at least: an imminent lead time indicating immediately available computing hardware resources and immediate computing hardware resource demand, respectively; a reserved lead time indicating incoming computing hardware resources that will be available on the order of period longer than the imminent lead time and forecasted computing hardware resource demand that will be received on the order of duration longer than the imminent lead time, respectively; and an in-transit lead time indicating incoming computing hardware resources that will be available on the order of duration longer than the reserved lead time and forecasted computing hardware resource demand that will be received on the order of duration longer than the reserved lead time, respectively (see [0156] and [0257]-[0259] from Butler; “determines current and future (based on predictions) available capacities 725 for the resources and the service instances available. This will be carried out over a variety of tunable time windows (e.g., 1 s, 1 m, 1 h, and so forth)” and “Algorithm 1: derives optimal workload placement options in (near) real time based on current resource availability (e.g., by performing placement modeling using current resource and workload data)” and “Algorithm 2: derives optimal workload placement options at future time points based on future resource availability (e.g., by performing forward-looking placement modeling using predicted resource and workload data, such as the possibility of resources being freed/reserved or added/removed from inventory in the future)”.
At least three different categories of time period can be classified from “a variety of tunable time windows (e.g., 1 s, 1 m, 1 h, and so forth)”, i.e., at least the “1 s” time windows can be considered the claimed imminent lead time, the “1 m” time windows can be considered the generic/plain meaning of a reserved lead time and the “1 h” time windows can be considered the generic/plain meaning of an in-transit lead time). The combination of Butler, Huang, Kasiolas and Orellana does not disclose: a reserved lead time indicating a time duration on the order of days; and an in-transit lead time indicating a time duration on the order of weeks. However, Greenwood discloses: a reserved lead time indicating a duration on the order of days and an in-transit lead time indicating a duration on the order of weeks (see [0043] and [0072]; “the request 450 may include an amount of time into the future to predict (e.g., 1 week). The available capacity forecast returned 460 may provide a forecast of available capacity based on fragmentation for the 1 week period into the future from a time at which the forecast was generated” and “enough lead time remains to add new capacity as soon as possible before the remaining available capacity is exhausted (e.g., 2 weeks before the remaining 25% may be exhausted) … If, available capacity, decreases by 2 units per day, and 60 units of capacity remain, then the timing of adding new capacity may be set to within 30 days”). 
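The mapping of tunable time windows to the three claimed lead time categories discussed above can be sketched as follows. This sketch, including its threshold values and function name, is purely hypothetical editorial illustration and is not taken from Butler, Greenwood, or the claims.

```python
# Hypothetical sketch only: classify a tunable time window (in seconds)
# into the three claimed lead time categories. The thresholds below are
# illustrative assumptions, not values from any cited reference.
def classify_lead_time(window_seconds: int) -> str:
    if window_seconds < 60:       # e.g., the "1 s" windows
        return "imminent"
    if window_seconds < 3600:     # e.g., the "1 m" windows
        return "reserved"
    return "in-transit"           # e.g., the "1 h" and longer windows

assert classify_lead_time(1) == "imminent"
assert classify_lead_time(60) == "reserved"
assert classify_lead_time(3600) == "in-transit"
```

Under this sketch, any scheme that partitions the tunable windows into at least three ordered ranges would yield the three-category structure the examiner reads onto the claim.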
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the variety of tunable time windows from the combination of Butler, Huang, Kasiolas and Orellana by including time window periods on the order of days and weeks from Greenwood, and thus the combination of Butler, Huang, Kasiolas, Orellana and Greenwood would disclose the missing limitations from the combination of Butler, Huang, Kasiolas and Orellana, since it would provide a longer lead time for supply or demand actions so that “enough lead time remains to add new capacity as soon as possible before the remaining available capacity is exhausted” (see [0072] from Greenwood). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Butler et al. (US 20220197773 A1, hereafter Butler) in view of Huang et al. (CN 109062683 A-publication date: 12/21/2018-English translation provided by Google Patents, hereafter Huang), Kasiolas et al. (US20070088703A1, hereafter Kasiolas), Orellana et al. (US 20220035669 A1, hereafter Orellana), and Greenwood et al. (US 20190158422 A1-IDS recorded, hereafter Greenwood) and further in view of Mauer et al. (US 11467872 B1, hereafter Mauer). 
Regarding Claim 9, the rejection of Claim 8 is incorporated and further the combination of Butler, Huang, Kasiolas, Orellana and Greenwood discloses: for supply signaling indicating the in-transit lead time, the supply signaling further indicates a vendor of the incoming computing hardware resource (see [0139], [0177], [0404] from Butler and [0002] from Huang; “allowing for customers and service providers to automatically plan for optimal capacity”, “long-term purchases of new computing hardware to bring additional capacities into the system, or short-term purchases such as renting capacities from cloud providers”, “A service provider (e.g., an owner/operator of server 3050, CN 3042, and/or cloud 3044) may deploy the IoT devices in the IoT group to a particular area (e.g., a geolocation, building, etc.) in order to provide the one or more services”. There are multiple service providers, and thus the supply signaling is required to include certain identification information indicating which service provider is providing the corresponding resources, i.e., the claimed vendor. Also see [0043] and [0072] for in-transit lead time. Note: the current claimed language does not exclude the interpretation that only the supply signaling indicating the in-transit lead time would further indicate vendor information, and thus the embodiment in which the vendor information is included in all different supply signaling (regardless of whether it relates to the imminent lead time, the reserved lead time or the in-transit lead time) is still reasonably used to teach current claim 9), and wherein an amount of the incoming computing hardware resource that will be available on the order of weeks is approximated (see [0156] from Butler; “determines current and future (based on predictions) available capacities 725 for the resources and the service instances available”. Note: once again, there is no requirement in current claim 9 to perform the resource prediction only on the supply having the in-transit lead time). 
The combination of Butler, Huang, Kasiolas, Orellana and Greenwood does not disclose: wherein an amount of the incoming computing hardware resources that will be available on the order of weeks is approximated based at least in part on historical fulfillment data of the vendor. However, Mauer discloses: wherein an amount of the incoming computing hardware resources that will be available on the order of weeks is approximated based at least in part on historical fulfillment data of the vendor (see lines 23-40 of col. 2 and lines 25-45 of col. 10 “machine learning can be used to estimate available capacity for various categories of capacity based on the parameters input by the customer and on historical information about capacity in the provider network … a quick determination as to the availability of a specific number and category or resources, or resource instances, at a future period of time”, “current and historical data is obtained 302 regarding total resource capacity for a resource provider. This can include the total capacity of any type that is available for allocation, regardless of whether or not that capacity was allocated. This amount can vary over time based upon factors such as additional physical resources being provisioned, physical resources being removed from service, maintenance, machine failures, and the like … indicating which portions of the overall capacity were either allocated for a specific customer, application, or task, for example, or unallocated and available for other usage”. Also see lines 64-13 of cols. 4-5; “If additional capacity is needed, it can take weeks to generate additional supply but obtaining, installing, and configuring new server racks or other additional resources”). 
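Purely as an editorial illustration of the concept of approximating incoming capacity from a vendor's historical fulfillment data, the following hypothetical sketch estimates expected deliveries as the ordered amount scaled by the vendor's historical delivery rate. The function, data layout, and numbers are assumptions and are not taken from Mauer or any other cited reference.

```python
# Hypothetical sketch: approximate incoming hardware units based on a
# vendor's historical fulfillment rate (delivered / ordered over past
# orders). Names and data shapes are illustrative assumptions only.
def estimate_incoming(ordered_units: int, history: list[tuple[int, int]]) -> float:
    """history: (ordered, delivered) pairs from the vendor's past orders."""
    total_ordered = sum(o for o, _ in history)
    total_delivered = sum(d for _, d in history)
    # With no history, assume full delivery as a neutral default.
    rate = total_delivered / total_ordered if total_ordered else 1.0
    return ordered_units * rate

# e.g., a vendor that historically delivered 90 of every 100 units ordered
assert estimate_incoming(200, [(100, 90)]) == 180.0
```

A production estimator would of course weight recent orders or use a learned model, as the machine-learning passage quoted from Mauer suggests; the linear rate above is only the simplest instance of the idea.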
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the resource prediction function from the combination of Butler, Huang, Kasiolas, Orellana and Greenwood by including predicting resource capacities based on historical information of the service provider from Mauer, and thus the combination of Butler, Huang, Kasiolas, Orellana, Greenwood and Mauer would disclose the missing limitations from the combination of Butler, Huang, Kasiolas, Orellana and Greenwood, since it would provide a method to allow “a customer or other entity can obtain a quick determination as to the availability of a specific number and category or resources, or resource instances, at a future period of time” (see lines 23-40 of col. 2 from Mauer). Claims 11-13, 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Butler et al. (US 20220197773 A1, hereafter Butler) in view of Orellana et al. (US 20220035669 A1, hereafter Orellana). 
Regarding Claim 11, Butler discloses: a system comprising: memory storing a supply-demand matching (SDM) logic framework for matching resources of a cloud computing environment with requests for the resources of the cloud computing environment (see [0133]-[0136]; “a solution that provides automated capacity planning for dynamic environments”, “a ‘resource reasoning and planning module’ (RRPM) that complements existing resource managers/orchestrators by enabling continuous capacity planning, near-term scheduling decisions”, “a model-based mechanism/subsystem for expression and reasoning between different stakeholders (in space and time) based on different objectives capturing used and available capacity, dynamicity of the system, dynamicity of the workload, and dependability of a distributed edge platform” and “a method/subsystem that allows for ‘what-if’ and forward-looking planning capabilities while comprehending future and dynamic changes in resources availability and resource requirements”. Also see [0141]; “Such automated planning involves balancing multiple objectives (e.g., focusing on maximizing total cost of ownership (TCO) and quality of service (QoS)) across multiple stakeholders (e.g., infrastructure provider, service provider, end-user)”. 
Also see [0242]-[0247] for the claimed cloud computing environment; “This solution proposes a novel methodology and algorithm to solve the technical problem … incrementing the flexibility of cloud orchestrators to choose the right placement option … place workloads distributed over edge, core network, and cloud resources”), wherein the resources of the cloud computing environment include multiple virtual machines (VMs) of different VM types, wherein each VM type is associated with a different set of computing hardware resources (see [0148]; “details on how a set of virtual machines (VMs) are being deployed on a physical server (e.g., VM sizes/configurations pinned to particular physical cores)”, i.e., at least different VM sizes having different sets of hardware resources are considered as different VM types), and wherein each of the resources includes an indication of lead time (see [0164]-[0165]; “an inventory catalog subsystem 740 persists a catalog of available resources and configurations that can be added to the existing computing infrastructure, along with the times at which any of those resources are requested to be deployed/placed in the infrastructure”); and one or more processors of a global cloud inventory availability system configured to: access a centralized record of available computing hardware resource inventory and the SDM logic framework; match the resources with the requests in accordance with the SDM logic framework based at least in part on the lead time (see [0156] and [0257]-[0259]; “determines current and future (based on predictions) available capacities 725 for the resources and the service instances available. 
This will be carried out over a variety of tunable time windows (e.g., 1 s, 1 m, 1 h, and so forth)” and “Algorithm 1: derives optimal workload placement options in (near) real time based on current resource availability (e.g., by performing placement modeling using current resource and workload data)” and “Algorithm 2: derives optimal workload placement options at future time points based on future resource availability (e.g., by performing forward-looking placement modeling using predicted resource and workload data, such as the possibility of resources being freed/reserved or added/removed from inventory in the future)”. Also see [0164] and [0168] for the claimed centralized record; “an inventory catalog subsystem 740 persists a catalog of available resources and configurations that can be added to the existing computing infrastructure, along with the times at which any of those resources are requested to be deployed/placed in the infrastructure”); receive, from a plurality of capacity management subsystems, a plurality of respective capacity planning signals, wherein each capacity management subsystem is assigned to perform a different capacity management action type for managing the computing hardware resource inventory across the cloud computing environment according to the centralized record (see [0091], [0343]-[0346], “all edge compute nodes involved in this collaborative video analytics pipeline must share their system load status to allow overloaded edge nodes to choose optimal peer edge nodes for offloading compute tasks and rebalancing the overall load”, “a set of actions, which are then issued to the controller. These actions can include: … inserting advanced reservations to keep enough headroom for future function invocations … the deployment option can change from a cold to warm to hot container”. 
Also see [0472]; “common data storage to store data for reuse by one or more functions … management of container and function memory spaces; coordination of acceleration resources available for functions; and distribution of functions between containers”. The resources from Butler include hardware resources, and thus the feature of “inserting advanced reservations to keep enough headroom for future function invocations” from [0344] would require managing/reserving the computing hardware resource inventory (note: even if it is about virtual future function invocation, like virtual machine invocation, such virtual machine invocation still requires managing/reserving hardware resource inventory); and for each capacity planning signal, transmit an indication of the matched resources and requests relevant to the capacity planning signal to the capacity management subsystem from which the capacity planning signal was sent (see [0142] for applying common SDM logic to generate a resource allocation plan indicating the matched resources and requests in generality; “using a resource reasoning and planning module (RRPM) 604. This architectural diagram outlines the interaction of RRPM 604 with the other components available for managing compute platforms. For example, based on various insights 602 associated with the infrastructure and workloads, the RRPM 604 outputs a capacity plan 605. The capacity plan 605 can help inform a scheduling component of an orchestrator/resource manager 606 to make spatial and temporal workload placement decisions (in the near and longer term), as well as inform business decisions (e.g., via business intelligence dashboard 608) on adding additional capacity to the infrastructure 610 to maintain an overall optimal infrastructure capacity”. Also see [0093]-[0101] for a specific example of applying common SDM logic to balance multiple objectives to generate a corresponding action to be performed, i.e. 
for a specified claimed capacity planning signal, transmit an indication of the matched resources and requests relevant to the capacity planning signal to the capacity management subsystem from which the capacity planning signal was sent). Butler does not disclose: the requests including an indication of lead time. However, Orellana discloses: each request includes an indication of lead time (see claim 2, “obtaining the number and use time periods of computing resources requested by the computing resource requester from the resource use request”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the demanding request/signaling from Butler by including the resource use request including resource use time periods from Orellana, and thus the combination of Butler and Orellana would disclose the missing limitations from Butler, since it would provide a specified resource demand or request for better planning of resource usage via knowing the actual time period in which the request should be achieved (see claim 2 from Orellana). Regarding Claim 12, the rejection of Claim 11 is incorporated and further the combination of Butler and Orellana discloses: wherein the different capacity management action types are two or more selected from the group consisting of: reserving hardware computing resource inventory for a forecasted demand; determining pool sizes for domains of the cloud computing environment; moving computing hardware resource inventory between domains of the cloud computing environment; moving projects between domains of the cloud computing environment; and moving demand for computing hardware resource inventory between domains of the cloud computing environment (see “inserting advanced reservations to keep enough headroom for future function invocations” from [0344] of Butler as claimed limitation of “reserving hardware computing resource inventory for a forecasted demand”. 
See “the deployment option can change from a cold to warm to hot container” from [0345] of Butler as claimed limitation of “moving projects between domains of the cloud computing environment”. See “If one of the edge nodes 210 a-c becomes overloaded, however, a portion of its video processing workload can be dynamically offloaded to other edge nodes 210 a-c to prevent video frames from being dropped” from [0053] of Butler as claimed limitation of “moving demand for computing hardware resource inventory between domains of the cloud computing environment”). Regarding Claim 13, the rejection of Claim 11 is incorporated and further the combination of Butler and Orellana discloses: wherein the one or more processors are configured to match the resources with the requests based further on at least one of computation capacity, storage capacity or virtual machine (VM) type (see [0217] and [0260] from Butler; “tasks and services requests need to express their requirements and operations margins (e.g., latency boundaries in which it can operate and hence defining where it can be placed at the edge)”, “the workload model for the experimental setup is a compute-intensive OpenFoam computational fluid dynamics (CFD) simulation workload, requesting for 24 cores”. Also see [0433] from Butler; “in response to a request by a user … fulfils the requirements of the application 3105 … Requirements of the application can include latency, location, compute resources, storage resources, network capability, security conditions, and the like”. Furthermore see [0217]-[0222] from Butler; “matching task(s) and/or sub-task(s) to resources based on various properties. In particular, the illustrated process flow shows how a system can, given a service request, decompose it into a set of task(s) and/or sub-task(s) and match those to resources capabilities known to it”). 
Regarding Claim 16, the rejection of Claim 11 is incorporated and further the combination of Butler and Orellana discloses: wherein lead time for the resources indicates a time that the resources become available in the cloud computing environment, and wherein lead time for the requests indicates when projects included in the requests will be executed (see [0164]-[0165] from Butler; “an inventory catalog subsystem 740 persists a catalog of available resources and configurations that can be added to the existing computing infrastructure, along with the times at which any of those resources are requested to be deployed/placed in the infrastructure”. Also see [0156] and [0257]-[0259] from Butler; “determines current and future (based on predictions) available capacities 725 for the resources and the service instances available. This will be carried out over a variety of tunable time windows (e.g., 1 s, 1 m, 1 h, and so forth)” and “Algorithm 1: derives optimal workload placement options in (near) real time based on current resource availability (e.g., by performing placement modeling using current resource and workload data)” and “Algorithm 2: derives optimal workload placement options at future time points based on future resource availability (e.g., by performing forward-looking placement modeling using predicted resource and workload data, such as the possibility of resources being freed/reserved or added/removed from inventory in the future)”. For the limitation related to lead time for the requests, see claim 2 from Orellana; “obtaining the number and use time periods of computing resources requested by the computing resource requester from the resource use request”). 
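The claimed concept of pairing supply signals with demand signals that share the same lead time category can be sketched as follows. This is a hypothetical editorial sketch; the data layout, field names, and greedy matching strategy are assumptions, not teachings of Butler or Orellana.

```python
# Hypothetical SDM-style sketch: greedily pair each supply signal with
# the first unmatched demand signal sharing its lead time category.
# Field names ("id", "lead_time") are illustrative assumptions.
def match_by_lead_time(supply: list[dict], demand: list[dict]) -> list[tuple[str, str]]:
    matches = []
    remaining = list(demand)           # demand signals not yet matched
    for s in supply:
        for d in remaining:
            if s["lead_time"] == d["lead_time"]:
                matches.append((s["id"], d["id"]))
                remaining.remove(d)
                break
    return matches

supply = [{"id": "s1", "lead_time": "imminent"}, {"id": "s2", "lead_time": "in-transit"}]
demand = [{"id": "d1", "lead_time": "in-transit"}, {"id": "d2", "lead_time": "imminent"}]
assert match_by_lead_time(supply, demand) == [("s1", "d2"), ("s2", "d1")]
```

In such a sketch, the matched pairs for the imminent and reserved categories would be routed to short-term subsystems and the in-transit pairs to long-term subsystems, mirroring the division recited in claims 17 and 18.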
Regarding Claim 17, the rejection of Claim 16 is incorporated and further the combination of Butler and Orellana discloses: match resources having an imminent lead time with requests having the imminent lead time; match resources having a ready-for-reservation lead time with requests having the ready-for-reservation lead time; and transmit an indication of the matched resources and requests having the imminent and ready-for-reservation lead times to one or more capacity management subsystems configured to perform one of short-term supply shaping or short-term demand steering (see [0156] and [0257]-[0259] from Butler; “determines current and future (based on predictions) available capacities 725 for the resources and the service instances available. This will be carried out over a variety of tunable time windows (e.g., 1 s, 1 m, 1 h, and so forth)” and “Algorithm 1: derives optimal workload placement options in (near) real time based on current resource availability (e.g., by performing placement modeling using current resource and workload data)” and “Algorithm 2: derives optimal workload placement options at future time points based on future resource availability (e.g., by performing forward-looking placement modeling using predicted resource and workload data, such as the possibility of resources being freed/reserved or added/removed from inventory in the future)”. 
Also see [0142], [0476] from Butler; “The capacity plan 605 can help inform a scheduling component of an orchestrator/resource manager 606 to make spatial and temporal workload placement decisions (in the near and longer term), as well as inform business decisions (e.g., via business intelligence dashboard 608) on adding additional capacity to the infrastructure 610 to maintain an overall optimal infrastructure capacity”, “Choosing the right platform architecture, rack design, or other hardware features or configurations, for short-term and long term usage (in addition to conducting an appropriate mapping of the services and workloads)”). Regarding Claim 18, the rejection of Claim 16 is incorporated and further the combination of Butler and Orellana discloses: match resources having an in-transit lead time with requests having the in-transit lead time; transmit an indication of said matched resources and requests having the in-transit lead time to one or more capacity management subsystems configured to perform one of long-term supply shaping or long-term demand forecasting (see [0156] and [0257]-[0259] from Butler; “determines current and future (based on predictions) available capacities 725 for the resources and the service instances available. This will be carried out over a variety of tunable time windows (e.g., 1 s, 1 m, 1 h, and so forth)” and “Algorithm 1: derives optimal workload placement options in (near) real time based on current resource availability (e.g., by performing placement modeling using current resource and workload data)” and “Algorithm 2: derives optimal workload placement options at future time points based on future resource availability (e.g., by performing forward-looking placement modeling using predicted resource and workload data, such as the possibility of resources being freed/reserved or added/removed from inventory in the future)”. 
Also see [0142], [0476] from Butler; “The capacity plan 605 can help inform a scheduling component of an orchestrator/resource manager 606 to make spatial and temporal workload placement decisions (in the near and longer term), as well as inform business decisions (e.g., via business intelligence dashboard 608) on adding additional capacity to the infrastructure 610 to maintain an overall optimal infrastructure capacity”, “Choosing the right platform architecture, rack design, or other hardware features or configurations, for short-term and long term usage (in addition to conducting an appropriate mapping of the services and workloads)”). Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Butler et al. (US 20220197773 A1, hereafter Butler) in view of Orellana et al. (US 20220035669 A1, hereafter Orellana) and further in view of Kasiolas et al. (US20070088703A1, hereafter Kasiolas). Regarding Claim 14, the rejection of Claim 11 is incorporated and further the combination of Butler and Orellana discloses: wherein the SDM logic framework includes at least one of: a bin packing rule for reducing fragmentation of stored data; a VM integrity rule for ensuring that VMs are not scheduled across multiple machines of the resources of the cloud computing environment; a place in line rule for addressing requests [on a first-come first-served basis]; a holdback rule for ensuring that held-back resources are not counted towards available capacity of the cloud computing environment; or a clustering rule for moving requests between cells of a common cluster of the cloud computing environment (see [0217]-[0222] from Butler; “matching task(s) and/or sub-task(s) to resources based on various properties. In particular, the illustrated process flow shows how a system can, given a service request, decompose it into a set of task(s) and/or sub-task(s) and match those to resources capabilities known to it”). 
The combination of Butler and Orellana does not disclose: a place in line rule for addressing requests on a first-come first-served basis. However, Kasiolas discloses: a place in line rule for addressing requests on a first-come first-served basis (see [0061]; “One or more bids are received from participating cluster managers at step 915. The bids may identify one or more destination nodes which can receive data. One or more of the bids are selected at step 920. Bids can be accepted based on ranking, on a first-come, first served basis”. Also see [0045]; “perhaps one or more bids are received but the bids are deemed to be unsatisfactory, the auctioning node can notify the cluster manager”, i.e., requiring matching of a resource node with the demand of the bid). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the specific rules or policies for handling resource requests from the combination of Butler and Orellana by including a policy of handling requests on a first-come, first-served basis from Kasiolas, and thus the combination of Butler, Orellana and Kasiolas would disclose the missing limitations from the combination of Butler and Orellana, since first-come, first-served is a well-known and understood scheduling mechanism that provides a certain level of fairness to the jobs or tasks received earlier. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Butler et al. (US 20220197773 A1, hereafter Butler) in view of Orellana et al. (US 20220035669 A1, hereafter Orellana) and Kasiolas et al. (US20070088703 A1, hereafter Kasiolas) and further in view of Natesan et al. (US 20200380746 A1, hereafter Natesan). 
Regarding Claim 15, the rejection of Claim 14 is incorporated and further the combination of Butler, Orellana and Kasiolas discloses: wherein the SDM logic framework includes at least two of the bin packing rule, the VM integrity rule, the place in line rule, the holdback rule, and the clustering rule (see “One or more bids are received from participating cluster managers at step 915. The bids may identify one or more destination nodes which can receive data. One or more of the bids are selected at step 920. Bids can be accepted based on ranking, on a first-come, first served basis” from [0061] of Kasiolas for the claimed place in line rule and see “works to negotiate and facilitate data relocations between data storage nodes within a cluster” from [0072] of Kasiolas for the claimed clustering rule). The combination of Butler, Orellana and Kasiolas does not disclose: the SDM logic framework includes at least three of the bin packing rule, the VM integrity rule, the place in line rule, the holdback rule, and the clustering rule. However, Natesan discloses: a bin packing rule for reducing fragmentation of stored data (see [0049]; “attempting to pack objects together into a finite space. In a bin-packing problem, a set of ‘objects’ some or all of which must be packed into a container”). 
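The bin-packing concept quoted from Natesan can be illustrated with the classic first-fit heuristic below. This is an editorial sketch of a common textbook approach, offered only to clarify the concept; it is not taken from Natesan or any other cited reference.

```python
# Hypothetical first-fit sketch of bin packing: place each object into
# the first bin with enough remaining capacity, opening a new bin when
# none fits. Reducing the number of bins reduces fragmentation.
def first_fit(sizes: list[int], bin_capacity: int) -> list[list[int]]:
    bins: list[list[int]] = []
    for size in sizes:
        for b in bins:
            if sum(b) + size <= bin_capacity:
                b.append(size)
                break
        else:
            bins.append([size])    # no existing bin fits; open a new one
    return bins

assert first_fit([5, 3, 4, 2], bin_capacity=8) == [[5, 3], [4, 2]]
```

First-fit is a greedy approximation; optimal bin packing is NP-hard, which is why practical schedulers rely on heuristics of this kind.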
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the common supply-demand logic from the combination of Butler, Orellana and Kasiolas by including an additional bin packing rule that attempts to place some or all of a set of objects into a container from Natesan, and thus the combination of Butler, Orellana, Kasiolas and Natesan would disclose the missing limitations from the combination of Butler, Orellana and Kasiolas, since it would provide a well-known and understood packing optimization rule/policy for placing a set of objects (see “packing problems are a class of optimization problems that involve attempting to pack objects together into a finite space” from [0049] of Natesan). Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Butler et al. (US 20220197773 A1, hereafter Butler) in view of Orellana et al. (US 20220035669 A1, hereafter Orellana) and further in view of Huang et al. (CN 109062683 A-publication date: 12/21/2018-English translation provided by Google Patents, hereafter Huang). Regarding Claim 20, the rejection of Claim 11 is incorporated and further the combination of Butler and Orellana discloses: wherein the memory includes a mapping between the plurality of requests and a plurality of resources (see [0188] from Butler; “the placement options may identify possible mappings of the underlying tasks and dependencies of the services to the resources of the computing infrastructure, which may be determined based on the service requirements and the available capacities of the infrastructure resources”). The combination of Butler and Orellana does not disclose: the mapping is between the plurality of VM types and a plurality of machine platforms, wherein the one or more processors are configured to match the resources with the requests in accordance with the SDM logic framework based further on the mapping. 
However, Huang discloses: wherein the memory includes a mapping between the plurality of VM types and a plurality of machine platforms, wherein the one or more processors are configured to match the resources with the requests in accordance with the SDM logic framework based further on the mapping (see [0002] and claim 2; “The type of virtual machine that cloud computing platform is selected according to user creates on the physical host of its data center Meet the virtual machine of user resources demand, and provides it to user's use” and “The remaining available resource size for traversing each host in the host group, when i-th host in the host group It is tired according to first priority when remaining available resource size is greater than or equal to the resource size of j-th of user request”. The match of the resource requirements of the j-th user request with the i-th host is based on the remaining available resource size of the i-th host; the remaining available resource size of the i-th host is generated or results from the mapping or matching of the resource requirements of the previous user requests, i.e., other VM types, with the resource availability of all hosts, i.e., machine types). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the usage of common supply-demand logic from the combination of Butler and Orellana by including matching the cloud service provider's resources with the user's requirements to create a virtual machine for the user from Huang, and thus the combination of Butler, Orellana and Huang would disclose the missing limitations from the combination of Butler and Orellana, since it is well-known and understood to provide sufficient resource sizes based on the user's requirements (see [0002] from Huang). Allowable Subject Matter Claim 19 is objected to, as it recites allowable subject matter. 
Claim 19 contains allowable subject matter of “determine a level of confidence of delivery of the resources having the in-transit lead time based on a vendor delivering the resources having the in-transit lead time, and historical fulfillment data of the vendor”. Response to Arguments Applicant’s arguments, filed 10/28/2025, with respect to the rejection of claims 1-20 under 35 U.S.C. 102 (a) (2) or 35 U.S.C. 103 have been fully considered. New grounds of rejection were made for claims 1-10 due to a particular new limitation added to independent claim 1 (i.e., “wherein the SDM logic is configured to match supply signaling to demand signaling based at least in part on the different VM types and the associated computing hardware resources”) while claims 11-19 remain rejected under the same references as set forth in the previous office action since claim 11 does not contain the particular new limitation of claim 1. In addition, some of Applicant’s arguments are not persuasive. Applicant’s arguments at pages 8-12 are summarized as follows: For claim 1, Applicant argued that reference Butler “does not describe multiple virtual machines (VMs) of different VM types, wherein each VM type is associated with a different set of computing hardware resources” (see 3rd paragraph of page 9 from the Remarks). For claim 1, Examiner used [0091] and [0343]-[0344] from reference Butler to reject the limitations related to different capacity management subsystems performing different capacity management action types. However, [0091] only describes actions performed by an edge node, and [0343]-[0344] are relied on merely for their passing mention of “inserting advanced reservation” (see 1st paragraph of page 10 from the Remarks). “By contrast, Butler is limited to a decentralized, tactical system for reactive load balancing between peer compute nodes, particularly edge nodes …. 
Butler does not coordinate multiple, high-level capacity management subsystems with potential conflicting long-term objectives. Therefore, Butler fails to discloses or suggest the claimed combination of features and does not anticipate claim 1” (see last paragraph of page 10 and 1st paragraph of page 11 of the Remarks).

For claim 2, none of [0343]-[0344], [0449], or [0053] of Butler “address operations performed by a subsystem assigned to perform management actions for managing the computing hardware resource inventory across the cloud computing environment according to a centralized record, as now recited in claim 1 from which claim 2 depends” (see 3rd paragraph of page 11 of the Remarks).

The examiner respectfully disagrees.

The amended limitation “wherein each VM type is associated with a different set of computing hardware resources” is still broad. The description of “details on how a set of virtual machines (VMs) are being deployed on a physical server (e.g., VM sizes/configurations pinned to particular physical cores)” from [0148] of Butler can be used to teach this amended limitation, since it can be interpreted such that each VM size having a different set of computing hardware resources is considered a VM type.

First of all, the term system or sub-system is very broad. Different portions of the same integrated chip can be considered different systems or sub-systems; different portions of the same software instance can also be considered different systems or sub-systems. For example, the printing service and the typing/input service of the same Microsoft Word application executing on a single CPU core can be considered two different systems or sub-systems. Thereby, what is claimed for the different capacity management subsystems of current claim 1 is very broad.
Any description that discusses different computing hardware resource capacity management actions performed on a cloud computing environment, in response to receiving corresponding signals according to the centralized record, would teach the amended limitations that Applicant argued, since there must be at least two different sub-systems to handle such different actions. In particular, at least the action of “inserting advanced reservations to keep enough headroom for future function invocations” discussed at [0344] of Butler is one type of such action, and the action of “the deployment option can change from a cold to warm to hot container” discussed at [0345] of Butler is another.

Note: according to the descriptions of “an inventory catalog subsystem 740 persists a catalog of available resources and configurations that can be added to the existing computing infrastructure, along with the times at which any of those resources are requested to be deployed/placed in the infrastructure” from [0164] and “Collating inputs from the resource modeler and the inventory catalog to continuously provide an updated capacity assessment for all infrastructural resources” from [0168], it is understood that the two different actions discussed at [0344] and [0345] would be performed based on such an inventory catalog subsystem that continually updates resource information.

The discussion of an overloaded edge node is only one particular example/feature of reference Butler, and thus it is illogical to state that “Butler is limited to a decentralized, tactical system for reactive load balancing between peer compute nodes”. Indeed, Applicant admits that [0343]-[0344] of Butler mention “inserting advanced reservations” (see 1st paragraph of page 9 of the Remarks). In this way, Butler is not limited to “a decentralized, tactical system for reactive load balancing between peer compute nodes”.
In addition, Examiner agrees that the claimed invention requires “multiple” “capacity management subsystems”. However, there is no requirement in current claim 1 for features related to “high-level” or “potentially conflicting long-term objectives” or “coordinated” “capacity management subsystems”.

Note 1: as explained above, Examiner agrees that the claimed invention requires “multiple” “capacity management subsystems”. However, current claim 1 only requires that there are two different capacity management subsystems, without coordinating those two different subsystems. The term “coordinate” would require at least some relationship, connection, or link between the two different subsystems; current claim 1 does not require such coordination.

Note 2: if Applicant considers the limitations related to “supply signaling and demand signaling” or “VM types” to be the “multiple, high-level capacity management subsystems” or the “coordinated” subsystems, then, as explained in the corresponding 103 rejection section above, the combination of Butler and Huang does teach such “multiple, high-level capacity management subsystems” and such coordination. See response (b) above.

Therefore, claims 1-20 are rejected.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Chang et al. (US 20220076170 A1) discloses: utilizing the historical distribution of provider devices to evaluate the probability of provider devices becoming available within a future time window given the currently available provider devices (see [0098]).

Salle (US 20030074245 A1) discloses: the anticipation of future resources is made by performing a probabilistic analysis based on the supplier's performance history and/or the probability of a future change in the resources available to the supplier (see [0010]).

Salomatin et al.
(US 20140081696 A1) discloses: a forecasting model to generate a forecasted number of resources available for a future time period, wherein the forecasting model is based on historical supply data and calendar information, wherein the historical supply data includes a past time period and a number of resources that were available during the past time period (see claim 2).

Yemini et al. (US 9852011 B1) discloses: improving the balancing of demands by virtual machines and the supply of server resources; it may also be used to balance the resource bundle allocated to a virtual machine, e.g., to match the amount of CPU, memory, and storage I/O bandwidth allocated to the virtual machine, in order to improve the use of its virtual budget to best service its resource demands (see lines 11-17 of col. 17).

Ward, Jr. (US 10922666 B1) discloses: wherein, to evaluate the virtual compute instance consumption, the provider network is configured to determine whether the virtual compute instances being consumed match a resource type, size, or platform indicated in the capacity reservation request (see claim 2).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHI CHEN, whose telephone number is (571) 272-0805. The examiner can normally be reached M-F from 9:30 AM to 5:30 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Y. Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/Zhi Chen/
Patent Examiner, AU2196

/APRIL Y BLAIR/
Supervisory Patent Examiner, Art Unit 2196
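As an aside for readers parsing the §103 discussion: the host-selection behavior the examiner quotes from Huang (traverse hosts in priority order and place the j-th user request on the i-th host whose remaining available resources are at least the requested size, with each placement shrinking what is left for later requests) is a greedy first-fit matching. The sketch below is purely illustrative of that kind of logic, not Huang's actual code; all names and the tuple-based data layout are assumptions.

```python
# Illustrative greedy first-fit matching of VM requests to hosts,
# in the spirit of the Huang passage quoted in the rejection.
def match_requests(hosts, requests):
    """hosts: list of (host_id, available_units), in priority order.
    requests: list of (request_id, required_units), in arrival order."""
    available = dict(hosts)  # remaining capacity per host
    placements = {}
    for req_id, need in requests:
        # Traverse hosts in priority order; take the first host whose
        # remaining resources cover the requested size.
        for host_id, _ in hosts:
            if available[host_id] >= need:
                available[host_id] -= need  # earlier placements shape later ones
                placements[req_id] = host_id
                break
    return placements, available

# Example: the first request consumes most of host h1, pushing the
# second request onto h2.
placements, remaining = match_requests(
    [("h1", 8), ("h2", 4)],
    [("r1", 6), ("r2", 3)],
)
print(placements)  # {'r1': 'h1', 'r2': 'h2'}
```

This mirrors the examiner's point that the match of a later request depends on the remaining availability produced by earlier matches.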

Prosecution Timeline

May 16, 2022
Application Filed
Jul 26, 2025
Non-Final Rejection — §102, §103
Oct 28, 2025
Response Filed
Feb 09, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596561
SYSTEM AND METHOD OF DYNAMICALLY ASSIGNING DEVICE TIERS BASED ON APPLICATION
2y 5m to grant Granted Apr 07, 2026
Patent 12596584
APPLICATION PROGRAMING INTERFACE TO INDICATE CONCURRENT WIRELESS CELL CAPABILITY
2y 5m to grant Granted Apr 07, 2026
Patent 12591461
ADAPTIVE SCHEDULING WITH DYNAMIC PARTITION-LOAD BALANCING FOR FAST PARTITION COMPILATION
2y 5m to grant Granted Mar 31, 2026
Patent 12585495
DISTRIBUTED COMPUTING PIPELINE PROCESSING
2y 5m to grant Granted Mar 24, 2026
Patent 12579012
FORWARD PROGRESS GUARANTEE USING SINGLE-LEVEL SYNCHRONIZATION AT INDIVIDUAL THREAD GRANULARITY
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
61%
Grant Probability
99%
With Interview (+40.5%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 250 resolved cases by this examiner. Grant probability derived from career allow rate.
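For reference, the headline allow rate above follows directly from the counts stated on this page (152 granted of 250 resolved); a quick check in Python, where the rounding to a whole percent is an assumption about how the tool displays the figure:

```python
# Career allow rate from the counts stated on this page.
granted, resolved = 152, 250
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 60.8%, shown on the page as the rounded 61%
```

The "+40.5% interview lift" and "99% with interview" figures are reported separately for resolved cases with an interview; the page does not state their exact formula, so they are not recomputed here.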
