Prosecution Insights
Last updated: April 19, 2026
Application No. 18/509,506

Programmatic Reprovisioning of Computing Platform Configurations

Status: Non-Final OA (§103)
Filed: Nov 15, 2023
Examiner: BULLOCK JR, LEWIS ALEXANDER
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: Oracle International Corporation
OA Round: 1 (Non-Final)
Grant Probability: 23% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 11m
Grant Probability with Interview: 79%

Examiner Intelligence

Career Allow Rate: 23% (grants only 23% of cases; 15 granted / 65 resolved; -31.9% vs TC avg)
Interview Lift: +56.0% (with vs. without interview, across resolved cases)
Typical Timeline: 3y 11m average prosecution; 12 currently pending
Career History: 77 total applications across all art units
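The headline figures above are simple derived statistics. A quick sketch of how they combine, assuming (as the displayed 23% and 79% values suggest) that the interview lift is an additive percentage-point delta:

```python
# Career allow rate: grants as a share of resolved cases.
granted, resolved = 15, 65
allow_rate = granted / resolved            # ~0.231, shown on the dashboard as 23%

# Assumption: "interview lift" is a percentage-point delta, so the predicted
# grant probability with an examiner interview is the base rate plus the lift.
interview_lift = 0.56
with_interview = allow_rate + interview_lift   # ~0.79, shown as 79%

print(f"allow rate: {allow_rate:.1%}, with interview: {with_interview:.1%}")
```

This matches the dashboard's rounding (23.1% → 23%, 79.1% → 79%).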

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 43.7% (+3.7% vs TC avg)
§102: 17.4% (-22.6% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 65 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4, 6, 10-14, 16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over SCHMITT (Publication 2019/0334909) in view of HASHIMOTO (Publication 2020/0186416) in further view of BOWLES (Publication 2018/0287872).

As to claim 1, SCHMITT teaches one or more non-transitory computer readable media comprising instructions which, when executed by one or more hardware processors (claim 27), cause performance of operations comprising: receiving (by a controller) a request to reprovision a baremetal system, the request comprising a platform definition that defines a plurality of computing components in the baremetal system; and reprovisioning the baremetal system by executing system configuration instructions ([0170] FIG. 11B illustrates an example method of automatically allocating resources using the global system rules and templates of an example embodiment. A request is made to the system that requires resource allocation to satisfy the request 1120. The controller is aware of its resource pools based on its system state database 1121. The controller uses a template to determine the resources needed 1122. The controller assigns the resources and stores the information in the system state 1123. The controller deploys the resources using the template 1124.

[0171] Referring to FIG. 12, an example method for automatically deploying an application or service is illustrated using a system 100 described herein. A user or an application makes a request for a service 1210. The request is translated to the API application 1220. The API application routes the request to the controller 1230.
The controller interprets the request 1240. The controller takes the state of the system and its resources into account 1250. The controller uses its rules and templates for service deployment 1260. The controller sends a request to resources 1270 and deploys an image derived from the template 1280 and updates the IT system state.

[0166] In addition the compute resources, storage resources, and controller may or may not be coupled to a storage network (SAN) 280 in a manner that the controller 200 can use the storage network to boot each resource. The controller 200 may send the boot images or other templates to a separate storage or other resource so that other resources can boot off of the storage or other resource. The controller may instruct where to boot from in such situation. The controller may power on a resource, instruct the resource from where to boot and how to configure itself. The controller 200 instructs the resource how to boot, what image to use, and where the image is located if that image is on another resource. The resources' BIOSes may be pre-configured. The controller may also or alternatively configure the BIOS through out of band management so that they will boot off the storage area network. The controller 200 may also be configured to boot an operating system from an ISO and enable the resource to copy data to local disks. The local disks may then subsequently be used for booting. The controller may configure other resources including other controllers, in such a way that the resources can boot. Some resources may comprise an application that provides compute, storage, or networking function. In addition it is possible for the controller to boot up a storage resource and then make the storage resource responsible for supplying the boot image of the subsequent resources or services. The storage may also be managed over a different network that is being used for another purpose.
[0167] Optionally, one or more of the resources may be coupled to an on network in band management connection 290. The connection 290 may comprise one or more types of in band management as described with respect to in band management connection 270. The connection 290 may connect the controller to application network to make use of the networks or to manage them through in band management networks.

[0168] FIG. 2L illustrates an image 250 that may be loaded directly or indirectly (through another resource or database) from a template 230 to a resource to boot the resource or applications or services loaded on the resource. The image 250 may comprise boot files 240 for the resource type and hardware. The boot files 240 may comprise a kernel 241 corresponding to a resource, application or service to be deployed. Boot files 240 may also comprise an initrd or similar filesystem used to aid the booting process. The boot files 240 may comprise a plurality of kernels or initrds configured for different hardware types and resource types. In addition the image 250 may comprise a filesystem 251. The filesystem 251 may comprise a base image 252 and corresponding file system as well as a service image 253 and corresponding file system and a volatile image 254 and corresponding filesystem. The file systems and data loaded may vary depending on the resource type and applications or services to be running. The base image 252 may comprise a base operating system file system. The base operating system may be read only. The base image 252 may also comprise basic tools of the operating system independent of what is being run. The base image 252 may include base directories and operating system tools. The service filesystem 253 may include configuration files and specifications for the resource, application or service.
The volatile filesystem 254 may contain information or data specific to that deployment such as binary applications, specific addresses and other information, which may or may not be configured as variables including but not limited to passwords, session keys and private keys. The filesystems may be mounted as one single filesystem using technologies such as overlayFS to allow for some read only and some read-write filesystems reducing the amount of duplicate data used for applications.

[0169] As noted above, the controller 200 can be used to add resources such as compute, storage, and/or networking resources to the system. FIG. 11A illustrates an example method for adding a physical resource such as a baremetal node to a system 100. A resource, i.e., compute, storage or networking resource, is plugged into the controller by way of network connections 1110. The network connections may include an out of band management connection. The controller recognizes that the resource is plugged in through out of band management connection 1111. The controller recognizes information relating to the resource, which may include but is not limited to the resource's type, capabilities and/or attributes 1112. The controller adds the resource and/or information relating to the resource to its system state 1113. An image derived from a template is loaded to a physical component of a system, which may include but is not limited to a resource, on another resource such as storage resources, or on the controller 1114. The image comprises one or more filesystems that may include configuration files. Such configurations may include BIOS and booting parameters. The controller instructs the physical resource to boot using the filesystem of the image 1115. Additional resources or a plurality of bare-metal or physical resources of different types may be added in this manner using the image of the template or at least a portion thereof.)
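SCHMITT's FIG. 11A sequence (steps 1110-1115) is essentially a linear controller loop: detect a node over out-of-band management, record it in the system state, derive a boot image from a template, and instruct the node to boot. A minimal sketch of that flow; all identifiers here (the class, method, and field names) are assumptions for illustration, not names from the reference:

```python
from dataclasses import dataclass, field

@dataclass
class Controller:
    """Toy model of the FIG. 11A add-a-baremetal-node sequence."""
    system_state: dict = field(default_factory=dict)

    def on_resource_plugged_in(self, node_id: str, info: dict, template: dict) -> str:
        # Steps 1111-1112: recognize the resource and its type/capabilities/attributes.
        # Step 1113: add the resource and its information to the system state.
        self.system_state[node_id] = {"info": info, "status": "discovered"}
        image = self.derive_image(template, info)           # step 1114
        self.system_state[node_id]["status"] = "booting"
        return f"boot {node_id} from {image}"               # step 1115

    def derive_image(self, template: dict, info: dict) -> str:
        # An image pairs template filesystems/configs with hardware-specific
        # boot files chosen for the node's reported type (here, just the arch).
        return f"{template['name']}-{info['arch']}.img"

ctrl = Controller()
cmd = ctrl.on_resource_plugged_in("node-7", {"arch": "x86"}, {"name": "base"})
print(cmd)  # boot node-7 from base-x86.img
```

The real controller would issue the boot instruction over out-of-band management rather than returning a string; the sketch only shows the state bookkeeping the reference describes.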
However, SCHMITT does not teach the particulars of the configuration algorithm in provisioning the configuration of the baremetal system. HASHIMOTO teaches a known provisioning algorithm for provisioning an environment by a controller by: retrieving a first component configuration for a first computing component in the plurality of computing components and a second component configuration for a second computing component in the plurality of computing components, the second component configuration identifying a first dependency of the second computing component on the first computing component; ([0080] In some example embodiments, the first workspace 165a, the second workspace 165b, and/or the third workspace 165c may be marked for automatic destruction. For example, the first workspace 165a, the second workspace 165b, and/or the third workspace 165c may persist for a period of time (e.g., 24 hours), after which the information technology infrastructure controller 110 may be configured to automatically destroy the first workspace 165a, the second workspace 165b, and/or the third workspace 165c. The first workspace 165a, the second workspace 165b, and/or the third workspace 165c may be persisted for a limited period of time in order to configure the first information technology infrastructure 130a to provide a temporary environment or disposable environment (e.g., a demo environment).

[0081] The information technology infrastructure controller 110 may generate the execution plan 190 including by creating a corresponding dependency graph (e.g., a directed acyclic graph (DAG) and/or the like) having a plurality of nodes, at least some of which being interconnected by one or more directed edges. FIG. 2 depicts an example of a dependency graph 200, in accordance with some example embodiments.
[0082] To apply the configurations associated with the execution plan 190 to the first information technology infrastructure 130a, the information technology infrastructure controller 110 may traverse the corresponding dependency graph. For instance, the information technology infrastructure controller 110 may perform a depth-first traversal of the dependency graph in order to determine the resources that the execution plan 190 indicates as requiring provisioning, modification, and/or de-provisioning. The information technology infrastructure controller 110 may further identify, based on the dependency graph, independent resources that may be provisioned, modified, and/or de-provisioned in parallel. It should be appreciated that the information technology infrastructure controller 110 may be configured to maximize parallelization when applying, to the first information technology infrastructure 130a, the configurations associated with the execution plan 190.

[0083] Table 3 below depicts examples of nodes that may be present in the dependency graph corresponding to the execution plan 190.

TABLE 3 (Type of Node / Description)
- Resource Node: Representative of a single resource such as, for example, a hardware resource, a software resource, a network resource, and/or the like.
- Provider Node: Representative of a provider of one or more resources including, for example, hardware resources, software resources, network resources, and/or the like. Each provider node may include the time required to fully configure a corresponding provider to provide the corresponding resources.
- Resource Meta Node: Representative of a group of resources including, for example, one or more hardware resources, software resources, network resources, and/or the like.
- Data Node: Representative of data needing to be fetched, retrieved, and/or generated for purposes of configuring other resources and/or providers.
[0084] The information technology infrastructure controller 110 may generate the dependency graph by at least adding, to the dependency graph, one or more resource nodes corresponding to individual resources including, for example, one or more hardware resources 135a, software resources 135b, network resources 135c, and/or the like. The one or more resource nodes may be mapped to the corresponding provider nodes, for example, to identify the first provider 150a and/or the second provider 150b as being the provider of the resources associated with each of the resource nodes. Moreover, the information technology infrastructure controller 110 may generate the dependency graph by at least inserting one or more edges to interconnect, for example, the resource nodes and the provider nodes. An edge interconnecting a resource node to a provider node may identify the provider associated with the provider node as being a provider of the resource associated with the resource node. Meanwhile, an edge interconnecting two resource nodes may indicate a dependency between the resources associated with the two resource nodes. [0085] To represent resources that require de-provisioning, the dependency graph may include one or more “orphan” resource nodes, which may be disconnected from the provider nodes and other resource nodes in the dependency graph. Alternatively and/or additionally, in order to represent the modification of an existing resource within the first information technology infrastructure 130a, the information technology infrastructure controller 110 may generate the dependency graph by at least splitting the corresponding resource node into a first resource node and a second resource node. The first resource node may correspond to the existing resource, which may be de-provisioned when the configurations specified in the execution plan 190 are applied to the first information technology infrastructure 130a. 
Meanwhile, the second resource node may correspond to the modified resource, which may be provisioned when the configurations specified in the execution plan 190 are applied to the first information technology infrastructure 130a. [0110] The information technology infrastructure controller 110 may apply, based at least on the execution plan, the one or more configurations including by at least provisioning, modifying, and/or de-provisioning one or more resources at the first information technology infrastructure 130a (310). In some example embodiments, to apply the configurations associated with the execution plan 190 to the first information technology infrastructure 130a, the information technology infrastructure controller 110 may generate and traverse a corresponding dependency graph. For example, the information technology infrastructure controller 110 may generate the dependency graph 200, which may include a plurality of resource nodes and provider nodes, at least some of which being interconnected by one or more directed edges. The information technology infrastructure controller 110 may traverse the dependency graph 200 to at least identify independent resources that may be provisioned, modified, and/or de-provisioned in parallel. As noted, the information technology infrastructure controller 110 may be configured to maximize parallelization when applying, to the first information technology infrastructure 130a, the configurations associated with the execution plan 190.) 
programmatically generating a dependency graph comprising the first dependency of the second computing component on the first computing component ([0080-0085, 110]); retrieving a first set of component configuration instructions for configuring the first computing component in accordance with the first component configuration and a second set of component configuration instructions for configuring the second computing component in accordance with the second component configuration ([0078] The information technology infrastructure controller 110 may generate, based at least on the configurations associated with the first workspace 165a, the second workspace 165b, and/or the third workspace 165c, the execution plan 190. The execution plan 190 may include one or more operations to provision, modify, and/or de-provision resources at the first information technology infrastructure 130a in order to apply, to the first information technology infrastructure 130a, the configurations associated with the first workspace 165a, the second workspace 165b, and/or the third workspace 165c. [0079] In some example embodiments, the information technology infrastructure controller 110 may generate the execution plan 190 by at least consolidating the configurations associated with the first workspace 165a, the second workspace 165b, and the third workspace 165c. That is, the execution plan 190 may be generated to achieve a combination of the different iterations of the configurations for the first information technology infrastructure 130a and/or the configurations for different portions of the first information technology infrastructure 130a. Alternatively and/or additionally, the information technology infrastructure controller 110 may generate the execution plan 190 based on some but not all of the configurations associated with the first workspace 165a, the second workspace 165b, and/or the third workspace 165c. 
For example, the execution plan 190 may be generated to achieve only some iterations of the configurations for the first information technology infrastructure 130a and/or the configurations for only a portion of the first information technology infrastructure 130a.

[0080] In some example embodiments, the first workspace 165a, the second workspace 165b, and/or the third workspace 165c may be marked for automatic destruction. For example, the first workspace 165a, the second workspace 165b, and/or the third workspace 165c may persist for a period of time (e.g., 24 hours), after which the information technology infrastructure controller 110 may be configured to automatically destroy the first workspace 165a, the second workspace 165b, and/or the third workspace 165c. The first workspace 165a, the second workspace 165b, and/or the third workspace 165c may be persisted for a limited period of time in order to configure the first information technology infrastructure 130a to provide a temporary environment or disposable environment (e.g., a demo environment).

[0081] The information technology infrastructure controller 110 may generate the execution plan 190 including by creating a corresponding dependency graph (e.g., a directed acyclic graph (DAG) and/or the like) having a plurality of nodes, at least some of which being interconnected by one or more directed edges. FIG. 2 depicts an example of a dependency graph 200, in accordance with some example embodiments.

[0082] To apply the configurations associated with the execution plan 190 to the first information technology infrastructure 130a, the information technology infrastructure controller 110 may traverse the corresponding dependency graph.
For instance, the information technology infrastructure controller 110 may perform a depth-first traversal of the dependency graph in order to determine the resources that the execution plan 190 indicates as requiring provisioning, modification, and/or de-provisioning. The information technology infrastructure controller 110 may further identify, based on the dependency graph, independent resources that may be provisioned, modified, and/or de-provisioned in parallel. It should be appreciated that the information technology infrastructure controller 110 may be configured to maximize parallelization when applying, to the first information technology infrastructure 130a, the configurations associated with the execution plan 190.); generating system configuration instructions based on the first set of component configuration instructions and the second set of component configuration instructions and the dependency graph, wherein (a) a first portion of the system configuration instructions corresponding to the first set of component configuration instructions with (b) a second portion of the system configuration instructions corresponding to the second set of component configuration instructions in accordance with the dependency graph; and provisioning the system based on the configuration instructions (EN: merging of configurations is based on the dependency graph; [0078]-[0085]). Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of HASHIMOTO in the known environment of SCHMITT in order to provision baremetal resources based on a dependency graph and thereby establish an improved system of provisioning and configuring a baremetal system ready for improvement to yield predictable results (Note MPEP 2143, Rationale D).
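The ordering-and-parallelization step HASHIMOTO describes reduces to a topological layering of the dependency graph: components with no unmet dependencies can be configured in parallel, and each layer must precede the layers that depend on it. A sketch of that layering (a Kahn-style peel-off; the component names are hypothetical and not from either reference):

```python
def parallel_layers(deps: dict[str, set[str]]) -> list[set[str]]:
    """Group components into layers: everything in a layer has all of its
    dependencies satisfied by earlier layers, so one layer's members can be
    provisioned in parallel. Raises on cycles (a valid plan must be a DAG)."""
    remaining = {node: set(d) for node, d in deps.items()}
    layers = []
    while remaining:
        # Components whose dependencies are all already provisioned.
        ready = {n for n, d in remaining.items() if not d}
        if not ready:
            raise ValueError("dependency cycle - no valid provisioning order")
        layers.append(ready)
        for n in ready:
            del remaining[n]
        for d in remaining.values():
            d -= ready  # mark the just-provisioned components as satisfied
    return layers

# Hypothetical platform definition: storage and network are independent;
# the compute node depends on both; the app depends on compute.
deps = {"storage": set(), "network": set(),
        "compute": {"storage", "network"}, "app": {"compute"}}
for i, layer in enumerate(parallel_layers(deps)):
    print(f"layer {i}: {sorted(layer)}")
```

The first layer (storage and network) runs in parallel, mirroring [0082]'s "independent resources that may be provisioned ... in parallel", while the layer order gives the precedence BOWLES supplies for parent-before-dependent configuration.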
While the SCHMITT-HASHIMOTO combination teaches provisioning an environment based on a dependency tree, the combination is silent that one configuration instruction set precedes another configuration instruction set. BOWLES teaches the concept of using a dependency tree in order to provision / configure a parent feature ahead of a dependent feature (see abstract; [0003]-[0005]; [0024]-[0027]). Thus, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of SCHMITT with HASHIMOTO and further BOWLES in order to prioritize the order of provisioning / configuration of a device.

As to claim 2, SCHMITT teaches reprovisioning the baremetal system by executing system configuration instructions ([0166]-[0171]). However, SCHMITT does not teach the particulars of the configuration algorithm in provisioning the configuration of the baremetal system. HASHIMOTO teaches retrieving a third component configuration for a third computing component in the plurality of computing components, wherein the second component configuration identifies a second dependency of the second computing component on the third computing component (EN: merging of configurations is based on the dependency graph; [0078]-[0085]); generating a second dependency graph comprising the first dependency of the second computing component on the first computing component and the second dependency of the second computing component on the third computing component (EN: merging of configurations is based on the dependency graph; [0078]-[0085]); retrieving a third set of component configuration instructions for reconfiguring the third computing component in accordance with the third component configuration (EN: merging of configurations is based on the dependency graph; [0078]-[0085]); generating second system configuration instructions based on the first set of component configuration instructions, the second set of component configuration instructions, the third set of
component configuration instructions and the dependency graph, wherein a first portion of the second system configuration instructions corresponding to the first set of component configuration instructions and a third portion of the second system configuration instructions corresponding to the third set of component configuration instructions having a second portion of the system configuration instructions in accordance with the dependency graph (EN: merging of configurations is based on the dependency graph; [0078]-[0085]); and reprovisioning the system by executing the second system configuration instructions (EN: merging of configurations is based on the dependency graph; [0078]-[0085]). Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of HASHIMOTO in the known environment of SCHMITT in order to provision baremetal resources based on a dependency graph and thereby establish an improved system of provisioning and configuring a baremetal system ready for improvement to yield predictable results (Note MPEP 2143, Rationale D).

While the SCHMITT-HASHIMOTO combination teaches provisioning an environment based on a dependency tree, the combination is silent that one configuration instruction set precedes another configuration instruction set. BOWLES teaches the concept of using a dependency tree in order to provision / configure a parent feature ahead of a dependent feature (see abstract; [0003]-[0005]; [0024]-[0027]). Thus, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of SCHMITT with HASHIMOTO and further BOWLES in order to prioritize the order of provisioning / configuration of a device.

As to claim 3, SCHMITT teaches reprovisioning the baremetal system by executing system configuration instructions ([0166]-[0171]).
However, SCHMITT does not teach the particulars of the configuration algorithm in provisioning the configuration of the baremetal system. HASHIMOTO teaches a known provisioning algorithm for provisioning an environment by a controller by: retrieving a third component configuration for a third computing component in the plurality of computing components, wherein the second component configuration identifies a second dependency of the third computing component on the first computing component (EN: merging of configurations is based on the dependency graph; [0078]-[0085]); generating a second dependency graph comprising the first dependency of the second computing component on the first computing component and the second dependency of the third computing component on the first computing component (EN: merging of configurations is based on the dependency graph; [0078]-[0085]); retrieving a third set of component configuration instructions for reconfiguring the third computing component in accordance with the third component configuration (EN: merging of configurations is based on the dependency graph; [0078]-[0085]); generating second system configuration instructions based on the first set of component configuration instructions, the second set of component configuration instructions, the third set of component configuration instructions and the dependency graph, wherein a first portion of the second system configuration instructions corresponding to the first set of component configuration instructions is associated with both a second portion of the system configuration instructions corresponding to the second set of component configuration instructions and a third portion of the second system configuration instructions corresponding to the third set of component configuration instructions, wherein the second portion and the third portion are operative for parallel execution (EN: merging of configurations is based on the dependency graph; [0078]-[0085]); and reprovisioning the system by executing the
second system configuration instructions (EN: merging of configurations is based on the dependency graph; [0078]-[0085]). Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of HASHIMOTO in the known environment of SCHMITT in order to provision baremetal resources based on a dependency graph and thereby establish an improved system of provisioning and configuring a baremetal system ready for improvement to yield predictable results (Note MPEP 2143, Rationale D).

While the SCHMITT-HASHIMOTO combination teaches provisioning an environment based on a dependency tree, the combination is silent that one configuration instruction set precedes another configuration instruction set. BOWLES teaches the concept of using a dependency tree in order to provision / configure a parent feature ahead of a dependent feature (see abstract; [0003]-[0005]; [0024]-[0027]). Thus, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of SCHMITT with HASHIMOTO and further BOWLES in order to prioritize the order of provisioning / configuration of a device.

As to claim 4, SCHMITT teaches subsequent to generating the system configuration instructions, compiling the system configuration instructions to generate an executable file, and wherein executing the system configuration instructions comprises executing the executable file ([0125] Templates may derive boot images for applications or services that run on computing resources. The templates and images derived from templates may be used to create an application, deploy an application or service, and/or arrange resources for various system functions, which allow and/or facilitate the creation of an application.
A template may have variable parameters in files, file systems, and/or operating system images that may be overwritten with configuration options from either default settings or settings given from the controller. A template may have configuration scripts used to configure an application or other resources and it may make use of configuration variables, configuration rules, and/or default rules or variables; these scripts, variables, and/or rules may contain specific rules, scripts, or variables for specific hardware or other resource specific parameters, e.g. hypervisors (when virtual), available memory. A template may have files in the form of binary resources, compilable source code that results in binary resources or hardware or other resource specific parameters, specific sets of binary resources or source code with compile instructions for specific hardware or other resource specific parameters, e.g. hypervisors (when virtual), available memory. A template may comprise a set of information independent of what is being run on a resource. [0132] FIG. 2E shows an example template 230. A template contains all the information needed to create an application or service. The template 230 also may contain information, alternative data, files, binaries for different hardware types that provide similar or identical functionality. For example there may be a filesystem blob 232 for /usr/bin and /bin with the binaries 234 compiled for different architectures. The template 230 may also contain daemons 233 or scripts 231. The daemons 233 are binaries or scripts that may be run at boot time when the host is powered on and ready; and in some cases the daemons 233 may power APIs that may be accessible by the controller and may allow the controller to change settings of the host (and the controller may subsequently update the active system rules). 
The daemons may also be powered down and re-started through out of band management 260 or in band management 270, discussed above and below. These daemons may also power generic APIs to provide dependent services for new services (for example a generic web server api that communicates with an api that controls nginx or apache). The scripts 231 can be install scripts that may run while or after booting an image or after starting the daemon or enabling the service. [0146] If there are hardware-specific files, the controller logic will gather the hardware-specific files at step 205.4. In some cases, the file system image may contain the kernel and initramfs along with a directory that contains kernel modules (or kernel modules eventually placed into a directory). The controller logic 205 then picks the appropriate base image that is compatible at step 205.5. A base image contains operating system files that might not be specific to the application or image being derived from the template 230. Compatibility in this context means that the base image contains the files needed to turn the template into a working application. The base images may be managed outside the templates as a mechanism for saving space (and often times the base images may be the same for several applications or services). In addition, at step 205.6, the controller logic 205 picks bucket(s) with executables, source code, and hardware-specific configuration files. The template 230 may reference other files, including but not limited to configuration files, configuration file templates (which are configuration files that contain placeholders or variables that are filled with variables in the system rules 210 that may be made known in the template 230 so that the controller 200 can turn configuration templates into configuration files and may change configuration files optionally through API endpoints), binaries, and source code (that may be compiled when the image is booted). 
At step 205.7, the hardware-specific instructions corresponding to the elements picked at steps 205.4, 205.5, and 205.6 may be loaded as part of the image that is booted. The controller logic 205 derives an image from the selected components. For example, there may be a different preinstall script for a physical host versus a virtual machine, or a difference for powerpc versus x86.). Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of HASHIMOTO in the known environment of SCHMITT in order to provision baremetal resources based on a dependency graph, thereby establishing an improved system of provisioning and configuring a baremetal system ready for improvement to yield predictable results (Note MPEP 2143, Rationale D). As to claim 6, HASHIMOTO teaches subsequent to reprovisioning the system, attempting to validate the plurality of computing components; responsive to failing to validate a second component, presenting a notification corresponding to at least one of the first component configuration or the second component configuration ([0086-0092]). Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of HASHIMOTO in the known environment of SCHMITT in order to validate based on a dependency graph, thereby establishing an improved system of provisioning and configuring a baremetal system ready for improvement to yield predictable results (Note MPEP 2143, Rationale D). 
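The claim 6 flow mapped above (validate each component after reprovisioning, and surface a notification naming the implicated configuration when one fails) can be sketched in a few lines. This is an illustrative reconstruction for the reader; the component names, configuration keys, and `validate_components` helper are hypothetical and do not appear in any cited reference.

```python
# Illustrative sketch of the claim 6 flow: after reprovisioning, each
# component is validated in turn; a failure produces a notification that
# identifies the failing component's configuration. All names are hypothetical.

def validate_components(components, validate):
    """Validate each (name, config) pair; collect notifications on failure.

    `validate` is a caller-supplied predicate: config -> bool.
    """
    notifications = []
    for name, config in components:
        if not validate(config):
            notifications.append(
                f"Validation failed for {name}: check configuration {config!r}"
            )
    return notifications

# Example: the second component's config is missing a required image.
components = [
    ("component-1", {"image": "base-v2", "arch": "x86"}),
    ("component-2", {"image": None, "arch": "x86"}),
]
notes = validate_components(components, lambda cfg: cfg["image"] is not None)
print(notes[0])  # surfaces the failing component and its configuration
```

The validation predicate is deliberately pluggable, mirroring the claim language, which leaves the validation criteria to the component configurations themselves.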
As to claim 10, HASHIMOTO teaches programmatically generating the dependency graph comprises: executing code to: traverse a plurality of configurations comprising the first component configuration and the second component configuration (EN: merging / building of configuration is based on dependency graph [0078-0085]); generate a first node corresponding to the first computing component, a second node corresponding to the second computing component, and a directed edge connecting the first node to the second node indicating the first dependency ([0081] The information technology infrastructure controller 110 may generate the execution plan 190 including by creating a corresponding dependency graph (e.g., a directed acyclic graph (DAG) and/or the like) having a plurality of nodes, at least some of which being interconnected by one or more directed edges. FIG. 2 depicts an example of a dependency graph 200, in accordance with some example embodiments. [0082] To apply the configurations associated with the execution plan 190 to the first information technology infrastructure 130a, the information technology infrastructure controller 110 may traverse the corresponding dependency graph. For instance, the information technology infrastructure controller 110 may perform a depth-first traversal of the dependency graph in order to determine the resources that the execution plan 190 indicates as requiring provisioning, modification, and/or de-provisioning. The information technology infrastructure controller 110 may further identify, based on the dependency graph, independent resources that may be provisioned, modified, and/or de-provisioned in parallel. It should be appreciated that the information technology infrastructure controller 110 may be configured to maximize parallelization when applying, to the first information technology infrastructure 130a, the configurations associated with the execution plan 190.). 
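The dependency-graph technique quoted above (nodes for components, directed edges for dependencies, traversal to identify resources that may be provisioned in parallel) can be approximated with a level-order topological sort. The sketch below uses Kahn's algorithm rather than the depth-first traversal the reference mentions; the function name and node labels are illustrative assumptions, not anything from the cited publications.

```python
# Sketch of dependency-graph-ordered provisioning: a directed edge
# (parent, child) means `child` depends on `parent`. Grouping nodes by
# topological "level" (Kahn's algorithm) yields batches in which every
# node's dependencies lie in earlier batches, so each batch can be
# provisioned in parallel. Names and structure are illustrative only.

def parallel_batches(nodes, edges):
    """Return lists of nodes; each list depends only on earlier lists."""
    indegree = {n: 0 for n in nodes}
    children = {n: [] for n in nodes}
    for parent, child in edges:        # edge: child depends on parent
        children[parent].append(child)
        indegree[child] += 1
    batch = [n for n in nodes if indegree[n] == 0]
    batches = []
    while batch:
        batches.append(batch)
        next_batch = []
        for n in batch:
            for c in children[n]:
                indegree[c] -= 1
                if indegree[c] == 0:   # all of c's parents are provisioned
                    next_batch.append(c)
        batch = next_batch
    return batches

# The first component must be configured before its two dependents.
plan = parallel_batches(["net", "db", "app"], [("net", "db"), ("net", "app")])
print(plan)  # -> [['net'], ['db', 'app']]: 'db' and 'app' may run in parallel
```

Members of the same batch are the "independent resources" the quoted passage says may be provisioned, modified, and/or de-provisioned in parallel.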
Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of HASHIMOTO in the known environment of SCHMITT in order to provision baremetal resources based on a dependency graph, thereby establishing an improved system of provisioning and configuring a baremetal system ready for improvement to yield predictable results (Note MPEP 2143, Rationale D). Regarding claims 11-14 and 16, reference is made to a method that corresponds to the medium of claims 1-4 and 6 and is therefore met by the rejection of claims 1-4 and 6 above. Regarding claim 20, reference is made to a system that corresponds to the medium of claim 1 and is therefore met by the rejection of claim 1 above. HASHIMOTO teaches that the implementation is capable of being embodied in an apparatus having a data processor ([0005]). Claims 5, 8, 15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over SCHMITT (Publication 2019/0334909) in view of HASHIMOTO (Publication 2020/0186416) in view of BOWLES (Publication 2018/0287872) and in further view of MILLER (Patent 12,541,373). As to claim 5, HASHIMOTO teaches sequentially validating the plurality of computing components at least by: validating a first subgroup of computing components; and subsequent to validating the first subgroup of computing components, validating a second subgroup of computing components ([0086] Referring again to FIG. 1B, the validation engine 170 may be configured to validate the execution plan 190 before the information technology infrastructure controller 110 applies the corresponding configurations to the information technology infrastructure 130. 
In some example embodiments, the validation engine 170 may be configured to perform a multitier validation of the execution plan 190 in order to determine whether the configurations associated with the execution plan 190 satisfy one or more requirements including, for example, valid configurations, proper permissions, cost compliance, and/or the like. [0087] For instance, the validation engine 170 may perform a first tier of validation by at least determining the structural validity of the configurations associated with the execution plan 190 including, for example, the syntactic validity and/or semantic validity of the configurations associated with the execution plan 190. If the configurations associated with the execution plan 190 successfully pass the first tier of validation, the validation engine 170 may perform a second tier of validation by at least determining whether the configurations comply with one or more policies including, for example, a first policy 175a, a second policy 175b, and/or the like. The first policy 175a and/or the second policy 175b may impose limitations on the resources allocated by the configurations associated with the execution plan 190. Upon determining that the configurations associated with the execution plan 190 comply with the one or more policies, the validation engine 170 may perform a third tier of validation by at least determining whether the configurations associated with the execution plan 190 meet one or more cost quotas including, for example, a first quota 175c, a second quota 175d, and/or the like. The first quota 175c and/or the second quota 175d may impose target values and/or limits on the projected costs of the configurations associated with the execution plan 190. 
[0088] In some example embodiments, a programming code based representation of the first policy 175a, the second policy 175b, the first quota 175c, and/or the second quota 175d may be used to provide the first policy 175a, the second policy 175b, the first quota 175c, and/or the second quota 175d to the validation engine 170. Furthermore, the first policy 175a, the second policy 175b, the first quota 175c, and/or the second quota 175d may be input by the first user 145a at the first client 120a and/or the second user 145b at the second client 120b. Alternatively and/or additionally, the first policy 175a, the second policy 175b, the first quota 175c, and/or the second quota 175d may be retrieved from a repository such as, for example, the version controller 140 and/or the like. [0089] In some example embodiments, the first policy 175a, the second policy 175b, the first quota 175c, and/or the second quota 175d may be custom configured, for example, by the first user 145a and/or the second user 145b based at least on the first user 145a and/or the second user 145b having the necessary access privileges (e.g., administrative access and/or the like) for setting and/or modifying a policy at the validation engine 170. Moreover, the first policy 175a, the second policy 175b, the first quota 175c, and/or the second quota 175d may be custom configured to have limited applicability. For example, each of the first workspace 165a, the second workspace 165b, and the third workspace 165c may be associated with attributes including, for example, environment, application type, region, cloud, and/or the like. Whether a policy or a cost quota is applicable to each of the first workspace 165a, the second workspace 165b, and/or the third workspace 165c may be determined based on the corresponding attributes. 
That is, the validation engine 170 may identify the policies and/or cost quotas that are applicable to a workspace by at least filtering a broader set of policies and/or cost quotas based on the attributes of the workspace. [0090] Accordingly, the first policy 175a and/or the first quota 175c may be configured to apply only to configurations associated with a staging environment while the second policy 175b and/or the second quota 175d may be configured to apply only to configurations associated with a production environment. Alternatively and/or additionally, the first policy 175a and/or the first quota 175c may be configured to apply only to configurations associated with one portion of the first information technology infrastructure 130a (e.g., the hardware resources 135a) while the second policy 175b and/or the second quota 175d may be configured to apply only to configurations associated with a different portion of the first information technology infrastructure 130a (e.g., the network resources 135c). In some example embodiments, the execution plan 190 may be validated against requirements that are classified as advisory, mandatory, and/or semi-mandatory. For example, the first policy 175a, the second policy 175b, the first quota 175c, and/or the second quota 175d may be classified as advisory, mandatory, and/or semi-mandatory. Applying a requirement that is classified as advisory may merely trigger a notification (e.g., an informative output displayed at the first client 120a and/or the second client 120b) indicative, for example, of the configurations associated with the execution plan 190 as failing to comply with the requirement. By contrast, applying a requirement that is classified as mandatory and/or semi-mandatory may prevent the configurations associated with the execution plan 190 from being applied at the first information technology infrastructure 130a in the event the configurations fail to satisfy the requirement. 
Moreover, while advisory requirements and semi-mandatory requirements may be overridden, a mandatory requirement must be satisfied before the configurations associated with the execution plan 190 may be applied at the first information technology infrastructure 130a. [0091] In some example embodiments, the validation engine 170 may invoke an externally configured service in order to verify whether the execution plan 190 satisfies one or more externally configured policies and/or quotas. For example, the first policy 175a, the second policy 175b, the first quota 175c, and/or the second quota 175d may be configured externally by a web hook mechanism. The result of the external validation (e.g., a pass and/or fail status) may be returned to the validation engine 170 via an application programming interface (API). The one or more externally configured policies and/or quotas may also be classified as advisory, mandatory, and/or semi-mandatory. Accordingly, failure of an external policy and/or quota classified as mandatory and/or semi-mandatory may prevent the execution plan 190 from being applied at the first information technology infrastructure 130a. Contrastingly, failure of an external policy and/or quota classified as advisory may trigger instead a notification (e.g., an informative output displayed at the first client 120a and/or the second client 120b) indicative, for example, of the configurations associated with the execution plan 190 as being non-compliant. [0092] The information technology infrastructure controller 110 may apply, to the information technology infrastructure 130, the configurations associated with the first workspace 165a, the second workspace 165b, and/or the third workspace 165c by at least performing the operations included in the execution plan 190, for example, to provision, modify, and/or de-provision one or more resources at the first information technology infrastructure 130a. 
According to some example embodiments, the information technology infrastructure controller 110 may be configured to implement the execution plan 190 based at least on the execution plan 190 having been successfully validated by the validation engine 170. The validation engine 170 may be configured to provide an indication of the execution plan 190 as having been successfully or unsuccessfully validated by the validation engine 170. Alternatively and/or additionally, the validation engine 170 may provide an indication of the execution plan 190 as having passed or failed each of the first policy 175a, the second policy 175b, the first quota 175c, the second quota 175d, and/or the like. As noted, one or more of the first policy 175a, the second policy 175b, the first quota 175c, and/or the second quota 175d may be classified as advisory and/or semi-mandatory. These policies and/or quotas may be overridden and/or excluded from the validation of the execution plan 190. Alternatively, one or more of the first policy 175a, the second policy 175b, the first quota 175c, and/or the second quota 175d may be classified as mandatory. Mandatory policies and/or quotas may not be overridden and/or excluded from the validation of the execution plan 190. Instead, the configurations associated with the execution plan 190 may be required to satisfy all mandatory policies and/or quotas before the configurations may be applied at the first information technology infrastructure 130a.). Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of HASHIMOTO in the known environment of SCHMITT in order to provision baremetal resources based on a dependency graph, thereby establishing an improved system of provisioning and configuring a baremetal system ready for improvement to yield predictable results (Note MPEP 2143, Rationale D). 
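The multi-tier validation described in paragraphs [0086]-[0091] above (structural checks, then policy compliance, then cost quotas, with mandatory failures blocking the plan and advisory failures merely producing a notification) can be sketched as a small pipeline. The tier names, checks, and `validate_plan` helper are hypothetical illustrations, not an API from any cited reference.

```python
# Sketch of multi-tier plan validation: tiers run in order; a mandatory
# failure blocks the plan, while an advisory failure only notifies.
# All tier names and checks here are hypothetical examples.

def validate_plan(plan, tiers):
    """tiers: list of (name, check, mandatory). Returns (ok, notices)."""
    notices = []
    for name, check, mandatory in tiers:
        if check(plan):
            continue
        notices.append(f"{name}: non-compliant")
        if mandatory:
            return False, notices   # mandatory failure blocks the plan
    return True, notices            # advisory failures merely notify

tiers = [
    ("structural validity", lambda p: "resources" in p, True),
    ("policy compliance",   lambda p: p.get("region") == "us", True),
    ("cost quota",          lambda p: p.get("cost", 0) <= 100, False),  # advisory
]
ok, notices = validate_plan({"resources": [], "region": "us", "cost": 150}, tiers)
print(ok, notices)  # plan still passes; the advisory quota failure only notifies
```

Swapping a tier's `mandatory` flag reproduces the advisory/mandatory distinction the reference draws: the same failing check either blocks application of the plan or merely surfaces an informative notice.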
SCHMITT-HASHIMOTO-BOWLES does not explicitly teach partitioning the computing components into a first subgroup and a second subgroup. MILLER teaches a known technique of provisioning a system by partitioning the plurality of computing components into a first subgroup of computing components and a second subgroup of computing components such that the computing components within the first subgroup do not depend on other computing components not in the first subgroup and computing components within the second subgroup do not depend on other computing components within the second subgroup (via performing parallel provisioning when no resource dependencies exist in the graph) (col. 23, line 9 – col. 24, line 7; col. 26, line 27 – col. 27, line 42; col. 44, line 58 – col. 45, line 47). Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply MILLER's known technique to the device of SCHMITT-HASHIMOTO-BOWLES in order to perform parallel provisioning of an environment. As to claim 8, HASHIMOTO teaches retrieving a third component configuration for a third computing component in the plurality of computing components, the third component configuration identifying no dependency of the third computing component on the first computing component or the second computing component; and generating a second dependency graph between the first components and the second components (EN: merging of configuration is based on dependency graph [0078-0085]). 
Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of HASHIMOTO in the known environment of SCHMITT in order to provision baremetal resources based on a dependency graph, thereby establishing an improved system of provisioning and configuring a baremetal system ready for improvement to yield predictable results (Note MPEP 2143, Rationale D). SCHMITT-HASHIMOTO-BOWLES does not explicitly teach the grouping. MILLER teaches a known technique of grouping the first computing component with the second computing component in a first component group and grouping the third computing component in a second component group (via performing parallel provisioning when no resource dependencies exist in the graph, otherwise performing the provisioning sequentially, e.g. within the same group) (col. 23, line 9 – col. 24, line 7; col. 26, line 27 – col. 27, line 42; col. 44, line 58 – col. 45, line 47). Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply MILLER's known technique to the device of SCHMITT-HASHIMOTO-BOWLES in order to perform parallel provisioning of an environment. Regarding claims 15 and 18, reference is made to a method that corresponds to the medium of claims 5 and 8 and is therefore met by the rejection of claims 5 and 8 above. Claims 7 and 17 are rejected under 35 U.S.C. 
103 as being unpatentable over SCHMITT (Publication 2019/0334909) in view of HASHIMOTO (Publication 2020/0186416) in view of BOWLES (Publication 2018/0287872) and in further view of ATUR (Publication 2025/0278258). As to claim 7, HASHIMOTO teaches reprovisioning the first component and the second component comprises: generating an execution plan based on the dependency graph, the retrieved first set of component configuration instructions and the second set of component configuration instructions; and executing the execution plan ([0078-0082]). Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of HASHIMOTO in the known environment of SCHMITT in order to provision baremetal resources based on a dependency graph, thereby establishing an improved system of provisioning and configuring a baremetal system ready for improvement to yield predictable results (Note MPEP 2143, Rationale D). However, SCHMITT-HASHIMOTO-BOWLES does not explicitly teach that the reprovisioning by the execution plan includes a state machine. ATUR teaches a known technique of reprovisioning the first component and the second component using a state machine ([0026] The orchestration server system 102 executes centralized management services used to manage the edge servers 108 and baseboard units 106. Specifically, the orchestration server system 102 executes enterprise management services 110, operations support systems (OSS) 112, and one or more management servers 114 for services implemented on the edge servers 108. The orchestration server system 102 executes a deployment automation module 116 that facilitates deployment of the baseboard units 106, the edge servers 108, and the services executing on the baseboard units 106 and the edge servers 108. 
[0027] The deployment automation module 116 includes a machine initialization module 118 that detects and initializes hardware within the network environment 100. The hardware may include computing and storage devices for implementing the baseboard units 106 or the edge servers 108. For example, given a computing device configured with an IP address, the machine initialization module 118 may initialize the BIOS (basic input output system), install an operating system, configure the operating system to connect to a network and to the orchestration server system 102, and install an agent for facilitating installation of services and for performing management functions on the computing device at the instruction of the deployment automation module 116. For example, the machine initialization module 118 may use COBBLER in order to initialize the computing device. [0028] The machine initialization module 118 may also discover computing devices on a network and generate a topology of the devices, such as in the form of a directed acyclic graph (DAG). The deployment automation module 116 may then use this DAG to select computing devices for implementing network services and in order to configure a machine to receive installation of a network service. [0038] FIG. 3 is a schematic diagram of an element 300 of a network service in accordance with an embodiment of the present invention. Each entity that constitutes one of the layers 202-208 may be embodied as an element 300. Each element 300 defines functions and interfaces used by the deployment automation module 116 to deploy and manage an entity represented by an element 300. An element 300 may be an entity that is a combination of sub-elements and defines functions and interfaces for deploying and managing the combination of sub-elements. 
Accordingly, the deployment automation module 116 may invoke these interfaces and functions in order to deploy and manage an element 300 without requiring any modification of the deployment automation module 116 to adapt to or have data describing the entity represented by the element 300. [0039] For example, an element 300 may define functions and interfaces for discovering 302 the element 300 such that once the element 300 is connected by a network to the deployment automation module 116, the element 300 may be discovered and its identity, type, and other attributes may be provided to the deployment automation module 116. [0040] The element 300 may define functions and interfaces for maintaining a reference to the element 300 in an inventory 304 of elements 300 maintained by the deployment automation module 116. This may include responding to queries from the deployment automation module 116 with responses indicating availability of the element 300, e.g., whether it is assigned and operational. [0041] The element 300 may define functions and interfaces for performing life cycle management (LCM) 306 of the element 300. This may include functions and interfaces for instantiating, upgrading, scaling, restarting, or de-instantiating the element 300. [0054] In some embodiments, each element 300 may have a state and a corresponding finite state machine that defines transitions between states of the finite state machine in response to events occurring involving the element 300. Accordingly, the REST APIs 502 may include a finite state machine (FSM) manager 534 for managing the state machine of each instance of any of the elements 300.). Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of ATUR to the system in order to manage the state transitions of components when provisioning a bare metal machine. 
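The finite state machine ATUR describes for element life cycle management can be illustrated with a minimal transition table: each element holds a state, and an event is accepted only if a transition is defined for the current (state, event) pair. The states and events below are hypothetical examples chosen to mirror the LCM operations in [0041], not values taken from the reference.

```python
# Minimal life-cycle FSM sketch: states and events are hypothetical.
# An undefined (state, event) pair is rejected, which is what makes
# the machine useful for enforcing valid provisioning sequences.

class ElementFSM:
    TRANSITIONS = {
        ("discovered",   "provision"):   "provisioning",
        ("provisioning", "ready"):       "active",
        ("active",       "upgrade"):     "upgrading",
        ("upgrading",    "ready"):       "active",
        ("active",       "deprovision"): "retired",
    }

    def __init__(self):
        self.state = "discovered"

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"event {event!r} invalid in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state

fsm = ElementFSM()
for event in ("provision", "ready", "upgrade", "ready"):
    fsm.handle(event)
print(fsm.state)  # -> active
```

A per-element machine of this kind gives a controller one place to decide whether a reprovisioning step is legal given what has already happened to the component, which is the role the FSM manager plays in the quoted passage.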
Regarding claim 17, reference is made to a method that corresponds to the medium of claim 7 and is therefore met by the rejection of claim 7 above. Allowable Subject Matter Claims 9 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEWIS ALEXANDER BULLOCK JR whose telephone number is (571)272-3759. The examiner can normally be reached Monday-Friday, 9:00-5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cordelia Zecher can be reached at 571-272-7771. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199

Prosecution Timeline

Nov 15, 2023
Application Filed
Mar 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566629
SERVER AND A RESOURCE SCHEDULING METHOD FOR USE IN A SERVER
2y 5m to grant Granted Mar 03, 2026
Patent 12561185
FLEXIBLE APPLICATION PROGRAMING INTERFACE USING VERSIONING REQUEST AND RESPONSE TRANSFORMERS
2y 5m to grant Granted Feb 24, 2026
Patent 12511148
SYSTEM AND METHOD SUPPORTING HIGHLY-AVAILABLE REPLICATED COMPUTING APPLICATIONS USING DETERMINISTIC VIRTUAL MACHINES
2y 5m to grant Granted Dec 30, 2025
Patent 12493543
DYNAMIC INSTRUMENTATION TO CAPTURE CLEARTEXT FROM TRANSFORMED COMMUNICATIONS
2y 5m to grant Granted Dec 09, 2025
Patent 11487562
ROLLING RESOURCE CREDITS FOR SCHEDULING OF VIRTUAL COMPUTER RESOURCES
2y 5m to grant Granted Nov 01, 2022
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
23%
Grant Probability
79%
With Interview (+56.0%)
3y 11m
Median Time to Grant
Low
PTA Risk
Based on 65 resolved cases by this examiner. Grant probability derived from career allow rate.
