Prosecution Insights
Last updated: April 19, 2026
Application No. 17/948,119

APPARATUS AND METHOD FOR MANAGING A DISTRIBUTED SYSTEM WITH CONTAINER IMAGE MANIFEST CONTENT

Non-Final OA: §103, §112

Filed: Sep 19, 2022
Examiner: KIM, DONG U
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Spectro Cloud Inc.
OA Round: 3 (Non-Final)

Grant Probability: 87% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (610 granted / 702 resolved); above average, +31.9% vs TC avg
Interview Lift: +13.7% in resolved cases with interview (moderate lift)
Typical Timeline: 2y 10m avg prosecution; 35 currently pending
Career History: 737 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§112: 28.0% (-12.0% vs TC avg)

Based on career data from 702 resolved cases; Tech Center averages are estimates.
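The headline examiner statistics above can be reproduced from the raw counts the report gives. A minimal sketch in Python (the Tech Center baseline is back-computed from the stated +31.9% delta, not from independent data):

```python
# Reproduce the examiner-level figures shown above from the raw counts.
granted = 610
resolved = 702

career_allow_rate = granted / resolved  # fraction of resolved cases granted
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~86.9%, reported as 87%

# The "+31.9% vs TC avg" delta implies a Tech Center baseline of roughly:
implied_tc_avg = career_allow_rate - 0.319
print(f"Implied TC average: {implied_tc_avg:.1%}")  # ~55.0%
```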

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/20/2026 has been entered. Claims 1-23 are pending and are presented for examination.

Response to Arguments

Applicant's arguments filed regarding claim 3 (page 9): "Regarding Claim 3 (similarly claim 14), paragraph 0234 provides example of system components; and paragraph 0235 provides examples of system management agents." The examiner points out that paragraphs 234 and 235 merely provide examples of what system components and system management agents can be, without limiting the scope of the terms. Therefore, clear metes and bounds of the terms cannot be reasonably interpreted. The argument is not persuasive.

Applicant's arguments filed regarding claim 4 (page 10): "Regarding Claim 4 (similarly claim 15), paragraphs 0237-0238 provide examples of handling the different layers for improving efficiency of the system. It is known in the art that there can be different layers of file structure or directory structure, including a top layer and a bottom layer. Depending on whether the structure is traversed bottom-up or top-down, a person skilled in the art would understand that it is a simple implementation detail to account for the situation of the top layer or the bottom layer." The examiner points out that, regardless of the direction of traversal, the base/first layer cannot have a previous layer.
The argument is not persuasive.

Applicant's arguments filed regarding claim 9 (similarly claim 20) (page 10): "Regarding Claim 9 (similarly claim 20), this claim depends on claim 6. Applicant has amended this claim to clarify the consistency with the elements of claim 6. The support for the amendment of claims 9 and 20 can be found at least in the originally presented claims 9 and 20. No new matter has been added." The examiner points out that the 112 rejection of claim 9 has not been addressed. Furthermore, claim 9 is shown as "Currently amended"; however, no amendment distinct from the previously filed claim was present. The argument is not persuasive.

Applicant's arguments filed regarding claim 10 (page 10): "Regarding Claim 10 (similarly claims 11, 21, and 22), as described in paragraph 0228, the system reboot is performed on a node in the one or more clusters of the distributed system to update the node to be in compliance with the cluster specification update. The support for the amendment of claims 10, 11, 21, and 22 can be found at least in paragraph 0228 of the specification. No new matter has been added." The examiner points out that the 112 rejection of claim 10 has not been addressed. The argument is not persuasive.

Applicant's arguments filed regarding claim 11 (pages 11-12): "Applicant submits the combination of Haserodt and Rietschin still fails to disclose the elements of the pending claims. For example, there is no teaching of converting, by a runtime container engine of the cluster management agent, the container image manifest content into an operating system bootloader consumable disk image as a transfer media and for rebooting one or more nodes in the distributed system (emphasis added). There is no teaching of initiating, by the cluster management agent, a system reboot using the operating system bootloader consumable disk image for a node of the cluster in the one or more clusters of the distributed system (emphasis added)."

The examiner first points to amended claim 1, which recites: A method for managing a distributed system comprising one or more clusters… … operating system bootloader consumable disk image as a transfer media and for rebooting one or more nodes in the distributed system… initiating, by the cluster management agent, a system reboot using the operating system bootloader consumable disk image for a node of the cluster….

[Instant specification PGPub paragraph 3], "A virtual machine (VM) or node may be viewed as some fraction of the underlying resources provided by the cloud… Containerized applications or containers, which may take the form of compartmentalized applications that can be isolated from each other, may run on a single VM and its associated OS."

[Instant specification PGPub paragraph 248], "FIG. 8D illustrates examples of initiating a system reboot using the operating system bootloader consumable disk image for initial deployment or for upgrade according to aspects of the present disclosure. In the examples shown in FIG. 8D, for situations of initial deployment, in block 830, the method boots at a node using a bootstrap node image with a base operation system, the cluster management agent, and the runtime container engine. In block 832, the method reboots at the node using the operating system bootloader consumable disk image. For situation of upgrades, in block 834, the method reboots at the node using the operating system bootloader consumable disk image."

The examiner points out that Haserodt in view of Rietschin discloses the above limitation.
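The FIG. 8D flow quoted above (blocks 830-834) can be sketched as a minimal illustration; the `Node` class and function name below are hypothetical, not from the specification or the record:

```python
class Node:
    """Toy stand-in for a cluster node; records boot events."""
    def __init__(self):
        self.history = []
    def boot(self, image):
        self.history.append(("boot", image))
    def reboot(self, image):
        self.history.append(("reboot", image))

def bring_node_into_compliance(node, disk_image, initial_deployment):
    """Sketch of the quoted flow: blocks 830-834 of FIG. 8D."""
    if initial_deployment:
        # Block 830: boot with a bootstrap image carrying the base OS,
        # the cluster management agent, and the runtime container engine.
        node.boot("bootstrap-node-image")
        # Block 832: reboot into the bootloader consumable disk image.
        node.reboot(disk_image)
    else:
        # Block 834: upgrades reboot directly into the new disk image.
        node.reboot(disk_image)

n = Node()
bring_node_into_compliance(n, "os-bootloader-disk.img", initial_deployment=True)
print(n.history)  # [('boot', 'bootstrap-node-image'), ('reboot', 'os-bootloader-disk.img')]
```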
Haserodt teaches receiving a cluster specification update which includes a container image manifest that describes an infrastructure of the distributed system via a management agent. Furthermore, the container image manifest is converted to an OS bootloader consumable disk image as a transfer media for delivery. [Paragraph 51], An example of a user configurable cluster profile is one that can be modified by a system administrator of an associated enterprise to define one or more cluster profiles to suit the needs of the enterprise and the applications it uses. [Paragraph 15], There can be an ability to modify even closed clusters and the delivery mechanism can make changes for open and closed clusters. Each attribute in the cluster profile includes metadata identifying whether the attribute is visible on the administration user interface, and, if so, whether or not it is editable by the administrator. These attributes can be controlled in a template called a cluster profile. [Paragraph 47], The services (or applications) 212a-j can be any collaboration application providing one or more services… provisioning database service (e.g., provisioned data access), management agent service (serviceability monitoring, maintenance tests, service deployment, and/or data replication and change notification… [Paragraph 17], Other cluster profiles that are unique to a given product application can be delivered in a single product pack or package along with the product snap-in software. This can enable a product to be largely turnkey, such that loading the product pack and creating a cluster with the product's cluster profile will automatically configure the cluster for the product and install the product snap-ins.

Applicant's arguments filed regarding claim 6 (page 12): "Rietschin also uses terms like "mount a device" or "mount volume request" in his teachings.
However, upon a close review of Rietschin, Applicant submits that Rietschin fails to disclose at least the elements of "initiating deployment of a container using the container image manifest content, wherein the container image manifest content is deployed in read-only mode;" The examiner points out that Haserodt in view of Rietschin discloses the above limitation. In particular, Haserodt discloses wherein the container image manifest content is deployed in read-only mode. [Paragraph 15], There can be an ability to modify even closed clusters and the delivery mechanism can make changes for open and closed clusters. Each attribute in the cluster profile includes metadata identifying whether the attribute is visible on the administration user interface, and, if so, whether or not it is editable by the administrator. Therefore, the argument is not persuasive.

Applicant's arguments filed regarding claim 8 (page 13): "Rietschin, Applicant submits that Rietschin fails to disclose at least the elements of "wherein the mounting point specification includes temporary mount points for mounting a mount point directory as a temporary file storage in memory; or persistent mount points for mounting the mount point directory as a persistent directory from a separate configuration partition" The examiner points out that Haserodt in view of Rietschin discloses the above limitation. In particular, Rietschin discloses persisting changes made to layers of the container(s). Furthermore, Rietschin discloses indicators of whether a volume is accessible, previously generated markings, etc. Therefore, the argument is not persuasive.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim(s) 3-5, 9-11, 14-16 and 20-22 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 3 (similarly claim 14) recites "system component layer configured to include system components" and "system management agents". It is unclear which system components and system management agents are being included; for example, system components/management agents of one or more clusters, system components/management agents of a container, etc. Thus, clear metes and bounds of the term "system" cannot be reasonably interpreted.

Claim 4 (similarly claim 15) recites "one or more corresponding previous file structures and/or directory structures under previous layer(s)". It is unclear how an operating system configured to include a base operating system can have previous layer(s). For example, the first layer would not have a previous layer since it is the first layer; only the second layer onward would have previous layer(s).

Claim 9 (similarly claim 20) recites "converting the container image manifest content is performed one time at each node with the container image manifest content." It is unclear how the container image manifest content is converted with the container image manifest content. Based on the limitation, nothing would be converted; rather, the content would remain the same, since "container image manifest content" is converted with "the container image manifest content".

Claim 10 (similarly claims 11, 21 and 22) recites "rebooting, at the node, using the operating system bootloader consumable disk image." It is unclear whether the node is being rebooted or a container is being rebooted at the node, etc.

Claims 4, 5, 15 and 16 are rejected based on the rejection of the claims from which they depend.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-6, 8-9, 12-17, 19-20 and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haserodt et al. (Pub 20160285957) (hereafter Haserodt) in view of Rietschin et al. (Pub 20210182078) (hereafter Rietschin).

As per claim 1, Haserodt teaches: A method for managing a distributed system comprising one or more clusters and each cluster comprising at least one node, the method comprises: ([Fig.
1] [Paragraph 15], Regarding open and closed clusters, application restrictions or lack thereof can be provided for multiple machines at a cluster level rather than a server level and is differentiated from other approaches by how it is configured and distributed. There can be an ability to modify even closed clusters and the delivery mechanism can make changes for open and closed clusters. Each attribute in the cluster profile includes metadata identifying whether the attribute is visible on the administration user interface, and, if so, whether or not it is editable by the administrator. These attributes can be controlled in a template called a cluster profile.) receiving, by a cluster management agent, a cluster specification update of a cluster in the one or more cluster, wherein the cluster specification update includes a container image manifest content that describes an infrastructure of the distributed system; ([Paragraph 51], An example of a user configurable cluster profile is one that can be modified by a system administrator of an associated enterprise to define one or more cluster profiles to suit the needs of the enterprise and the applications it uses. [Paragraph 15], There can be an ability to modify even closed clusters and the delivery mechanism can make changes for open and closed clusters. Each attribute in the cluster profile includes metadata identifying whether the attribute is visible on the administration user interface, and, if so, whether or not it is editable by the administrator. These attributes can be controlled in a template called a cluster profile. [Paragraph 17], Other cluster profiles that are unique to a given product application can be delivered in a single product pack or package along with the product snap-in software. 
This can enable a product to be largely turnkey, such that loading the product pack and creating a cluster with the product's cluster profile will automatically configure the cluster for the product and install the product snap-ins. [Paragraph 46], FIG. 2 shows a typical cluster element 200, or server. A cluster 122 includes a plurality of cluster elements 200 or servers. A cluster element can be a single virtual machine, such as a process virtual machine (or application virtual machine or managed runtime environment), with the surround components and a JEE container running on it. [Paragraph 47], The services (or applications) 212a-j can be any collaboration application providing one or more services… provisioning database service (e.g., provisioned data access), management agent service (serviceability monitoring, maintenance tests, service deployment, and/or data replication and change notification… [Paragraph 51], An example of a user configurable cluster profile is one that can be modified by a system administrator of an associated enterprise to define one or more cluster profiles to suit the needs of the enterprise and the applications it uses. [Paragraph 52], As noted, the cluster can define a data grid that can be used by the installable software module installed on the cluster. The data grid is shared by all cluster element nodes in the cluster. The attributes for the grid can be defined in the cluster profile. The grid will be created and configured when cluster elements are added to the cluster. For high availability, two of the cluster elements can be designated to run the lookup services by the cluster element manager. A data grid application programming interface can be provided for installable software modules to create and access processing units or PUs and spaces. A processing unit container is a component implemented by the user and deployed and managed by a service grid. 
The data grid application programming interface can also provide access to use simple name/value pair PUs. [Paragraph 19], The present disclosure can provide a general purpose application platform able to configure and define multiple server clusters, each with its own cluster attributes and installed applications (e.g., snap-in services). Cluster attributes can be configured via cluster profiles to allow for greater flexibility in the use of a general purpose application platform. This can allow server clusters for product applications with specific cluster resource and/or configuration needs.) converting, by a runtime container engine of the cluster management agent, the container image manifest content into an operating system bootloader consumable disk image as a transfer media and for rebooting one or more nodes in the distributed system; and initiating, by the cluster management agent, a system reboot using the operating system bootloader consumable disk image for a node of the cluster in the one or more clusters of the distributed system to update the node to be in compliance with the cluster specification update. ([Paragraph 17], Other cluster profiles that are unique to a given product application can be delivered in a single product pack or package along with the product snap-in software. [Paragraph 18], As will be appreciated, the management application can allow or enable an administrator to create his or her own cluster profiles based on the needs for the administrator's enterprise. This ability can provide enhanced flexibility and configurability for the administrator to meet the unique needs and requirements of the enterprise. [Paragraph 51], An example of a user configurable cluster profile is one that can be modified by a system administrator of an associated enterprise to define one or more cluster profiles to suit the needs of the enterprise and the applications it uses. 
[Paragraph 42], The first and second clusters 122a-b each provides a set of services or applications to other components of the voice portal platform 100. Each of the first and second clusters 122a and 122b have a common set of cluster definitions for its member cluster elements (e.g., servers), but the first and second clusters 122a-b have different sets of cluster definitions when compared to each other. Stated another way, the cluster profiles and attribute definitions for different clusters 122 can be heterogeneous (or different) while the member element profiles and attribute definitions within a selected cluster 122 are homogeneous (or identical) because the descriptions in the cluster profile (e.g., attribute definitions) apply to every member cluster element of the corresponding cluster 122. [Paragraph 56], The cluster profile data 308 can be of many attribute types. Examples include ClusterTypeVersion (or the version of the cluster type or profile), MinNodes (or the minimum number of cluster elements requirements in a cluster), MaxNodes (or the maximum number of cluster elements allowed in a cluster), ReqCPUs (or the minimum number of required virtual Central Processing Units (“CPUs”) per element), ReqRAM (or the minimum amount of RAM required (GB) per element), ReqDisk (or the minimum amount of disk space required (GB) per element). However, Haserodt does not explicitly disclose converting, by a runtime container engine of the cluster management agent, the container image manifest content into an operating system bootloader consumable disk image as a transfer media and for rebooting one or more nodes in the distributed system; and initiating, by the cluster management agent, a system reboot using the operating system bootloader consumable disk image for a node of the cluster in the one or more clusters of the distributed system. 
Rietschin teaches converting, by a runtime container engine of the cluster management agent, the container image manifest content into an operating system bootloader consumable disk image as a transfer media and for rebooting one or more nodes in the distributed system; and initiating, by the cluster management agent, a system reboot using the operating system bootloader consumable disk image for a node of the cluster in the one or more clusters of the distributed system. Rietschin also teaches a container image manifest content and receiving, by a cluster management agent, a cluster specification update of a cluster in the one or more cluster. ([Paragraph 44], Initially, at step 410, a container instance can be created on a host computing device. Such a creation of a container instance can include the reservation of memory, the establishment of underlying hardware and communication functionality, and the like. Subsequently, at step 415, a hypervisor, such as based on instructions and/or parameters provided by container manager processes, can instantiate firmware to execute within the container instance. As indicated in FIG. 4, step 415 can correspond to the exemplary system 201 shown in FIG. 2a and described in detail above. [Paragraph 20], For example, if the exemplary application 152, executing within the container environment 150, were to edit the exemplary file 141, as illustrated by the edit action 155, such a modification can result in a file 144, representing an edited version of the file 141, being part of the container file system 171. [Paragraph 22], As indicated, in some instances, it can be desirable to allow processes executing within the container environment 150 to modify aspects of the container operating system 160, including aspects that can be established during early portions of the boot process of the container operating system 160. 
However, the layered file system presented by the container operating system 160 can be established at a much later point during the boot process of the container operating system 160. For example, the container file system drivers 170 and virtual file system drivers 180 may not establish the layered file systems 171 and 181 until after the kernel of the container operating system 160 has executed. [Paragraph 23], To enable processes executing within the container environment 150 to modify aspects of the container operating system 160, including aspects that can be established during the early portions of the boot process of the container operating system 160, a layered composite boot device and file system can be utilized during the booting of the container operating system 160. Turning to FIG. 2a, the exemplary system 201 shown therein illustrates an exemplary initiation of a booting of an operating system in a container file system virtualization environment, such as the exemplary container 150. More specifically, the exemplary host computing environment 110 can comprise a hypervisor, such as the exemplary hypervisor 211, or other like hardware and/or software capabilities. The exemplary hypervisor 211 can utilize data from a container operating system image, such as the exemplary container operating system image 212, or other like container data stored on the storage media 111 of the exemplary host computing environment 110, to start booting an operating system within the exemplary container 150. 
For example, as illustrated by the exemplary system 201, the hypervisor 211 can cause container firmware 220 to execute within the container environment 150, in the form of the executing container firmware 221, as illustrated by the action 219… A dynamic container operating system image 212, for example, can utilize the same executable instructions for some or all of the relevant portions of the container operating system image 212 as are utilized to boot the operating system of the host computing device 110 itself. In such an instance, changes to the executable instructions that boot the operating system of the host computing device 110 can necessarily result in changes to the container operating system image 212.)

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Haserodt, wherein a distributed cluster environment having a cluster management agent receives a cluster specification update that includes container image manifest content to configure, modify, or update various elements of the cluster environment (i.e., cluster, VM, container, application, etc.) to be in compliance with the cluster specification update, with the teachings of Rietschin, wherein the container image manifest is converted into an OS bootloader disk image for rebooting node(s) in the distributed system. This would enhance the teachings of Haserodt because converting the manifest into a bootloader disk image allows modification of a container operating system during the early portions of the boot process of the container OS, and allows a plurality of container layers (i.e., overlays) to be applied as the container OS boots; thus a composite container can be built or updated from various container layers (i.e., drivers, applications, kernels, etc.) to ensure the node is updated to be in compliance.
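Neither reference spells out an implementation of the conversion step. Purely as an illustrative sketch of what "converting the container image manifest content into an operating system bootloader consumable disk image as a transfer media" could look like, with all names and the byte-concatenation model being hypothetical:

```python
import hashlib

def convert_manifest_to_disk_image(manifest_layers):
    """Illustrative only: flatten an ordered list of container layer blobs
    into a single byte stream standing in for a bootloader-consumable disk
    image, plus a digest usable to verify the transfer media on delivery."""
    image = b"".join(manifest_layers)           # apply layers in manifest order
    digest = hashlib.sha256(image).hexdigest()  # integrity check for transfer
    return image, digest

# Hypothetical manifest: ordered layer contents, base layer first.
layers = [b"base-os", b"k8s-distro", b"host-agent"]
image, digest = convert_manifest_to_disk_image(layers)
print(len(image), digest[:12])
```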
As per claim 2, rejection of claim 1 is incorporated: Haserodt teaches wherein the cluster specification update is received via a local application program interface (API) of the cluster management agent in the absence of internet access or via a communication channel through an internet connection with the cluster management agent. ([Fig. 1] [Paragraph 15], Regarding open and closed clusters, application restrictions or lack thereof can be provided for multiple machines at a cluster level rather than a server level and is differentiated from other approaches by how it is configured and distributed. There can be an ability to modify even closed clusters and the delivery mechanism can make changes for open and closed clusters. Each attribute in the cluster profile includes metadata identifying whether the attribute is visible on the administration user interface, and, if so, whether or not it is editable by the administrator. These attributes can be controlled in a template called a cluster profile. [Paragraph 45], The other public networks 128 and enterprise network 132 can be any network, such as the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), the Public Switched Telephone Network (PSTN), a packet-switched network, a circuit-switched network, a cellular network, and/or a combination thereof. In one application, the other public networks 128 include the Internet, and the enterprise network 132 is a trusted or private network, such as a Local Area Network (LAN). [Paragraph 51], An example of a user configurable cluster profile is one that can be modified by a system administrator of an associated enterprise to define one or more cluster profiles to suit the needs of the enterprise and the applications it uses.) 
As per claim 3, rejection of claim 1 is incorporated: Rietschin teaches wherein the container image manifest content includes descriptions of one or more layers of an overlay file system of a container, comprising: an operating system layer configured to include a base operating system of the cluster in the one or more clusters of the distributed system; a distributed system layer configured to include a distributed system clustering software; a system component layer configured to include system components; a host agent layer configured to include system management agents; and an original equipment manufacturer (OEM) customized layer configured to include OEM customization information. ([Paragraph 18], For example, as illustrated by the exemplary system 100 of FIG. 1, the exemplary container operating system 160 can comprise a layered file system in the form of the container file system 171, which can act as a primary layer, or “overlay”, in combination with the host file system 181, which can act as a secondary layer, or “underlay”. [Paragraph 19], The file systems referenced herein can be any of the known, existing file systems, such as the NT file system (NTFS), the Apple file system (APFS), the UNIX file system (UFS), and the like, or other file systems. Similarly, the file system drivers can be the corresponding drivers, filters, mini-filters, and other like drivers that can implement such file systems. Thus, for example, if the host file system 131 is NTFS, then the host file system drivers 130 can be the relevant NTFS drivers. 
Within the exemplary container environment 150, however, the host file system 181 can be implemented in a slightly different manner so as to provide access to the host file system from within a file system virtualization environment… [Paragraph 20], For example, if the exemplary application 152, executing within the container environment 150, were to edit the exemplary file 141, as illustrated by the edit action 155, such a modification can result in a file 144, representing an edited version of the file 141, being part of the container file system 171. [Paragraph 22], As indicated, in some instances, it can be desirable to allow processes executing within the container environment 150 to modify aspects of the container operating system 160, including aspects that can be established during early portions of the boot process of the container operating system 160. However, the layered file system presented by the container operating system 160 can be established at a much later point during the boot process of the container operating system 160. For example, the container file system drivers 170 and virtual file system drivers 180 may not establish the layered file systems 171 and 181 until after the kernel of the container operating system 160 has executed. Indeed, in some instances, the kernel of the operating system 160 can be responsible for executing the relevant drivers 170 and 180. Accordingly, if, for example, an application or process executing within the container 150 changed an aspect of the container operating system 160, such a change could be stored in the container file system 171, such as in the manner detailed previously. 
[Paragraph 23], To enable processes executing within the container environment 150 to modify aspects of the container operating system 160, including aspects that can be established during the early portions of the boot process of the container operating system 160, a layered composite boot device and file system can be utilized during the booting of the container operating system 160. Turning to FIG. 2a, the exemplary system 201 shown therein illustrates an exemplary initiation of a booting of an operating system in a container file system virtualization environment, such as the exemplary container 150. More specifically, the exemplary host computing environment 110 can comprise a hypervisor, such as the exemplary hypervisor 211, or other like hardware and/or software capabilities. The exemplary hypervisor 211 can utilize data from a container operating system image, such as the exemplary container operating system image 212, or other like container data stored on the storage media 111 of the exemplary host computing environment 110, to start booting an operating system within the exemplary container 150. For example, as illustrated by the exemplary system 201, the hypervisor 211 can cause container firmware 220 to execute within the container environment 150, in the form of the executing container firmware 221, as illustrated by the action 219. The information and executable instructions stored in a container operating system image, such as the exemplary container operating system image 212, can be static, such that they are not affected by changes to the host computing environment 110, or they can be dynamic in that changes to the host computing environment 110 can result in changes to some or all of the container operating system image 212. 
A dynamic container operating system image 212, for example, can utilize the same executable instructions for some or all of the relevant portions of the container operating system image 212 as are utilized to boot the operating system of the host computing device 110 itself. In such an instance, changes to the executable instructions that boot the operating system of the host computing device 110 can necessarily result in changes to the container operating system image 212.) Haserodt also teaches one or more clusters and an original equipment manufacturer (OEM) customized layer configured to include OEM customization information. ([Paragraph 51], An example of a user configurable cluster profile is one that can be modified by a system administrator of an associated enterprise to define one or more cluster profiles to suit the needs of the enterprise and the applications it uses. [Paragraph 15], There can be an ability to modify even closed clusters and the delivery mechanism can make changes for open and closed clusters. Each attribute in the cluster profile includes metadata identifying whether the attribute is visible on the administration user interface, and, if so, whether or not it is editable by the administrator. These attributes can be controlled in a template called a cluster profile. [Paragraph 17], Other cluster profiles that are unique to a given product application can be delivered in a single product pack or package along with the product snap-in software. This can enable a product to be largely turnkey, such that loading the product pack and creating a cluster with the product's cluster profile will automatically configure the cluster for the product and install the product snap-ins.) 
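For illustration, the five-layer manifest structure recited in claim 3 and the overlay/underlay shadowing described in Rietschin's Paragraph 18 can be sketched as a minimal, hypothetical model (the layer names, data shapes, and file paths below are illustrative only and are not taken from the claims or the cited references):

```python
# Hypothetical sketch of the claimed five-layer container image manifest.
# Layer names and example paths are illustrative, not from the record.
LAYER_ORDER = [
    "operating_system",    # base operating system of the cluster
    "distributed_system",  # distributed system clustering software
    "system_component",    # system components
    "host_agent",          # system management agents
    "oem_custom",          # OEM customization information
]

def compose_overlay(manifest: dict) -> dict:
    """Overlay each layer's files over the previous layers, bottom-up.

    A later (higher) layer shadows files of the same path in lower
    layers, mirroring Rietschin's primary-layer/"overlay" over
    secondary-layer/"underlay" lookup.
    """
    root = {}
    for name in LAYER_ORDER:
        root.update(manifest.get(name, {}))  # higher layer wins on conflict
    return root

manifest = {
    "operating_system": {"/etc/os-release": "base"},
    "host_agent": {"/usr/bin/agent": "agent-v1"},
    "oem_custom": {"/etc/os-release": "oem-branded"},  # shadows the base file
}
root = compose_overlay(manifest)  # root["/etc/os-release"] == "oem-branded"
```

The point of the sketch is only the lookup order: the OEM layer, being higher in the stack, shadows the base operating system layer's copy of the same path.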
As per claim 4, rejection of claim 3 is incorporated: Rietschin teaches wherein each layer in the container points to an environment independent archive file that includes a set of file structures and/or directory structures configured to overlay with one or more corresponding previous file structures and/or directory structures under previous layer(s). ([Paragraph 18], According to one aspect, the file system of the exemplary container operating system 160 can be a layered file system that can enable applications executing within the container environment 150, such as the exemplary application 152, to access some or all of the same files of the host file system 131, such as, for example, the exemplary files 141, 142 and 143, except that any changes or modifications to those files can remain only within the container environment 150. For example, as illustrated by the exemplary system 100 of FIG. 1, the exemplary container operating system 160 can comprise a layered file system in the form of the container file system 171, which can act as a primary layer, or “overlay”, in combination with the host file system 181, which can act as a secondary layer, or “underlay”. [Paragraph 19], The file systems referenced herein can be any of the known, existing file systems, such as the NT file system (NTFS), the Apple file system (APFS), the UNIX file system (UFS), and the like, or other file systems. Similarly, the file system drivers can be the corresponding drivers, filters, mini-filters, and other like drivers that can implement such file systems… [Paragraph 49], Turning to FIG. 5, the exemplary flow diagram 500 shown therein illustrates an exemplary series of steps by which the composite device and the composite file system can be implemented to provide the above described layering. Initially, at step 510, an access request can be received. 
Such an access request can be a file-based access request, a folder-based or directory-based access request, or a device-based access request… Analogous files-based or directory-based requests can be directed to files and/or directories of the composite file system. [Paragraph 53], If the access request is a device-based request, such as a request to mount a volume, processing can proceed to step 530 and the composite device can send such a request to each layer's file system. In such a manner, the relevant volume can be mounted at each layer such that subsequent directory enumerations, or file access requests can encompass both an underlying base layer provided by the host computing environment, which is not changeable from the container environment, and a primary, or overlay, layer accessible from within the container environment and persisting changes made within the container environment.) As per claim 5, rejection of claim 3 is incorporated: Rietschin teaches further comprising: sharing common content archive files of the one or more layers of the overlay file system of the container in a cache of the cluster managed by the cluster management agent among multiple container image manifest content. ([Paragraph 18], According to one aspect, the file system of the exemplary container operating system 160 can be a layered file system that can enable applications executing within the container environment 150, such as the exemplary application 152, to access some or all of the same files of the host file system 131, such as, for example, the exemplary files 141, 142 and 143, except that any changes or modifications to those files can remain only within the container environment 150. For example, as illustrated by the exemplary system 100 of FIG. 
1, the exemplary container operating system 160 can comprise a layered file system in the form of the container file system 171, which can act as a primary layer, or “overlay”, in combination with the host file system 181, which can act as a secondary layer, or “underlay”. [Paragraph 50], Turning back to the exemplary flow diagram 500, if the access request is a file open request, then, at step 515, the requested file can be read from the highest layer where the file exists… Such metadata can be cached in one or more tables, such as file tables implemented by the composite file system, or other analogous databases or data structures. Subsequently, the relevant processing… ) Haserodt also teaches ([Paragraph 46], The cluster element 200 can include data management 204, such as a data grid (e.g., a replicated in-memory data cache used to store runtime and semi-persistent data and share data across cluster elements), to allow services or applications to store data and have it be accessible to other elements in the cluster and an application container 208.) As per claim 6, rejection of claim 1 is incorporated: Rietschin teaches wherein converting the container image manifest content into the operating system bootloader consumable disk image by the runtime container engine comprises: initiating deployment of a container using the container image manifest content, wherein the container image manifest content is deployed in read-only mode; constructing an overlay file system of the container to generate a container root file system; and mounting the container root file system to generate the operating system bootloader consumable disk image. 
([Paragraph 18], According to one aspect, the file system of the exemplary container operating system 160 can be a layered file system that can enable applications executing within the container environment 150, such as the exemplary application 152, to access some or all of the same files of the host file system 131, such as, for example, the exemplary files 141, 142 and 143, except that any changes or modifications to those files can remain only within the container environment 150. For example, as illustrated by the exemplary system 100 of FIG. 1, the exemplary container operating system 160 can comprise a layered file system in the form of the container file system 171, which can act as a primary layer, or “overlay”, in combination with the host file system 181, which can act as a secondary layer, or “underlay”. [Paragraph 53], If the access request is a device-based request, such as a request to mount a volume, processing can proceed to step 530 and the composite device can send such a request to each layer's file system. In such a manner, the relevant volume can be mounted at each layer such that subsequent directory enumerations, or file access requests can encompass both an underlying base layer provided by the host computing environment, which is not changeable from the container environment, and a primary, or overlay, layer accessible from within the container environment and persisting changes made within the container environment. [Paragraph 23], The exemplary hypervisor 211 can utilize data from a container operating system image, such as the exemplary container operating system image 212, or other like container data stored on the storage media 111 of the exemplary host computing environment 110, to start booting an operating system within the exemplary container 150.) 
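The three conversion steps recited in claim 6 (read-only deployment of the manifest content, construction of the overlay root file system, and mounting into a bootloader-consumable image) can be illustrated with a minimal, hypothetical sketch; the function names and the byte-listing "image" format are assumptions for illustration, not the applicant's or the references' implementation:

```python
from types import MappingProxyType

def deploy_read_only(layers: dict) -> MappingProxyType:
    """Step 1: deploy the container image manifest content read-only."""
    return MappingProxyType(dict(layers))  # read-only view of the layers

def build_rootfs(layers) -> dict:
    """Step 2: construct the overlay file system into a container root FS.

    Later layers shadow earlier ones on path conflicts.
    """
    root = {}
    for layer in layers.values():
        root.update(layer)
    return root

def mount_as_disk_image(rootfs: dict) -> bytes:
    """Step 3: 'mount' the root FS into a bootloader-consumable blob.

    Here the image is just a sorted, newline-joined path listing.
    """
    return "\n".join(f"{p}:{c}" for p, c in sorted(rootfs.items())).encode()

layers = deploy_read_only({
    "os": {"/boot/vmlinuz": "kernel"},
    "agent": {"/usr/bin/agent": "v1"},
})
image = mount_as_disk_image(build_rootfs(layers))
```

The `MappingProxyType` wrapper stands in for the claimed read-only deployment: any attempt to mutate the deployed manifest layers raises a `TypeError`.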
Haserodt teaches wherein the container image manifest content is deployed in read-only mode ([Paragraph 15], There can be an ability to modify even closed clusters and the delivery mechanism can make changes for open and closed clusters. Each attribute in the cluster profile includes metadata identifying whether the attribute is visible on the administration user interface, and, if so, whether or not it is editable by the administrator.) As per claim 8, rejection of claim 6 is incorporated: Rietschin teaches further comprising: wherein environment independent archive files in a layer of the container include a mounting point specification and wherein the mounting point specification includes: temporary mount points for mounting a mount point directory as a temporary file storage in memory; or persistent mount points for mounting the mount point directory as a persistent directory from a separate configuration partition. ([Paragraph 18], According to one aspect, the file system of the exemplary container operating system 160 can be a layered file system that can enable applications executing within the container environment 150, such as the exemplary application 152, to access some or all of the same files of the host file system 131, such as, for example, the exemplary files 141, 142 and 143, except that any changes or modifications to those files can remain only within the container environment 150. For example, as illustrated by the exemplary system 100 of FIG. 1, the exemplary container operating system 160 can comprise a layered file system in the form of the container file system 171, which can act as a primary layer, or “overlay”, in combination with the host file system 181, which can act as a secondary layer, or “underlay”. [Paragraph 53], If the access request is a device-based request, such as a request to mount a volume, processing can proceed to step 530 and the composite device can send such a request to each layer's file system. 
In such a manner, the relevant volume can be mounted at each layer such that subsequent directory enumerations, or file access requests can encompass both an underlying base layer provided by the host computing environment, which is not changeable from the container environment, and a primary, or overlay, layer accessible from within the container environment and persisting changes made within the container environment. [Paragraph 23], The exemplary hypervisor 211 can utilize data from a container operating system image, such as the exemplary container operating system image 212, or other like container data stored on the storage media 111 of the exemplary host computing environment 110, to start booting an operating system within the exemplary container 150. [Paragraph 39], The container operating system configuration data 291, having been changed by processes executing within the container environment 150, can be stored within the container file system 171, as persisted in the sandbox 149.) As per claim 9, rejection of claim 6 is incorporated: Rietschin teaches wherein the converting the container image manifest content is performed one time at each node with the container image manifest content. ([Paragraph 44], Initially, at step 410, a container instance can be created on a host computing device. Such a creation of a container instance can include the reservation of memory, the establishment of underlying hardware and communication functionality, and the like. Subsequently, at step 415, a hypervisor, such as based on instructions and/or parameters provided by container manager processes, can instantiate firmware to execute within the container instance. As indicated in FIG. 4, step 415 can correspond to the exemplary system 201 shown in FIG. 2a and described in detail above. 
[Paragraph 23], To enable processes executing within the container environment 150 to modify aspects of the container operating system 160, including aspects that can be established during the early portions of the boot process of the container operating system 160, a layered composite boot device and file system can be utilized during the booting of the container operating system 160. Turning to FIG. 2a, the exemplary system 201 shown therein illustrates an exemplary initiation of a booting of an operating system in a container file system virtualization environment, such as the exemplary container 150. More specifically, the exemplary host computing environment 110 can comprise a hypervisor, such as the exemplary hypervisor 211, or other like hardware and/or software capabilities. The exemplary hypervisor 211 can utilize data from a container operating system image, such as the exemplary container operating system image 212, or other like container data stored on the storage media 111 of the exemplary host computing environment 110, to start booting an operating system within the exemplary container 150. For example, as illustrated by the exemplary system 201, the hypervisor 211 can cause container firmware 220 to execute within the container environment 150, in the form of the executing container firmware 221, as illustrated by the action 219… A dynamic container operating system image 212, for example, can utilize the same executable instructions for some or all of the relevant portions of the container operating system image 212 as are utilized to boot the operating system of the host computing device 110 itself. In such an instance, changes to the executable instructions that boot the operating system of the host computing device 110 can necessarily result in changes to the container operating system image 212.) As per claims 12-17 and 19-20, these are apparatus claims corresponding to the method claims 1-6 and 8-9. 
Therefore, rejected based on similar rationale. As per claim 23, this is a non-transitory computer-readable medium claim corresponding to the method claim 1. Therefore, rejected based on similar rationale. Claim(s) 7 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haserodt in view of Rietschin and further in view of AWS Elastic Beanstalk Developers Guide API Version 2010-10-01 (hereafter AWS). As per claim 7, rejection of claim 6 is incorporated: Rietschin teaches wherein initiating deployment of a container using the container image manifest content comprises: providing a current container image manifest content and a previously proven container image manifest content to support failsafe upgrades with rollback capability; and retrieving environment independent archive files in a layer of the container automatically from a container registry configured in the runtime container engine in response to the environment independent archive files that are not found in a cache of the cluster managed by the cluster management agent. ([Paragraph 18], For example, as illustrated by the exemplary system 100 of FIG. 1, the exemplary container operating system 160 can comprise a layered file system in the form of the container file system 171, which can act as a primary layer, or “overlay”, in combination with the host file system 181, which can act as a secondary layer, or “underlay”. [Paragraph 19], The file systems referenced herein can be any of the known, existing file systems, such as the NT file system (NTFS), the Apple file system (APFS), the UNIX file system (UFS), and the like, or other file systems. Similarly, the file system drivers can be the corresponding drivers, filters, mini-filters, and other like drivers that can implement such file systems. Thus, for example, if the host file system 131 is NTFS, then the host file system drivers 130 can be the relevant NTFS drivers. 
[Paragraph 21], As utilized herein, the term “sandbox” means one or more files, databases, structured storage, or other like digital data repository that can store the relevant data necessary to implement the container file system 171. [Paragraph 50], Turning back to the exemplary flow diagram 500, if the access request is a file open request, then, at step 515, the requested file can be read from the highest layer where the file exists. Thus, for example, if the file exists in the primary file system, that file can be provided in response to the request received at step 510. Conversely, if the file does not exist in the primary file system, the secondary file system can be checked for the file, and if the file is located in the secondary file system, it can be provided from there in response to the request.) Haserodt also teaches ([Paragraph 7], The present disclosure can provide a mechanism and system for heterogeneous clusters of a general purpose application platform that delivers cluster definitions such that the different sets of applications and associated configurations for each cluster are effectively attributes. [Paragraph 15], In one application, the disclosure can provide for heterogeneous clusters of a general purpose application platform that deliver cluster definitions such that the different sets of applications and associated configurations for each cluster are effectively attributes… [Paragraph 46], The cluster element 200 can include data management 204, such as a data grid (e.g., a replicated in-memory data cache used to store runtime and semi-persistent data and share data across cluster elements), to allow services or applications to store data and have it be accessible to other elements in the cluster and an application container 208. [Paragraph 15], A cluster can be defined by many different attributes, including without limitation cluster type and version…) Haserodt and Rietschin disclose versioning of cluster/profile. 
[Haserodt paragraph 56] [Rietschin paragraph 20] However, Haserodt and Rietschin do not explicitly disclose providing a current container image manifest content and a previously proven container image manifest content to support failsafe upgrades with rollback capability. AWS teaches providing a current container image manifest content and a previously proven container image manifest content to support failsafe upgrades with rollback capability. ([Page 128], This option tells Elastic Beanstalk to validate the environment manifest and configuration files in your source bundle when you create the application version. The EB CLI sets this flag automatically when you have an environment manifest in your project directory. [Page 219], You can include a YAML formatted environment manifest in the root of your application source bundle to configure the environment name, solution stack and environment links (p. 128) to use when creating your environment. An environment manifest uses the same format as Saved Configurations (p. 215)… AWSConfigurationTemplateVersion: 1.1.0.0… [Page 562], Elastic Beanstalk uses Amazon EC2 Container Service to coordinate container deployments to multicontainer Docker environments. Amazon ECS provides tools to manage a cluster of instances running Docker containers. Elastic Beanstalk takes care of Amazon ECS tasks including cluster creation, task definition and execution. [Page 669], You can also deploy an existing application to an existing environment if, for instance, you need to roll back to a previous application version. [Page 101], After a failed deployment, check the health of the instances in your environment (p. 295) for information about the cause of the failure, and perform another deployment with a fixed or known good version of your application to roll back.) 
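The failsafe-upgrade-with-rollback behavior relied on from the AWS passages above can be illustrated with a minimal, hypothetical sketch; the class and field names are assumptions for illustration and do not come from the claims or any cited reference:

```python
class FailsafeUpgrader:
    """Tracks the current and the previously proven manifest content.

    On a failed health check the node falls back to the proven
    manifest, mirroring the "known good version" rollback quoted
    from the AWS Elastic Beanstalk guide above.
    """

    def __init__(self, proven_manifest):
        self.proven = proven_manifest   # last known-good manifest content
        self.current = proven_manifest  # manifest the node is running

    def upgrade(self, candidate, health_check) -> bool:
        self.current = candidate        # deploy the candidate manifest
        if health_check(candidate):     # deployment verified healthy
            self.proven = candidate     # promote candidate to known-good
            return True
        self.current = self.proven      # failsafe rollback to known-good
        return False
```

Keeping both manifests available is what makes the upgrade failsafe: a failed health check never leaves the node without a bootable, previously proven configuration.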
It would have been obvious to a person with ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Haserodt and Rietschin wherein a distributed cluster environment having a cluster management agent receives a cluster specification update which includes container image manifest content to configure/modify/update various elements (i.e. cluster, VM, container, application, etc.) of the cluster environment to be in compliance with the cluster specification update and the container image manifest is converted into an OS system bootloader disk image for rebooting a node(s) in the distributed system, into teachings of AWS wherein current container image manifest is provided along with previously proven (i.e. working/known good version) of container image manifest content to support failsafe upgrades with rollback capability, because this would enable rollback to a previously known working container image manifest to undo any changes if any failure occurs. As per claim 18, this is an apparatus claim corresponding to the method claim 7. Therefore, rejected based on similar rationale. Claim(s) 10, 11, 21 and 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haserodt in view of Rietschin and further in view of Rodriguez et al. (Pub 20220171648) (hereafter Rodriguez). 
As per claim 10, rejection of claim 1 is incorporated: Rietschin teaches wherein initiating a system reboot using the operating system bootloader consumable disk image comprises: for initial deployment, booting, at the node of the cluster in the one or more clusters, using a bootstrap node image with a base operation system, the cluster management agent, and the runtime container engine; and rebooting, at the node, using the operating system bootloader consumable disk image; ([Paragraph 23], The exemplary hypervisor 211 can utilize data from a container operating system image, such as the exemplary container operating system image 212, or other like container data stored on the storage media 111 of the exemplary host computing environment 110, to start booting an operating system within the exemplary container 150… A dynamic container operating system image 212, for example, can utilize the same executable instructions for some or all of the relevant portions of the container operating system image 212 as are utilized to boot the operating system of the host computing device 110 itself. In such an instance, changes to the executable instructions that boot the operating system of the host computing device 110 can necessarily result in changes to the container operating system image 212. [Paragraph 3], Such a primary device, and primary file system, can correspond to a virtualized file system within a container environment, thereby enabling changes within the container environment to affect early stages of an operating system boot in such a container environment. [Paragraph 22], Accordingly, if, for example, an application or process executing within the container 150 changed an aspect of the container operating system 160, such a change could be stored in the container file system 171, such as in the manner detailed previously. 
However, since the container file system 171 may not be accessible until after the kernel of the operating system 160 has executed the relevant drivers 170, the operating system 160 will not be able to access such a change early in its boot process, since such a change will not be accessible until after the operating system kernel has already been loaded into memory.) wherein a mounting specification can be read from the operating system composable disk image during the reboot; and wherein temporary mounting points are mounted as a directory mapped to in-memory temporary file system, and persistent mounting points are mounted as a directory mapped to a persistent directory from a separate configuration partition. ([Paragraph 49], Turning to FIG. 5, the exemplary flow diagram 500 shown therein illustrates an exemplary series of steps by which the composite device and the composite file system can be implemented to provide the above described layering. Initially, at step 510, an access request can be received. Such an access request can be a file-based access request, a folder-based or directory-based access request, or a device-based access request. Although only select access requests are illustrated and described, other corresponding access requests can proceed among the multiple layers of the composite device and composite file system in an analogous manner to those illustrated and described. For example, device-based access requests that can be directed to a composite device include traditional device-based requests, such as requests to open the device, initialize the device, read from the device, write to the device, mount the device, unmount the device, and the like. Analogous files-based or directory-based requests can be directed to files and/or directories of the composite file system.) 
However, Haserodt and Rietschin do not explicitly disclose wherein temporary mounting points are mounted as a directory mapped to in-memory temporary file system, and persistent mounting points are mounted as a directory mapped to a persistent directory from a separate configuration partition. Rodriguez teaches wherein temporary mounting points are mounted as a directory mapped to in-memory temporary file system, and persistent mounting points are mounted as a directory mapped to a persistent directory from a separate configuration partition. ([Paragraph 35], Fundamentally, a Linux operating system is made up of the Linux kernel, an initial RAM disk, and a filesystem on the hard drive. To run processes on the Linux Kernel, the kernel needs a way to “init” the system, or bootstrap the user space and manage user processes. Until recently, Unix System V (SysV) (originally released in 1983 by AT&T) was commonly used to handle the “init” or bootstrapping of the Linux operating system. Within the past few years, however, most Linux distributions have transitioned to “SystemD” to perform the “init” process. [Paragraph 36], When a Linux operating system boots, the bootloader loads the Linux Kernel (vmlinz) and the initial RAM disk (initrd) into memory. After the kernel initializes, initrd provides the bare minimum drivers to “init” the system, which provides access to the larger hard drive on the system. Once the “root” partition is mounted, the “init” process pivots to the hard drive filesystem and executes SystemD to begin bootstrapping all the startup processes. The files on the filesystem are software libraries and binaries to support the running processes and user utilities and binaries to manage the system. [Paragraph 39], FIG. 1 illustrates an example embodiment of a container-first architecture 100. 
In the illustrated embodiment, the container-first architecture 100 includes a compute platform 102 (e.g., an Intel Architecture (IA) compute platform), a system BIOS 104, an operating system (OS) kernel 106 (e.g., a Linux kernel from any distribution), and an immutable initial RAM disk (initrd) 108 with a container runtime service 112 for bootstrapping the OS 106. In some embodiments, the Linux kernel 106 and the immutable initial RAM disk 108 can be packaged with a very small storage footprint (e.g., roughly 250 MB).) Rodriguez also teaches wherein initiating a system reboot using the operating system bootloader consumable disk image comprises: for initial deployment, booting, at the node, using a bootstrap node image with a base operation system, the cluster management agent, and the runtime container engine; and rebooting, at the node, using the operating system bootloader consumable disk image; persistent mounting points; ([Paragraph 81], persistent memory can be used to store the OS image and/or the initial RAM disk (initrd) in an immutable state, thus removing disks from being an attack surface before they are securely on-boarded; [Paragraph 84-87], a bootable partition with an operating system captured inside a file image or a raw partition on a physical drive; a hypervisor to launch a VM from the bootable partition (e.g., using the QEMU binary to launch the virtual machine using Kernel-based Virtual Machine (KVM) hypervised acceleration on the Linux kernel); an initialization script to dynamically configure and launch the VM based on detected hardware characteristics (e.g., detect hardware characteristics and configure any environment variables or parameters (e.g., memory size, vCPU count, and so forth) in order to execute the QEMU binary with the appropriate switches). 
[Paragraph 42], After the storage drive 130 is mounted, the integrity service 110 verifies the integrity of containers, files, and optionally partitions based on a “hash” manifest that was generated during manufacturing and system updates. Once integrity is verified with success, the integrity service 110 executes the container runtime service 112 (e.g., ContainerD) to bootstrap the system.) It would have been obvious to a person with ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Haserodt and Rietschin wherein a distributed cluster environment having a cluster management agent receives a cluster specification update which includes container image manifest content to configure/modify/update various elements (i.e. cluster, VM, container, application, etc.) of the cluster environment to be in compliance with the cluster specification update and the container image manifest is converted into an OS system bootloader disk image for rebooting a node(s) in the distributed system, into teachings of Rodriguez wherein temporary mounting points are mounted as a directory mapped to in-memory temporary file system (i.e. RAM disk), because this would enhance the teachings of Haserodt and Rietschin: creating a RAM disk for the initial boot/reboot that comprises only the bare minimum files necessary to boot/reboot allows the composable disk image to be packaged with a very small storage footprint and secures the disks from being an attack surface before they are securely on-boarded. 
[Rodriguez paragraph 39, 81] As per claim 11, rejection of claim 1 is incorporated: Rietschin teaches wherein initiating a system reboot using the operating system bootloader consumable disk image further comprises: for an upgrade, rebooting, at the node of the cluster in the one or more clusters, using the operating system bootloader consumable disk image; wherein a mounting specification can be read from the operating system composable disk image during the reboot; and wherein temporary mounting points are mounted as a directory mapped to in-memory temporary file system, and persistent mounting points are mounted as a directory mapped to a persistent directory from a separate configuration partition. ([Paragraph 23], The exemplary hypervisor 211 can utilize data from a container operating system image, such as the exemplary container operating system image 212, or other like container data stored on the storage media 111 of the exemplary host computing environment 110, to start booting an operating system within the exemplary container 150… A dynamic container operating system image 212, for example, can utilize the same executable instructions for some or all of the relevant portions of the container operating system image 212 as are utilized to boot the operating system of the host computing device 110 itself. In such an instance, changes to the executable instructions that boot the operating system of the host computing device 110 can necessarily result in changes to the container operating system image 212. [Paragraph 3], Such a primary device, and primary file system, can correspond to a virtualized file system within a container environment, thereby enabling changes within the container environment to affect early stages of an operating system boot in such a container environment. 
[Paragraph 22], Accordingly, if, for example, an application or process executing within the container 150 changed an aspect of the container operating system 160, such a change could be stored in the container file system 171, such as in the manner detailed previously. However, since the container file system 171 may not be accessible until after the kernel of the operating system 160 has executed the relevant drivers 170, the operating system 160 will not be able to access such a change early in its boot process, since such a change will not be accessible until after the operating system kernel has already been loaded into memory. [Paragraph 49], Turning to FIG. 5, the exemplary flow diagram 500 shown therein illustrates an exemplary series of steps by which the composite device and the composite file system can be implemented to provide the above described layering. Initially, at step 510, an access request can be received. Such an access request can be a file-based access request, a folder-based or directory-based access request, or a device-based access request. Although only select access requests are illustrated and described, other corresponding access requests can proceed among the multiple layers of the composite device and composite file system in an analogous manner to those illustrated and described. For example, device-based access requests that can be directed to a composite device include traditional device-based requests, such as requests to open the device, initialize the device, read from the device, write to the device, mount the device, unmount the device, and the like. Analogous files-based or directory-based requests can be directed to files and/or directories of the composite file system.) wherein temporary mounting points are mounted as a directory mapped to in-memory temporary file system, and persistent mounting points are mounted as a directory mapped to a persistent directory from a separate configuration partition. 
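The two mount classes recited in claim 11 (temporary points mapped to an in-memory temporary file system, persistent points bind-mounted from a separate configuration partition) can be sketched as mount-command generation. The mounting-specification layout below is an assumed format for illustration, not drawn from the record:

```python
def mount_commands(mount_spec):
    """Sketch of claim 11's two mount classes: temporary mounting points
    become tmpfs (in-memory) mounts, while persistent mounting points bind
    a directory from a separate configuration partition. The list-of-dicts
    spec layout is an assumption for illustration."""
    cmds = []
    for entry in mount_spec:
        if entry["type"] == "temporary":
            # directory mapped to an in-memory temporary file system
            cmds.append(["mount", "-t", "tmpfs", "tmpfs", entry["target"]])
        else:
            # directory bind-mounted from the persistent configuration partition
            cmds.append(["mount", "--bind", entry["source"], entry["target"]])
    return cmds
```

A reboot flow reading such a specification from the composable disk image would emit one mount per entry, e.g. a temporary `/run` alongside a persistent `/etc` sourced from a config partition.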
However, Haserodt and Rietschin do not explicitly disclose wherein temporary mounting points are mounted as a directory mapped to in-memory temporary file system. Rodriguez teaches wherein temporary mounting points are mounted as a directory mapped to in-memory temporary file system. ([Paragraph 35], Fundamentally, a Linux operating system is made up of the Linux kernel, an initial RAM disk, and a filesystem on the hard drive. To run processes on the Linux Kernel, the kernel needs a way to “init” the system, or bootstrap the user space and manage user processes. Until recently, Unix System V (SysV) (originally released in 1983 by AT&T) was commonly used to handle the “init” or bootstrapping of the Linux operating system. Within the past few years, however, most Linux distributions have transitioned to “SystemD” to perform the “init” process. [Paragraph 36], When a Linux operating system boots, the bootloader loads the Linux Kernel (vmlinz) and the initial RAM disk (initrd) into memory. After the kernel initializes, initrd provides the bare minimum drivers to “init” the system, which provides access to the larger hard drive on the system. Once the “root” partition is mounted, the “init” process pivots to the hard drive filesystem and executes SystemD to begin bootstrapping all the startup processes. The files on the filesystem are software libraries and binaries to support the running processes and user utilities and binaries to manage the system. [Paragraph 39], FIG. 1 illustrates an example embodiment of a container-first architecture 100. In the illustrated embodiment, the container-first architecture 100 includes a compute platform 102 (e.g., an Intel Architecture (IA) compute platform), a system BIOS 104, an operating system (OS) kernel 106 (e.g., a Linux kernel from any distribution), and an immutable initial RAM disk (initrd) 108 with a container runtime service 112 for bootstrapping the OS 106. 
In some embodiments, the Linux kernel 106 and the immutable initial RAM disk 108 can be packaged with a very small storage footprint (e.g., roughly 250 MB).) Rodriguez also teaches wherein initiating a system reboot using the operating system bootloader consumable disk image further comprises: for an upgrade, rebooting, at the node, using the operating system bootloader consumable disk image; wherein a mounting specification can be read from the operating system composable disk image during the reboot; and persistent mounting points. ([Paragraph 81], persistent memory can be used to store the OS image and/or the initial RAM disk (initrd) in an immutable state, thus removing disks from being an attack surface before they are securely on-boarded; [Paragraphs 84-87], a bootable partition with an operating system captured inside a file image or a raw partition on a physical drive; a hypervisor to launch a VM from the bootable partition (e.g., using the QEMU binary to launch the virtual machine using Kernel-based Virtual Machine (KVM) hypervisor acceleration on the Linux kernel); an initialization script to dynamically configure and launch the VM based on detected hardware characteristics (e.g., detect hardware characteristics and configure any environment variables or parameters (e.g., memory size, vCPU count, and so forth) in order to execute the QEMU binary with the appropriate switches). [Paragraph 42], After the storage drive 130 is mounted, the integrity service 110 verifies the integrity of containers, files, and optionally partitions based on a “hash” manifest that was generated during manufacturing and system updates. Once integrity is verified with success, the integrity service 110 executes the container runtime service 112 (e.g., ContainerD) to bootstrap the system.)
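The initialization script Rodriguez describes in [0084-0087] (detect hardware characteristics, then execute the QEMU binary with matching switches) can be sketched roughly as follows. The helper name, defaults, and flag choices are illustrative assumptions, not Rodriguez's actual script:

```python
import os

def build_qemu_command(image_path, reserve_cpus=1, mem_fraction=0.5):
    """Illustrative sketch of the dynamic-configuration idea in Rodriguez
    [0084-0087]: probe the host for CPU and memory, then construct a QEMU
    invocation with matching -smp/-m switches and KVM acceleration.
    Names and defaults here are assumptions for illustration only."""
    vcpus = max(1, (os.cpu_count() or 1) - reserve_cpus)
    # Total host RAM in MiB (Linux-specific sysconf keys).
    total_mib = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") // (1024 * 1024)
    mem_mib = max(256, int(total_mib * mem_fraction))
    return [
        "qemu-system-x86_64",
        "-enable-kvm",               # KVM hypervisor acceleration on the Linux kernel
        "-smp", str(vcpus),          # vCPU count from detected cores
        "-m", str(mem_mib),          # memory size from detected RAM
        "-drive", f"file={image_path},format=raw",  # bootable partition image
    ]
```

The returned argument list corresponds to launching the VM from the bootable partition; an actual init script would hand it to the shell or an exec call.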
It would have been obvious to a person with ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Haserodt and Rietschin, wherein a distributed cluster environment having a cluster management agent receives a cluster specification update which includes container image manifest content to configure/modify/update various elements (i.e., cluster, VM, container, application, etc.) of the cluster environment to be in compliance with the cluster specification update, and the container image manifest is converted into an OS bootloader disk image for rebooting a node(s) in the distributed system, with the teachings of Rodriguez, wherein temporary mounting points are mounted as a directory mapped to an in-memory temporary file system (i.e., RAM disk), because creating a RAM disk for initial boot/reboot that comprises the bare minimum files necessary to boot/reboot allows the composable disk image to be packaged with a very small storage footprint and secures disks from being an attack surface before they are securely on-boarded. [Rodriguez paragraph 39, 81] As per claims 21 and 22, these are apparatus claims corresponding to method claims 10 and 11. Therefore, they are rejected based on similar rationale. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM, whose telephone number is (571) 270-1313. The examiner can normally be reached 9:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bradley Teets, can be reached at 571-272-3338.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DONG U KIM/Primary Examiner, Art Unit 2197

Prosecution Timeline

Sep 19, 2022
Application Filed
May 08, 2025
Non-Final Rejection — §103, §112
Aug 14, 2025
Response Filed
Aug 25, 2025
Examiner Interview Summary
Aug 25, 2025
Applicant Interview (Telephonic)
Sep 16, 2025
Final Rejection — §103, §112
Jan 20, 2026
Request for Continued Examination
Jan 27, 2026
Response after Non-Final Action
Feb 09, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596564
PRE-LOADING SOFTWARE APPLICATIONS IN A CLOUD COMPUTING ENVIRONMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12596594
REINFORCEMENT LEARNING POLICY SERVING AND TRAINING FRAMEWORK IN PRODUCTION CLOUD SYSTEMS
2y 5m to grant Granted Apr 07, 2026
Patent 12591760
CROSS-INSTANCE INTELLIGENT RESOURCE POOLING FOR DISPARATE DATABASES IN CLOUD NATIVE ENVIRONMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12591449
Merging Streams For Call Enhancement In Virtual Desktop Infrastructure
2y 5m to grant Granted Mar 31, 2026
Patent 12586064
BLOCKCHAIN PROVISION SYSTEM AND METHOD USING NON-COMPETITIVE CONSENSUS ALGORITHM AND MICRO-CHAIN ARCHITECTURE TO ENSURE TRANSACTION PROCESSING SPEED, SCALABILITY, AND SECURITY SUITABLE FOR COMMERCIAL SERVICES
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+13.7%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
