DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the amendment filed on 12/30/2025. This Action is made FINAL.
Claims 1, 2, and 4-21 are pending and are presented for examination.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 21 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 21 recites “process the at least one virtual environment image block”… “the processed at least one virtual environment image block”. It is unclear whether “the processed at least one virtual environment image block” refers to the processing recited in claim 1 or to the processing recited in claim 21 (“process the at least one virtual environment image block by decompressing…”).
Response to Amendment
Applicant's arguments with respect to claims 1, 2, and 4-21 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4-7, 9, 11, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Featonby et al. (Pat 12190144) (hereafter Featonby) in view of Suarez et al. (Pub 20170177877) (hereafter Suarez).
As per claim 1, Featonby teaches:
A system comprising:
a cluster manager comprising:
processing circuitry to:
request, from an image registry, at least one virtual environment image block of a virtual environment image defining a virtual environment; and ([Column 2 line 20-32], The aforementioned challenges, among others, are addressed in some embodiments by the disclosed techniques for prefetching or predelivering certain container image layers (which are the building blocks that make up a given container image) that are frequently used across multiple container images stored on the cloud provider network into the caches of one or more compute instances such that when a user requests execution of a set of container images, some or all of the container image layers of the set of container images can be accessed from the cache, rather than from a remote container image repository, thereby reducing the latency associated with launching the set of container images. [Column 5 line 63-67 and Column 6 line 1-13], The container registry service 130, the container service 140, and the additional services 170 may provide a set of application programming interfaces (“APIs”) that can be used by the users of the user computing devices 102 to add, modify, or remove compute capacity to the clusters, and/or request execution of user applications (e.g., tasks) on the clusters. An API refers to an interface and/or communication protocol between a client and a server, such that if the client makes a request in a predefined format, the client should receive a response in a specific format or initiate a defined action. In the cloud provider network context, APIs provide a gateway for customers to access cloud infrastructure by allowing customers to obtain data from or cause actions within the cloud provider network, enabling the development of applications that interact with resources and services hosted in the cloud provider network. APIs can also enable different services of the cloud provider network to exchange data with one another.)
process the at least one virtual environment image block upon reception of the at least one virtual environment image block from the image registry by performing at least one of a cyclic redundancy check operation, image block decryption, or image block decompression;
a memory to store the processed at least one virtual environment image block; and ([Column 11 line 41-67 and column 12 line 1-14], FIG. 2 is a diagram illustrating multiple containers sharing one or more container image layers according to some embodiments. As shown in FIG. 2, a container execution environment 200 includes three separate containers 202-206 sharing one or more container images layers. Each of the container image layers shown in FIG. 2 may be a read-only layer (e.g., immutable portion of the container image on top of which other layers build) or a readable-writable layer (e.g., additional code that makes changes to the immutable portion of the container image). For example, layers A1, B1, B2, C1, and C2 may be read-only layers and layers D1, D2, and D3 may be readable-writable layers… FIG. 3 is a diagram illustrating example layer dependency graphs of the container images 202-206 in accordance with aspects of the present disclosure. Each layer of container images 202-206 builds on top of a previous layer, resulting in the layer dependency graphs shown in FIG. 3. As indicated by the first layer dependency graph of FIG. 3, in the container image 202, image layer D1 depends on (e.g., builds on top of) image layer C1, which depends on image layer B1, which depends on image layer A1. As indicated by the second layer dependency graph of FIG. 3, in the container image 204, image layer D2 depends on image layer C1, which depends on image layer B1, which depends on image layer A1. As indicated by the third layer dependency graph of FIG. 3, in the container image 206, image layer D3 depends on image layer C2, which depends on image layer B2, which depends on image layer A1. 
[Column 8 line 60-67], As used herein, provisioning an instance generally includes reserving resources (e.g., computational and memory resources) of an underlying physical machine for the client (e.g., from a pool of available physical machines and other resources), installing or launching required software (e.g., an operating system), and making the instance available to the client for performing tasks specified by the client. [Column 5 line 51-62], The additional services 170 include storage devices 172 through storage devices 172N, which include layers 174 through layers 174N, respectively. The layers 174-174N may include layers prefetched from the container registry service 130 in anticipation of future execution requests and/or layers fetched in response to execution requests and cached for future use. Although not shown in FIG. 1, the storage devices 172-172N may be accessed by additional compute resources or compute devices that are either within the cloud provider network 120 or outside the cloud provider network 120 (e.g., part of an on-premises environment of a user of the cloud provider network 120).)
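As an illustrative aid only (a minimal Python sketch by way of explanation, not code disclosed by either reference; the layer identifiers mirror Featonby's FIG. 3), the layer dependency graphs of container images 202-206 can be modeled as child-to-parent mappings, which makes explicit why base layers are candidates for prefetching across multiple images:

```python
# Layer dependency graphs per Featonby's FIG. 3, modeled as
# child -> parent mappings (each layer builds on top of its parent).
PARENT = {
    "D1": "C1", "D2": "C1", "D3": "C2",
    "C1": "B1", "C2": "B2",
    "B1": "A1", "B2": "A1",
    "A1": None,  # base read-only layer shared by all three images
}

def layer_chain(top):
    """List every layer an image depends on, from its top
    (readable-writable) layer down to the base layer."""
    chain = []
    layer = top
    while layer is not None:
        chain.append(layer)
        layer = PARENT[layer]
    return chain

def shared_layers(*tops):
    """Layers common to every listed image -- prefetch candidates."""
    sets = [set(layer_chain(t)) for t in tops]
    return set.intersection(*sets)
```

Here `layer_chain("D2")` walks D2 → C1 → B1 → A1, and `shared_layers("D1", "D2", "D3")` returns `{"A1"}`, consistent with FIG. 3's description that all three container images build on layer A1.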
communication circuitry to communicate the processed at least one virtual environment image block to a worker node that is to execute the virtual environment, wherein the worker node is to store the processed at least one virtual environment image block in a memory of the worker node. ([Column 1 line 62-67 and column 2 line 1-4], Many software applications can run using one or more computing “clusters,” which can include at least one cluster master (which runs control processes including scheduling, resource control, handling API requests, and deciding what runs on the cluster's nodes) and multiple nodes (which are the worker machines that run containerized applications and other workloads). These clusters can run across a number of physical machines in a distributed computing environment such as a cloud provider network. [Column 13 line 15-62], At (3), the container agent 150 sends, to the container service 140 (or a control plane component thereof), a request to register itself with the cluster as available capacity. At (4), in response to the request from the container agent 150, the container service 140 retrieves image analytics data and layer dependency data from the container registry service 130. At (5), the container service 140 determines layer to be prefetched onto the cache of the compute instance, and at (6), the container service 140 publishes the layers to the container agent 150 to be prefetched. At (7), the container agent 150 sends a request to the container registry service 130 to prefetch the layers indicated by the container service 140. 
In response to the request from the container agent 150, the container registry service 130 transmits the requested layers, which are stored in the cache 152 of the compute instance on which the container agent 150 is running… After the layers have been prefetched into the cache 152, at (9), the user computing device 102 calls another API provided by the container service 140 to request to execute a task in the cluster, where the task includes the container images that include one or more of the layers prefetched into the cache 152 (e.g., as indicated by the task definition associated with the request). At (10), the container service 140 forwards the task execution request to the container agent 150. In response, at (11), the container agent 150 accesses the prefetched layers from the cache 152. Although not illustrated in FIG. 5A, cache validation may be performed as part of (11). Such cache validation may include reading a layer from the cache 152, requesting a hash value of the layer from the container registry service 130, and comparing the hash value of the layer read from the cache 152 and the hash value received from the container registry service 130. If the hash values match or otherwise correlate, it is determined that the layer in the cache 152 has not been tampered with and is safe to be used. If the hash values do not match or otherwise correlate, a new copy of the layer is requested from the container registry 130 and used to execute the task requested at (9). In other embodiments, other known cache validation algorithms may be used. At (12), the container agent 150 retrieves any missing layer(s) from the container registry service 130. For example, some but not all of the required layers may be present in the cache 152 at the time the request is received at (9), and the remaining layers may be downloaded from the container registry service 130. 
At (13), the container agent 150 causes the container images to be executed on the compute instance using the layers accessed from the cache 152 and/or from the container registry service 130. [Column 5 line 51-62], The additional services 170 include storage devices 172 through storage devices 172N, which include layers 174 through layers 174N, respectively. The layers 174-174N may include layers prefetched from the container registry service 130 in anticipation of future execution requests and/or layers fetched in response to execution requests and cached for future use. Although not shown in FIG. 1, the storage devices 172-172N may be accessed by additional compute resources or compute devices that are either within the cloud provider network 120 or outside the cloud provider network 120 (e.g., part of an on-premises environment of a user of the cloud provider network 120).)
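The cache-validation flow quoted above (read a layer from the cache, obtain the registry's hash value, compare, and request a fresh copy on mismatch) can be sketched as follows; this is an illustrative aid only, and SHA-256 and the callable name are assumptions, since Featonby does not limit the validation to a particular hash algorithm:

```python
import hashlib

def validate_cached_layer(cached_bytes, registry_hash, fetch_fresh):
    """Mirror of steps (11)-(12): hash the cached layer, compare it
    with the hash reported by the container registry service, and
    fall back to a fresh download on mismatch. `fetch_fresh` is a
    hypothetical callable standing in for a registry request."""
    local_hash = hashlib.sha256(cached_bytes).hexdigest()
    if local_hash == registry_hash:
        return cached_bytes   # hashes correlate: layer not tampered with
    return fetch_fresh()      # mismatch: request a new copy of the layer
```

A matching cached layer is used directly; a tampered layer fails the comparison and the fresh copy from the registry is used to execute the requested task instead.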
Although Featonby teaches managing an image registry and associated security measures, including generating cryptographic hash value(s) of image layers ([Column 4 line 6-59] [Column 5 line 63-67 and column 6 line 1-13] [Column 13 line 21-62] [Column 24 line 35-43]), Featonby does not explicitly disclose performing at least one of a cyclic redundancy check operation, image block decryption, or image block decompression.
Suarez teaches performing at least one of a cyclic redundancy check operation, image block decryption, or image block decompression. ([Paragraph 26], The customer 166 may upload the container image 152 to a container registry 102 through a container registry front-end service 114. From the container registry 102, the container image 152 may be served to the container instance 104 through the container registry front-end service 114 to be launched. In some examples, a “container image” may refer to metadata and one or more computer files corresponding to contents and/or structure of one or more software applications configured to execute in a software container. In some cases, the container image 152 may comprise “layers” that correspond to steps in the build process of the container image 152. [Paragraph 37], A service provided by a computing resource service provider may be one of one or more service configured to provide access to resources of a computer system including data processing, data storage, applications, interfaces, permissions, security policies, encryption, and/or other such services. A container service may be provided as a service to users of a computing resource service provider by, for example, providing an interface to the container instance 204. [Paragraph 65], In some implementations, however, a decryption key for the container images is shared with the scanning mechanism 554. In these embodiments, the scanning mechanism 554 is configured to use the shared decryption key to decrypt the container images in order to scan for the reference criteria 556. 
[Paragraph 69], When the customer requests to launch the container image in a container instance (such as through another application programming interface), the servers of the system of the present disclosure may control the decryption and launching of the container image in the container instance such that, once uploaded, the container image never leaves the environment of the computing resource service provider in unencrypted form, thereby preventing unauthorized access and/or duplication of the container image.)
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Featonby, wherein a cluster manager requests an image block (i.e., image layer(s)) from an image registry and the retrieved image block(s) are transmitted, processed, stored, and executed at worker node(s) utilizing various well-known communication protocols (e.g., PCIe), with the teachings of Suarez, wherein an encrypted virtual environment image block is decrypted, utilizing a decryption key, prior to launching and in order to scan for security vulnerabilities. This combination would enhance the teachings of Featonby because encrypting and decrypting image(s)/layer(s) provides added security, allowing only entity(ies) possessing the decryption key to scan for vulnerabilities and/or launch the container image(s). [Suarez paragraphs 37, 65, 69]
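The rationale above (only an entity holding the decryption key can usefully scan or launch the encrypted image) can be illustrated with a toy symmetric scheme; this sketch is an illustrative aid only, and the XOR/SHA-256 keystream is a stand-in chosen to keep the sketch self-contained, not the cipher of either reference:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy counter-mode keystream derived from the key (illustrative
    # only -- not a real cipher and not Suarez's actual scheme).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Symmetric: the same operation encrypts and decrypts.
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def scan_or_launch(encrypted_layer: bytes, key: bytes,
                   expected_magic: bytes) -> bool:
    # Only a holder of the decryption key recovers a usable layer;
    # a magic prefix stands in for "valid, launchable image block".
    return xor_crypt(key, encrypted_layer).startswith(expected_magic)
```

Decrypting with the shared key recovers the layer and permits scanning or launching; an entity without the key recovers only unusable bytes, so the image never leaves the provider's environment in unencrypted form.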
As per claim 2, the rejection of claim 1 is incorporated:
Featonby teaches wherein the communication circuitry is to communicate the processed at least one virtual environment image block to a second worker node that is to execute the virtual environment, wherein the second worker node is to store the processed at least one virtual environment image block in a memory of the second worker node. ([Column 1 line 62-67 and column 2 line 1-4], Many software applications can run using one or more computing “clusters,” which can include at least one cluster master (which runs control processes including scheduling, resource control, handling API requests, and deciding what runs on the cluster's nodes) and multiple nodes (which are the worker machines that run containerized applications and other workloads). These clusters can run across a number of physical machines in a distributed computing environment such as a cloud provider network. [Column 8 line 20-41], The instances 148 and 158 may include one or more of physical machines, virtual machines, containers, nodes, or other forms of virtual or physical compute units that are configured to execute one or more applications, or any combination thereof. [Column 11 line 41-52], FIG. 2 is a diagram illustrating multiple containers sharing one or more container image layers according to some embodiments. As shown in FIG. 2, a container execution environment 200 includes three separate containers 202-206 sharing one or more container images layers. Each of the container image layers shown in FIG. 2 may be a read-only layer (e.g., immutable portion of the container image on top of which other layers build) or a readable-writable layer (e.g., additional code that makes changes to the immutable portion of the container image). For example, layers A1, B1, B2, C1, and C2 may be read-only layers and layers D1, D2, and D3 may be readable-writable layers. 
[Column 13 line 15-62], At (3), the container agent 150 sends, to the container service 140 (or a control plane component thereof), a request to register itself with the cluster as available capacity. At (4), in response to the request from the container agent 150, the container service 140 retrieves image analytics data and layer dependency data from the container registry service 130. At (5), the container service 140 determines layer to be prefetched onto the cache of the compute instance, and at (6), the container service 140 publishes the layers to the container agent 150 to be prefetched. At (7), the container agent 150 sends a request to the container registry service 130 to prefetch the layers indicated by the container service 140. In response to the request from the container agent 150, the container registry service 130 transmits the requested layers, which are stored in the cache 152 of the compute instance on which the container agent 150 is running… After the layers have been prefetched into the cache 152, at (9), the user computing device 102 calls another API provided by the container service 140 to request to execute a task in the cluster, where the task includes the container images that include one or more of the layers prefetched into the cache 152 (e.g., as indicated by the task definition associated with the request). At (10), the container service 140 forwards the task execution request to the container agent 150. In response, at (11), the container agent 150 accesses the prefetched layers from the cache 152. Although not illustrated in FIG. 5A, cache validation may be performed as part of (11). Such cache validation may include reading a layer from the cache 152, requesting a hash value of the layer from the container registry service 130, and comparing the hash value of the layer read from the cache 152 and the hash value received from the container registry service 130. 
If the hash values match or otherwise correlate, it is determined that the layer in the cache 152 has not been tampered with and is safe to be used. If the hash values do not match or otherwise correlate, a new copy of the layer is requested from the container registry 130 and used to execute the task requested at (9). In other embodiments, other known cache validation algorithms may be used. At (12), the container agent 150 retrieves any missing layer(s) from the container registry service 130. For example, some but not all of the required layers may be present in the cache 152 at the time the request is received at (9), and the remaining layers may be downloaded from the container registry service 130. At (13), the container agent 150 causes the container images to be executed on the compute instance using the layers accessed from the cache 152 and/or from the container registry service 130.)
Suarez also teaches the recited processing (i.e., a cyclic redundancy check operation, image block decryption, or image block decompression). ([Paragraph 26], The customer 166 may upload the container image 152 to a container registry 102 through a container registry front-end service 114. From the container registry 102, the container image 152 may be served to the container instance 104 through the container registry front-end service 114 to be launched. In some examples, a “container image” may refer to metadata and one or more computer files corresponding to contents and/or structure of one or more software applications configured to execute in a software container. In some cases, the container image 152 may comprise “layers” that correspond to steps in the build process of the container image 152. [Paragraph 37], A service provided by a computing resource service provider may be one of one or more service configured to provide access to resources of a computer system including data processing, data storage, applications, interfaces, permissions, security policies, encryption, and/or other such services. A container service may be provided as a service to users of a computing resource service provider by, for example, providing an interface to the container instance 204. [Paragraph 65], In some implementations, however, a decryption key for the container images is shared with the scanning mechanism 554. In these embodiments, the scanning mechanism 554 is configured to use the shared decryption key to decrypt the container images in order to scan for the reference criteria 556.
[Paragraph 69], When the customer requests to launch the container image in a container instance (such as through another application programming interface), the servers of the system of the present disclosure may control the decryption and launching of the container image in the container instance such that, once uploaded, the container image never leaves the environment of the computing resource service provider in unencrypted form, thereby preventing unauthorized access and/or duplication of the container image.)
As per claim 4, the rejection of claim 1 is incorporated:
Featonby teaches wherein the processing circuitry is to request the at least one virtual environment image block from the image registry based on a request for a microservice from the worker node, wherein the microservice is associated with the at least one virtual environment image block. ([Column 13 line 15-30],
At (3), the container agent 150 sends, to the container service 140 (or a control plane component thereof), a request to register itself with the cluster as available capacity. At (4), in response to the request from the container agent 150, the container service 140 retrieves image analytics data and layer dependency data from the container registry service 130. At (5), the container service 140 determines layer to be prefetched onto the cache of the compute instance, and at (6), the container service 140 publishes the layers to the container agent 150 to be prefetched. At (7), the container agent 150 sends a request to the container registry service 130 to prefetch the layers indicated by the container service 140. In response to the request from the container agent 150, the container registry service 130 transmits the requested layers, which are stored in the cache 152 of the compute instance on which the container agent 150 is running.)
As per claim 5, the rejection of claim 1 is incorporated:
Featonby teaches wherein the communication circuitry is to communicate an identification of the requested at least one virtual environment image block to a second cluster manager to prompt the second cluster manager to prefetch the at least one virtual environment image block from the image registry. ([Column 11 line 10-39], The cloud provider network 120 may implement various computing resources or services, which may include a virtual compute service (referred to in various implementations as an elastic compute service, a virtual machines service, a computing cloud service, a compute engine, or a cloud compute service), a container orchestration and management service (referred to in various implementations as a container service, cloud container service, container engine, or container cloud service), a Kubernetes-based container orchestration and management service (referred to in various implementations as a container service for Kubernetes, Azure Kubernetes service, IBM cloud Kubernetes service, Kubernetes engine, or container engine for Kubernetes)… [Column 13 line 15-30], At (3), the container agent 150 sends, to the container service 140 (or a control plane component thereof), a request to register itself with the cluster as available capacity. At (4), in response to the request from the container agent 150, the container service 140 retrieves image analytics data and layer dependency data from the container registry service 130. At (5), the container service 140 determines layer to be prefetched onto the cache of the compute instance, and at (6), the container service 140 publishes the layers to the container agent 150 to be prefetched. At (7), the container agent 150 sends a request to the container registry service 130 to prefetch the layers indicated by the container service 140. 
In response to the request from the container agent 150, the container registry service 130 transmits the requested layers, which are stored in the cache 152 of the compute instance on which the container agent 150 is running. [Column 17 line 11-25], At block 704, the container registry service 130 inspects the manifest associated with the container image. For example, the manifest may specify details about the container image and the runtime environment in which the container image is to be executed including, but not limited to, the image ID, tag, and/or digest that can be used to identify the container image, image path, image version, author, architecture, operating system, size, network host/domain/user names exposed network ports, expected resource allocations (CPU, memory, disk, network), layer identifiers, layer hash values, and any other parameters specified by the user who uploaded the container image onto the container registry service 130 at the time of uploading the container image (or a public repository within or external to the cloud network provider 120). [Column 20 line 15-26], At block 1006, the container service 140 identifies a compute instance to be used to execute the task, based on the identification of the layers. For example, the container service 140 may determine which one of the available instances (e.g., in the cluster that belongs to the user submitting the request at block 1002, or in the pool of unassigned instances) includes some or all of the layers needed to execute the task, and the container service 140 may select the compute instance that would result in the lowest task launch time (e.g., based on the layers prefetched onto the cache and the sizes of the layers to be downloaded onto the cache). 
[Column 5 line 62-67 and column 6 line 1-13], The container registry service 130, the container service 140, and the additional services 170 may provide a set of application programming interfaces (“APIs”) that can be used by the users of the user computing devices 102 to add, modify, or remove compute capacity to the clusters, and/or request execution of user applications (e.g., tasks) on the clusters. An API refers to an interface and/or communication protocol between a client and a server, such that if the client makes a request in a predefined format, the client should receive a response in a specific format or initiate a defined action. In the cloud provider network context, APIs provide a gateway for customers to access cloud infrastructure by allowing customers to obtain data from or cause actions within the cloud provider network, enabling the development of applications that interact with resources and services hosted in the cloud provider network. APIs can also enable different services of the cloud provider network to exchange data with one another.)
As per claim 6, the rejection of claim 1 is incorporated:
Featonby teaches wherein the communication circuitry is to communicate the processed at least one virtual environment image block to a second cluster manager for provision to a second worker node that is to execute the virtual environment. ([Column 3 line 40-63], A cloud provider network (sometimes referred to as a cloud provider system or simply a “cloud”) refers to a large pool of network-accessible computing resources (such as compute, storage, and networking resources, applications, and services), which may be virtualized (e.g., virtual machines) or bare-metal (e.g., bare-metal instances or physical machines). The cloud can provide convenient, on-demand network access to a shared pool of configurable computing resources that can be programmatically provisioned and released in response to customer commands. These resources can be dynamically provisioned and reconfigured to adjust to variable load, which provides the “elasticity” of the cloud provider network 120. Cloud computing can thus be considered as both the applications delivered as services over a publicly accessible network (e.g., the Internet, a cellular communication network) and/or the hardware and software in cloud provider data centers that provide those services. It will be appreciated that the disclosed techniques for prefetching and managing container image layers may be implemented in non-elastic computing environments as well. 
[Column 11 line 10-39], The cloud provider network 120 may implement various computing resources or services, which may include a virtual compute service (referred to in various implementations as an elastic compute service, a virtual machines service, a computing cloud service, a compute engine, or a cloud compute service), a container orchestration and management service (referred to in various implementations as a container service, cloud container service, container engine, or container cloud service), a Kubernetes-based container orchestration and management service (referred to in various implementations as a container service for Kubernetes, Azure Kubernetes service, IBM cloud Kubernetes service, Kubernetes engine, or container engine for Kubernetes), data processing service(s) (e.g., map reduce, data flow, and/or other large scale data processing techniques), data storage services (e.g., object storage services, block-based storage services, or data warehouse storage services) and/or any other type of network based services (which may include various other types of storage, processing, analysis, communication, event handling, visualization, and security services not illustrated). The resources required to support the operations of such services (e.g., compute and storage resources) may be provisioned in an account associated with the cloud provider network 120, in contrast to resources requested by users of the cloud provider network 120, which may be provisioned in user accounts. The disclosed techniques for prefetching and managing container image layers can be implemented as part of a virtual compute service, container service, or Kubernetes-based container service in some embodiments.)
As per claim 7, rejection of claim 1 is incorporated:
Suarez teaches wherein the communication circuitry is to encrypt the processed at least one virtual environment image block according to a communication protocol prior to communicating the processed at least one virtual environment image block to the worker node. ([Paragraph 37], A service provided by a computing resource service provider may be one of one or more service configured to provide access to resources of a computer system including data processing, data storage, applications, interfaces, permissions, security policies, encryption, and/or other such services. A container service may be provided as a service to users of a computing resource service provider by, for example, providing an interface to the container instance 204. [Paragraph 38], In some embodiments, the services provided by a computing resource service provider include one or more interfaces that enable the customer to submit requests via, for example, appropriately-configured application programming interface calls to the various services. In addition, each of the services may include one or more service interfaces that enable the services to access each other (e.g., to enable a virtual computer system of the virtual computer system service to store data in or retrieve data from an on-demand data storage service and/or access one or more block-level data storage devices provided by a block-lever data storage service). Each of the service interfaces may also provide secured and/or protected access to each other via encryption keys and/or other such secured and/or protected access methods, thereby enabling secure and/or protected access between them. Collections of services operating in concert as a distributed computer system may have a single front-end interface and/or multiple interfaces between the elements of the distributed computer system. [Paragraph 68], FIG. 5 further depicts a third scenario. 
In the third scenario, a container image 552C is stored in the repository in encrypted form. However, if the container image 552C is decrypted (such as by an entity authorized by the customer to extract and launch the container image or by providing the scanning mechanism 554 with a decryption key 594 for decrypting the container image, as described above), the scanning mechanism 554 would be able to scan the unencrypted file structure as shown in the third scenario. [Paragraph 86], The credentials or proof of credentials 978 may be exchanged for the security token 974. The security token 974 may operate as a request token (e.g., may be used for a certain number of requests and/or until such time as the security token 974 expires), similar to a session-based token. The security token 974 may include the credentials or proof of credentials 978 in encrypted form. In some implementations, the security token 974 may include additional information, such as an expiration time, in encrypted form. [Paragraph 122], However, in 1514, if the authentication service indicates that the credential information does indicate that the entity should be allowed access to the repository, the system performing the process 1500 may proceed to 1514, whereupon an authorization token encoding or otherwise indicating that the requesting entity has permission to access the specified repository, may be generated. 
The authorization token may be a string of characters generated by encrypting, such that the token may be decrypted by the key held by a container registry proxy or container registry front-end service, credentials and/or proof of credentials (e.g., a cryptographic hash of credentials) of an entity authorized to make the request and/or a digital signature usable at least in part at least for certain amount of time (e.g., the token may have been generated at least in part using time-based parameters such that the token has an effective expiration date, after which the token is no longer considered valid) for validating access to the repository.)
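The expiring, encrypted authorization token described in the cited Suarez passages can be illustrated with a brief sketch; the key, token layout, and helper names below are hypothetical and are not drawn from the reference:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = b"registry-front-end-key"  # hypothetical key held by the registry front-end


def issue_token(credential_hash: str, ttl_seconds: int = 300) -> str:
    """Bind a credential hash to an expiration time and sign the payload."""
    payload = json.dumps({"cred": credential_hash,
                          "exp": int(time.time()) + ttl_seconds}).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig


def validate_token(token: str) -> bool:
    """Reject tokens with a bad signature or a past expiration time."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded.encode())
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    return json.loads(payload)["exp"] >= int(time.time())  # still within validity window
```

As in the cited passage, the token is only considered valid if it both decodes under the key held by the front-end service and has not passed its effective expiration date.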
As per claim 9, rejection of claim 1 is incorporated:
Suarez teaches wherein processing the at least one virtual environment image block comprises decrypting the at least one virtual environment image block. ([Paragraph 37], A service provided by a computing resource service provider may be one of one or more service configured to provide access to resources of a computer system including data processing, data storage, applications, interfaces, permissions, security policies, encryption, and/or other such services. A container service may be provided as a service to users of a computing resource service provider by, for example, providing an interface to the container instance 204. [Paragraph 38], In some embodiments, the services provided by a computing resource service provider include one or more interfaces that enable the customer to submit requests via, for example, appropriately-configured application programming interface calls to the various services. In addition, each of the services may include one or more service interfaces that enable the services to access each other (e.g., to enable a virtual computer system of the virtual computer system service to store data in or retrieve data from an on-demand data storage service and/or access one or more block-level data storage devices provided by a block-lever data storage service). Each of the service interfaces may also provide secured and/or protected access to each other via encryption keys and/or other such secured and/or protected access methods, thereby enabling secure and/or protected access between them. Collections of services operating in concert as a distributed computer system may have a single front-end interface and/or multiple interfaces between the elements of the distributed computer system. [Paragraph 68], FIG. 5 further depicts a third scenario. In the third scenario, a container image 552C is stored in the repository in encrypted form. 
However, if the container image 552C is decrypted (such as by an entity authorized by the customer to extract and launch the container image or by providing the scanning mechanism 554 with a decryption key 594 for decrypting the container image, as described above), the scanning mechanism 554 would be able to scan the unencrypted file structure as shown in the third scenario. [Paragraph 86], The credentials or proof of credentials 978 may be exchanged for the security token 974. The security token 974 may operate as a request token (e.g., may be used for a certain number of requests and/or until such time as the security token 974 expires), similar to a session-based token. The security token 974 may include the credentials or proof of credentials 978 in encrypted form. In some implementations, the security token 974 may include additional information, such as an expiration time, in encrypted form. [Paragraph 122], However, in 1514, if the authentication service indicates that the credential information does indicate that the entity should be allowed access to the repository, the system performing the process 1500 may proceed to 1514, whereupon an authorization token encoding or otherwise indicating that the requesting entity has permission to access the specified repository, may be generated. The authorization token may be a string of characters generated by encrypting, such that the token may be decrypted by the key held by a container registry proxy or container registry front-end service, credentials and/or proof of credentials (e.g., a cryptographic hash of credentials) of an entity authorized to make the request and/or a digital signature usable at least in part at least for certain amount of time (e.g., the token may have been generated at least in part using time-based parameters such that the token has an effective expiration date, after which the token is no longer considered valid) for validating access to the repository.)
As per claim 11, rejection of claim 1 is incorporated:
Featonby teaches wherein processing the at least one virtual environment image block comprises performing cyclic redundancy check operations on the at least one virtual environment image block. ([Column 13 line 15-62], At (3), the container agent 150 sends, to the container service 140 (or a control plane component thereof), a request to register itself with the cluster as available capacity. At (4), in response to the request from the container agent 150, the container service 140 retrieves image analytics data and layer dependency data from the container registry service 130. At (5), the container service 140 determines layer to be prefetched onto the cache of the compute instance, and at (6), the container service 140 publishes the layers to the container agent 150 to be prefetched. At (7), the container agent 150 sends a request to the container registry service 130 to prefetch the layers indicated by the container service 140. In response to the request from the container agent 150, the container registry service 130 transmits the requested layers, which are stored in the cache 152 of the compute instance on which the container agent 150 is running… After the layers have been prefetched into the cache 152, at (9), the user computing device 102 calls another API provided by the container service 140 to request to execute a task in the cluster, where the task includes the container images that include one or more of the layers prefetched into the cache 152 (e.g., as indicated by the task definition associated with the request). At (10), the container service 140 forwards the task execution request to the container agent 150. In response, at (11), the container agent 150 accesses the prefetched layers from the cache 152. Although not illustrated in FIG. 5A, cache validation may be performed as part of (11). 
Such cache validation may include reading a layer from the cache 152, requesting a hash value of the layer from the container registry service 130, and comparing the hash value of the layer read from the cache 152 and the hash value received from the container registry service 130. If the hash values match or otherwise correlate, it is determined that the layer in the cache 152 has not been tampered with and is safe to be used. If the hash values do not match or otherwise correlate, a new copy of the layer is requested from the container registry 130 and used to execute the task requested at (9). In other embodiments, other known cache validation algorithms may be used. At (12), the container agent 150 retrieves any missing layer(s) from the container registry service 130. For example, some but not all of the required layers may be present in the cache 152 at the time the request is received at (9), and the remaining layers may be downloaded from the container registry service 130. At (13), the container agent 150 causes the container images to be executed on the compute instance using the layers accessed from the cache 152 and/or from the container registry service 130.)
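The hash-based cache validation described in the cited Featonby passage (read the cached layer, compare its hash against the registry's hash, and refetch on mismatch) can be sketched as follows; the function and variable names are hypothetical:

```python
import hashlib


def validate_cached_layer(cached_bytes: bytes, registry_hash: str) -> bool:
    """Compare the hash of a cached layer against the hash reported by the registry."""
    return hashlib.sha256(cached_bytes).hexdigest() == registry_hash


def load_layer(cache: dict, layer_id: str, registry: dict) -> bytes:
    """Use the cached copy only if it validates; otherwise refetch from the registry."""
    registry_bytes = registry[layer_id]  # authoritative copy
    registry_hash = hashlib.sha256(registry_bytes).hexdigest()
    cached = cache.get(layer_id)
    if cached is not None and validate_cached_layer(cached, registry_hash):
        return cached  # hashes correlate: layer has not been tampered with
    cache[layer_id] = registry_bytes  # missing or tampered: request a new copy
    return registry_bytes
```

Matching hashes indicate the cached layer is safe to use; a mismatch triggers the fresh download described at step (11) of the reference.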
As per claim 17, rejection of claim 15 is incorporated:
Featonby teaches wherein the request to the image registry is sent based on a list of images to be prefetched, wherein the list is generated by an orchestration manager. ([Column 2 line 20-57], The aforementioned challenges, among others, are addressed in some embodiments by the disclosed techniques for prefetching or predelivering certain container image layers (which are the building blocks that make up a given container image) that are frequently used across multiple container images stored on the cloud provider network into the caches of one or more compute instances such that when a user requests execution of a set of container images, some or all of the container image layers of the set of container images can be accessed from the cache, rather than from a remote container image repository, thereby reducing the latency associated with launching the set of container images. More specifically, the presently disclosed technology addresses these deficiencies by analyzing the dependencies among the individual layers within the container images stored and/or executed on the cloud provider network, determining which layers are likely to be used by a user of the cloud provider network, and prefetching or predelivering such layers into the caches of the compute instances of the cloud provider network before execution of the container images including such layers is requested by the user. By doing so, the latency between the time a request to execute a set of container images is received and the time the execution of the set of container images is actually initiated can be reduced, thereby providing an improved and more efficient application execution experience to the user. [Column 3 line 6-52], In the example of FIG. 1, the container registry service 130 provides layer predelivery manager 131, repositories 132, image metadata 135, image analytics data 136, and layer dependency data 137. 
The layer predelivery manager 131 manages predelivery of the container image layers stored in the repositories 132 to the container service 140 and the additional services 170 and determines which layers should be delivered to which service/device at what time based on the image metadata 135, image analytics data 136, and/or the layer dependency data 137 and based on any requests from other services/devices to prefetch one or more of the layers. The repositories 132 store container images 134, including the bits corresponding to the layers that make up the container images 134. The image metadata 135 associated with a container image may specify details about the container image and the runtime environment in which the container image is to be executed including, but not limited to, the image ID, tag, and/or digest that can be used to identify the container image, image path, image version, author, architecture, operating system, image size, layer size, network host/domain/user names exposed network ports, expected resource allocations (CPU, memory, disk, network), layer identifiers, layer hash values, and any other parameters specified by the user who uploaded the container image onto the container registry service 130 at the time of uploading the container image (or a public repository within or external to the cloud network provider 120). 
The image analytics data 136 may indicate certain metadata about the container images 134 such as the frequency at which each of the container images 134 has been accessed from the respective one of the repositories 132, the recency of such access, dependencies between the container images (e.g., how frequently a given set of container images are loaded/executed together), availability of container images (currently or over time), availability of repositories (currently or over time), the geographic regions from which each of the container images 134 has been accessed, the services (e.g., container service 140, additional service 170, etc.) by which each of the container images 134 has been accessed, and the like. The layer dependency data 137 may indicate the dependencies (e.g., whether a layer depends on another layer, whether a layer builds on top of another layers, etc.) among the layers of a single container image (e.g., in the form of a directed graph, as shown in FIG. 3) and/or an aggregation of such dependencies across some or all of the container images stored in the repositories 132 (e.g., in the form of an aggregated directed graph, as shown in FIG. 4).)
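The analytics-driven selection of layers to prefetch described in the cited passages might be sketched, in greatly simplified form, as ranking layers by how frequently they appear across executed container images; the names below are hypothetical and this stands in for Featonby's fuller analysis of recency, dependencies, and regions:

```python
from collections import Counter


def layers_to_prefetch(task_history, capacity: int):
    """Rank layers by usage frequency across executed images and return the
    most frequently used ones, up to the available cache capacity."""
    usage = Counter()
    for image_layers in task_history:  # each entry: the layers of one executed image
        usage.update(image_layers)
    return [layer for layer, _ in usage.most_common(capacity)]
```

A shared base layer that appears in every image would rank first and be predelivered into compute-instance caches before any execution request arrives.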
As per claims 15, 16 and 18, these are non-transitory machine-readable storage medium claims corresponding to system claims 1, 2 and 4. Therefore, they are rejected based on a similar rationale.
As per claims 19 and 20, these are method claims corresponding to system claims 1 and 2. Therefore, they are rejected based on a similar rationale.
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Featonby in view of Suarez and further in view of Lee (Pub 20220137875).
As per claim 8, rejection of claim 7 is incorporated:
Although Featonby discloses utilizing the well-known PCIe protocol ([Column 9 line 42-59]), Featonby and Suarez do not explicitly disclose wherein the communication protocol is a Compute Express Link (CXL) protocol.
Lee teaches wherein the communication protocol is a Compute Express Link (CXL) protocol. ([Paragraph 50], In operation S401, the host 200 may request an image file for container creation from the data center (image file registry) 400. For example, the host 200 may issue a command to request an image file. [Paragraph 92], Each of the devices may include internal components that perform communication based on a protocol supported through the interconnect. For example, at least one protocol selected from among technologies such as a PCIe protocol, a compute express link (CXL) protocol, an XBus protocol, an NVLink protocol, an Infinity Fabric protocol, a cache coherent interconnect for accelerators (CCIX) protocol, a coherent accelerator processor interface (CAPI) protocol, and the like may be applied to the interconnect.)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Featonby and Suarez, wherein a cluster manager requests an image block (i.e., image layer(s)) from an image registry and the retrieved image block(s) are transmitted, processed, stored, encrypted/decrypted, and executed at a worker node utilizing a well-known communication protocol (i.e., PCIe), with the teachings of Lee, wherein the well-known communication protocol is the Compute Express Link (CXL) protocol. Doing so would enhance the teachings of Featonby and Suarez because leveraging a well-known open-standard protocol such as CXL, which is built on the PCIe input/output protocol, provides additional options, capabilities, and flexibility associated with that protocol, and further supports data communication between hosts and shared memories based on various types of protocols. [Lee paragraph 97] [https://en.wikipedia.org/wiki/Compute_Express_Link]
Claim(s) 10 and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Featonby in view of Suarez and further in view of Hotinger et al. (Pub 20220197689) (hereafter Hotinger).
As per claim 10, rejection of claim 1 is incorporated:
However, Featonby does not explicitly disclose wherein processing the at least one virtual environment image block comprises decompressing the at least one virtual environment image block.
Hotinger teaches wherein processing the at least one virtual environment image block comprises decompressing the at least one virtual environment image block. ([Paragraph 230], receiving 720 the image manifest, downloading 702 to the instantiation location from the container registry the layer content of all layers of the container image that are not already stored local to the instantiation location, decompressing 1022 any downloaded layer content which was downloaded in a compressed form, and creating 908 a localized union file system which spans the layers that collectively constitute the container image.)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Featonby and Suarez, wherein a cluster manager requests an image block (i.e., image layer(s)) from an image registry and the retrieved image block(s) are transmitted, processed, stored, encrypted/decrypted, and executed at a worker node utilizing a well-known communication protocol (i.e., PCIe), with the teachings of Hotinger, wherein the at least one virtual environment image block is decompressed. Doing so would enhance the teachings of Featonby and Suarez because compressing the virtual environment image block for storage and transmission allows it to be stored with a smaller footprint and transmitted to its destination faster, and then decompressed for subsequent execution/processing in decompressed form.
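Hotinger's step of decompressing only layer content that was downloaded in compressed form can be sketched as follows; the use of gzip and a magic-byte check are assumptions for illustration, not details taken from the reference:

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # leading bytes of a gzip stream


def decompress_layer(downloaded: bytes) -> bytes:
    """Decompress layer content only when it arrived in compressed form."""
    if downloaded[:2] == GZIP_MAGIC:
        return gzip.decompress(downloaded)
    return downloaded  # already uncompressed: use as-is
```

The decompressed layer content could then be assembled into the localized union file system that spans the layers of the container image, as in the cited passage.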
As per claim 21, rejection of claim 1 is incorporated:
Suarez teaches that
the communication circuitry is to protect the at least one virtual environment image block with confidentiality, integrity, and replay protection prior to communicating the processed at least one virtual environment image block to the worker node. ([Paragraph 37], A service provided by a computing resource service provider may be one of one or more service configured to provide access to resources of a computer system including data processing, data storage, applications, interfaces, permissions, security policies, encryption, and/or other such services. A container service may be provided as a service to users of a computing resource service provider by, for example, providing an interface to the container instance 204. [Paragraph 38], In some embodiments, the services provided by a computing resource service provider include one or more interfaces that enable the customer to submit requests via, for example, appropriately-configured application programming interface calls to the various services. In addition, each of the services may include one or more service interfaces that enable the services to access each other (e.g., to enable a virtual computer system of the virtual computer system service to store data in or retrieve data from an on-demand data storage service and/or access one or more block-level data storage devices provided by a block-lever data storage service). Each of the service interfaces may also provide secured and/or protected access to each other via encryption keys and/or other such secured and/or protected access methods, thereby enabling secure and/or protected access between them. Collections of services operating in concert as a distributed computer system may have a single front-end interface and/or multiple interfaces between the elements of the distributed computer system. [Paragraph 68], FIG. 5 further depicts a third scenario. In the third scenario, a container image 552C is stored in the repository in encrypted form. 
However, if the container image 552C is decrypted (such as by an entity authorized by the customer to extract and launch the container image or by providing the scanning mechanism 554 with a decryption key 594 for decrypting the container image, as described above), the scanning mechanism 554 would be able to scan the unencrypted file structure as shown in the third scenario. [Paragraph 86], The credentials or proof of credentials 978 may be exchanged for the security token 974. The security token 974 may operate as a request token (e.g., may be used for a certain number of requests and/or until such time as the security token 974 expires), similar to a session-based token. The security token 974 may include the credentials or proof of credentials 978 in encrypted form. In some implementations, the security token 974 may include additional information, such as an expiration time, in encrypted form. [Paragraph 122], However, in 1514, if the authentication service indicates that the credential information does indicate that the entity should be allowed access to the repository, the system performing the process 1500 may proceed to 1514, whereupon an authorization token encoding or otherwise indicating that the requesting entity has permission to access the specified repository, may be generated. The authorization token may be a string of characters generated by encrypting, such that the token may be decrypted by the key held by a container registry proxy or container registry front-end service, credentials and/or proof of credentials (e.g., a cryptographic hash of credentials) of an entity authorized to make the request and/or a digital signature usable at least in part at least for certain amount of time (e.g., the token may have been generated at least in part using time-based parameters such that the token has an effective expiration date, after which the token is no longer considered valid) for validating access to the repository.)
However, Featonby and Suarez do not explicitly disclose that the processing circuitry of the cluster manager is to process the at least one virtual environment image block by decompressing the at least one virtual environment image block.
Hotinger teaches the processing circuitry of the cluster manager is to process the at least one virtual environment image block by decompressing the at least one virtual image block. ([Paragraph 230], receiving 720 the image manifest, downloading 702 to the instantiation location from the container registry the layer content of all layers of the container image that are not already stored local to the instantiation location, decompressing 1022 any downloaded layer content which was downloaded in a compressed form, and creating 908 a localized union file system which spans the layers that collectively constitute the container image.)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Featonby and Suarez, wherein a cluster manager requests an image block (i.e., image layer(s)) from an image registry and the retrieved image block(s) are transmitted, processed, stored, encrypted/decrypted, and executed at a worker node utilizing a well-known communication protocol (i.e., PCIe), with the teachings of Hotinger, wherein the at least one virtual environment image block is decompressed. Doing so would enhance the teachings of Featonby and Suarez because compressing the virtual environment image block for storage and transmission allows it to be stored with a smaller footprint and transmitted to its destination faster, and then decompressed for subsequent execution/processing in decompressed form.
Claim(s) 12-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Featonby in view of Suarez and further in view of Denyer et al. (Pub 20190288915) (hereafter Denyer).
As per claim 12, rejection of claim 1 is incorporated:
However, Featonby does not explicitly disclose further comprising at least one computing system to: calculate a score for the virtual environment and a plurality of prospective scores for the virtual environment at a plurality of prospective locations, wherein a prospective score of the plurality of prospective scores is calculated for one of the plurality of prospective locations; and initiate movement of the virtual environment from the worker node to a different worker node based on a first prospective score calculated for a first prospective location of the plurality of prospective locations, wherein the first prospective score is higher than the score and higher than other prospective scores of the plurality of prospective scores.
Denyer teaches further comprising at least one computing system to: calculate a score for the virtual environment and a plurality of prospective scores for the virtual environment at a plurality of prospective locations, wherein a prospective score of the plurality of prospective scores is calculated for one of the plurality of prospective locations; and initiate movement of the virtual environment from the worker node to a different worker node based on a first prospective score calculated for a first prospective location of the plurality of prospective locations, wherein the first prospective score is higher than the score and higher than other prospective scores of the plurality of prospective scores. ([Paragraph 11], One embodiment of a computer-implemented system for discovery of computing nodes of a source infrastructure at a source location and for planning migration of the computing nodes to a target infrastructure at a target destination is provided… The migration planning API implements a criticality algorithm to the discovered data to determine a criticality parameter associated with each of the discovered computing nodes. The criticality parameter identifies a potential impact that each discovered computing node has to migration. The migration planning API is configured to automatically group the discovered computing nodes of the source infrastructure into migration pods based analysis of the discovered data. Each migration pod defines a group of discovered computing nodes that depend on one another for network communication at the source infrastructure. The migration planning API is configured to prioritize the migration pods based on the criticality parameters of the discovered computing nodes of each migration pod and to generate a plan for migrating the one or more migration pods to the target infrastructure. [Paragraph 141], The criticality parameter can be derived, in part or in whole, from qualitative and quantities analysis. 
As used herein, quantitative use is a numerical variable and refers to the how frequently the computing node Sn is/was utilized in the network. Qualitative use is a categorical variable and refers to a quality/impact/importance of the computing node Sn the network. For instance, the criticality algorithm may analyze a frequency of application connection points to identify and prioritize the criticality of the interconnections and determine how many other computing nodes Sn rely upon these connection points. The criticality algorithm can apply a weighting factor to the application for assessing criticality. Qualitative and quantities analysis may be performed for any of the characteristics, properties, operations, or capabilities of the computing nodes Sn, as described herein, such as computing node Sn performance, capacity, latency, and the like. [Paragraph 174], In one example, selection of the customization feature 610 button on the screen of FIG. 5, may trigger a weights/parameter customization screen 702, as shown in FIG. 6. The weights/parameter customization screen 702 enables a detailed examination of the factors that go into the criticality parameter values generated for each node Sn.)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Featonby and Suarez, wherein a cluster manager requests an image block (i.e., image layer(s)) from an image registry and the retrieved image block(s) are transmitted, processed, stored, encrypted/decrypted, and executed at a worker node utilizing a well-known communication protocol (i.e., PCIe), with the teachings of Denyer, wherein a score (i.e., weights/priority/criticality parameters) is calculated for the virtual environment along with a plurality of prospective scores at prospective locations. Doing so would enhance the teachings of Featonby and Suarez because analyzing various factors to obtain calculated score(s) for prospective targets for moving/migrating the virtual environment allows those scores to be leveraged to place the virtual environment in an optimal position based on various user factors.
As per claim 13, rejection of claim 12 is incorporated:
Denyer teaches wherein the score is based on characteristics of network traffic between the virtual environment and a plurality of other virtual environments as well as distance metrics between a current location of the virtual environment and locations of the plurality of other virtual environments. ([Paragraph 11], One embodiment of a computer-implemented system for discovery of computing nodes of a source infrastructure at a source location and for planning migration of the computing nodes to a target infrastructure at a target destination is provided… The migration planning API implements a criticality algorithm to the discovered data to determine a criticality parameter associated with each of the discovered computing nodes. The criticality parameter identifies a potential impact that each discovered computing node has to migration. The migration planning API is configured to automatically group the discovered computing nodes of the source infrastructure into migration pods based analysis of the discovered data. Each migration pod defines a group of discovered computing nodes that depend on one another for network communication at the source infrastructure. The migration planning API is configured to prioritize the migration pods based on the criticality parameters of the discovered computing nodes of each migration pod and to generate a plan for migrating the one or more migration pods to the target infrastructure. [Paragraph 141], The criticality parameter can be derived, in part or in whole, from qualitative and quantities analysis. As used herein, quantitative use is a numerical variable and refers to the how frequently the computing node Sn is/was utilized in the network. Qualitative use is a categorical variable and refers to a quality/impact/importance of the computing node Sn the network.
For instance, the criticality algorithm may analyze a frequency of application connection points to identify and prioritize the criticality of the interconnections and determine how many other computing nodes Sn rely upon these connection points. The criticality algorithm can apply a weighting factor to the application for assessing criticality. Qualitative and quantitative analysis may be performed for any of the characteristics, properties, operations, or capabilities of the computing nodes Sn, as described herein, such as computing node Sn performance, capacity, latency, and the like. [Paragraph 174], In one example, selection of the customization feature 610 button on the screen of FIG. 5, may trigger a weights/parameter customization screen 702, as shown in FIG. 6. The weights/parameter customization screen 702 enables a detailed examination of the factors that go into the criticality parameter values generated for each node Sn. [Paragraph 143], The migration planning API 38 can determine which computing nodes Sn are publicly available based on assessing inbound internet traffic. [Paragraph 181], For example, the migration planning API 38 may categorize criticality for pods Pn into several groups (e.g., catastrophic, critical, moderate, negligible). Then, for each pod Pn, a migration impact score can be computed based on a weighted average applied to several inputted factors, as described. The impact score is used to rank an impact of one migration pod Pn relative to the impacts of other migration pods Pn. For example, the pod Pn with the highest impact score is ranked first in priority, with the next highest impact score ranked second in priority and so forth. Each criticality category may define a range or value. The outputted migration impact score for each pod Pn is then compared to the range or value of the criticality category to determine the categorization of the pod Pn. Ranking can occur based on the resulting pod Pn categorization.
Additionally or alternatively, criticality thresholds may be utilized in conjunction with any of the techniques described herein. Prioritization is an important feature because prioritization provides insight about what systems are high priority to the enterprise based on the traffic numbers. [Paragraph 193], The migration planning API 38 may also be configured with a scenario planning module or mechanism to determine or weigh different scenarios for migration at the target infrastructure 26, as well as the consequences/results of executing each scenario. In turn, the migration planning API 38 provides a future-looking “what if” analysis for the plan based upon potential future locations (firewall rules, system sizing data, security recommendations, price forecasting).)
As per claim 14, the rejection of claim 12 is incorporated:
Denyer teaches wherein the first prospective score is based on a plurality of distance metrics between the first prospective location and other prospective locations of the plurality of prospective locations, a plurality of flow rates, and a plurality of communication frequencies. ([Paragraph 11], One embodiment of a computer-implemented system for discovery of computing nodes of a source infrastructure at a source location and for planning migration of the computing nodes to a target infrastructure at a target destination is provided… The migration planning API implements a criticality algorithm to the discovered data to determine a criticality parameter associated with each of the discovered computing nodes. The criticality parameter identifies a potential impact that each discovered computing node has to migration. The migration planning API is configured to automatically group the discovered computing nodes of the source infrastructure into migration pods based on analysis of the discovered data. Each migration pod defines a group of discovered computing nodes that depend on one another for network communication at the source infrastructure. The migration planning API is configured to prioritize the migration pods based on the criticality parameters of the discovered computing nodes of each migration pod and to generate a plan for migrating the one or more migration pods to the target infrastructure. [Paragraph 141], The criticality parameter can be derived, in part or in whole, from qualitative and quantitative analysis. As used herein, quantitative use is a numerical variable and refers to how frequently the computing node Sn is/was utilized in the network. Qualitative use is a categorical variable and refers to a quality/impact/importance of the computing node Sn in the network.
For instance, the criticality algorithm may analyze a frequency of application connection points to identify and prioritize the criticality of the interconnections and determine how many other computing nodes Sn rely upon these connection points. The criticality algorithm can apply a weighting factor to the application for assessing criticality. Qualitative and quantitative analysis may be performed for any of the characteristics, properties, operations, or capabilities of the computing nodes Sn, as described herein, such as computing node Sn performance, capacity, latency, and the like. [Paragraph 174], In one example, selection of the customization feature 610 button on the screen of FIG. 5, may trigger a weights/parameter customization screen 702, as shown in FIG. 6. The weights/parameter customization screen 702 enables a detailed examination of the factors that go into the criticality parameter values generated for each node Sn. [Paragraph 143], The migration planning API 38 can determine which computing nodes Sn are publicly available based on assessing inbound internet traffic. [Paragraph 181], For example, the migration planning API 38 may categorize criticality for pods Pn into several groups (e.g., catastrophic, critical, moderate, negligible). Then, for each pod Pn, a migration impact score can be computed based on a weighted average applied to several inputted factors, as described. The impact score is used to rank an impact of one migration pod Pn relative to the impacts of other migration pods Pn. For example, the pod Pn with the highest impact score is ranked first in priority, with the next highest impact score ranked second in priority and so forth. Each criticality category may define a range or value. The outputted migration impact score for each pod Pn is then compared to the range or value of the criticality category to determine the categorization of the pod Pn. Ranking can occur based on the resulting pod Pn categorization.
Additionally or alternatively, criticality thresholds may be utilized in conjunction with any of the techniques described herein. Prioritization is an important feature because prioritization provides insight about what systems are high priority to the enterprise based on the traffic numbers. [Paragraph 193], The migration planning API 38 may also be configured with a scenario planning module or mechanism to determine or weigh different scenarios for migration at the target infrastructure 26, as well as the consequences/results of executing each scenario. In turn, the migration planning API 38 provides a future-looking “what if” analysis for the plan based upon potential future locations (firewall rules, system sizing data, security recommendations, price forecasting). [Paragraph 145], To further minimize migration failure, the migration planning API 38 can identify DNS changes and turn down time-to-live (TTL) hop limits to lowest values prior to the migration. [Paragraph 151], In one example, the migration planning API 38 employs the criticality algorithm in conjunction with predictive analytics to predict for any computing node any one or more of: qualitative and quantitative factors, potential security risks, predicted or suggested dependency characterizations, predictive inclusion of one or more discovered computing nodes into one of the migration pods, or exclusion of one or more discovered computing nodes from one of the migration pods, predictions about a latency impact that one or more discovered computing nodes will have on planned migration (e.g., latency if the nodes are separated))
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM whose telephone number is (571)270-1313. The examiner can normally be reached 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DONG U KIM/Primary Examiner, Art Unit 2197