DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 14, 2026 has been entered.
In this Office action:
Claims 1-20 are pending.
Claims 1-20 are rejected.
Summary of Previous Office Action
In the Final Office Action mailed on November 26, 2025:
Claims 1-2, 7-10, and 15-16 were rejected under 35 U.S.C. 103 as being unpatentable over Hasti et al. (Patent No. US 11,789,660), hereinafter Hasti; in view of Sharifi Mehr (Patent No. US 11,842,224), hereinafter Sharifi; and further in view of Chatterjee et al. (Pub. No. US 2024/0069998), hereinafter Chatterjee.
Claims 3-5, 11-13 and 17-19 were rejected under 35 U.S.C. 103 as being unpatentable over Hasti et al. (Patent No. US 11,789,660), hereinafter Hasti; in view of Sharifi Mehr (Patent No. US 11,842,224), hereinafter Sharifi; further in view of Chatterjee et al. (Pub. No. US 2024/0069998), hereinafter Chatterjee; and further in view of Jiang et al. (Pub. No. US 2025/0117260), hereinafter Jiang.
Claims 6, 14 and 20 were rejected under 35 U.S.C. 103 as being unpatentable over Hasti (Patent No. US 11,789,660), hereinafter Hasti; in view of Sharifi Mehr (Patent No. US 11,842,224), hereinafter Sharifi; further in view of Chatterjee et al. (Pub. No. US 2024/0069998), hereinafter Chatterjee; further in view of Jiang et al. (Pub. No. US 2025/0117260), hereinafter Jiang; and further in view of Vishwakarma et al. (Pub. No. US 2021/0258267), hereinafter Vishwakarma.
Response to Amendment
The amendments filed on January 14, 2026 have been entered.
Claims 1, 5-6, 9, 13, 15, and 19 have been amended.
Response to Arguments
Applicant’s arguments filed on January 14, 2026 have been fully considered but they are not persuasive.
1/ The Applicant has argued that claim 1 includes the limitation of "monitoring a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource operation at the remote data center, and one or more heuristics for transmission reliability of a network infrastructure that connects the service-provider data center to the remote data center".
The combination of Hasti, Sharifi, and Chatterjee does not teach or suggest the limitation above. The Office Action acknowledges that "Hasti doesn't explicitly disclose: monitoring a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource operation" but goes on to assert that Sharifi discloses this limitation. (Office Action, page 7)
However, Sharifi does not teach or suggest "monitoring a correctness of the status ... based on ... one or more heuristics for transmission reliability of a network infrastructure that connects the service-provider data center to the remote data center", as in claim 1.
Examiner’s response:
The Examiner respectfully disagrees.
Sharifi discloses monitoring a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource (computing resource 108) operation at the remote data center, and one or more heuristics for transmission reliability of a network infrastructure that connects the service-provider data center to the remote data center (See Col. 14 lines 57-67, Col. 15 lines 1-17, and Fig. 4B; resource status service 110 (executed in the service provider network 102; See Col. 5 lines 18-27 and Fig. 1) determines whether new resource status data 122 has been received from one or more of the resources 108 (executing in different data centers located in different geographic regions; See Col. 8 lines 2-7). If at operation 420 new resource status data 122 has not been received then, at operation 422, the resource status service 110 determines whether a timeout error has occurred, i.e., whether an excessive amount of time has passed since the last new resource status data 122 … If a timeout error has not occurred the routine 400 returns to operation 420 (i.e., determines whether new resource status data 122 has been received). If a timeout error has occurred the routine 400 proceeds to operation 424 where an error is preferably reported, and/or where corrective or other action may be taken ... See Col. 2 lines 14-42; one or more of the network services may require a relatively long period of time to respond to requests for status data regarding the computing resources that they provide. If, however, the SPN (i.e., service provider network) is not able to obtain and send the requested data within that predetermined time (transmission reliability) then a time-out error may occur.
The synchronous function call mode has the advantage of providing a faster response, but has the disadvantage of holding the communication channel open while the requested data is being obtained and sent, or a time-out occurs. Holding the communication channel open possibly prevents the customer and/or the SPN from using that communication channel for another purpose, such as obtaining other data from the same or a different SPN, or handling another customer. See also Col. 10 lines 10-61 and Fig. 2).
The Examiner notes that the timeout occurs due to the service provider network not being able to obtain and send (i.e., transmit) the requested data within that predetermined time. In addition, a timeout error is reasonably interpreted to be caused by poor network reliability, which causes the connection to take too long or break entirely.
2/ The Applicant has also argued that regarding claim 6, Vishwakarma does not teach or suggest "a machine learning model ... trained to generate the predicted time using features of the computing resource operation, features of computing resources associated with performing the computing resource operation, and features of a network infrastructure that connects the service-provider data center to the remote data center".
Examiner’s response:
The Examiner respectfully disagrees.
Vishwakarma discloses in Parag. [0062] a task duration predictor 610 may refer to a computer program that may execute on the underlying hardware of the backup storage system 602. Specifically, the task duration predictor 610 may be designed and configured to predict a duration (or length of time) that may be consumed, by a background service 606, to complete a desired operation. To that extent, the task duration predictor 610 may perform any subset or all of the flowchart steps outlined in FIG. 8 ... Further, the prediction of any background service task duration may entail generating and applying a random forest regression based predictive model using sets of features (i.e., individual, measurable properties or variables significant to the performance and length of time consumed to complete a given background service task). Vishwakarma discloses in Parag. [0060] that [w]ith reference back to FIG. 3, once the estimated time of completion is performed (304), the process 300 computes an n-step ahead prediction, 306. In this step, the process uses the system information to collect historical data as a time series for each relevant resource (e.g., CPU, Memory, Disk IO and Network). For example, in the case of Data Domain, the historical data can comprise a sar report, IOstat, system performance, and the like, to gather the required information and store in a database. This provides a multivariate approach for probabilistic weighted fuzzy time series (PWFTS) that is used for forecasting compute resources. Unlike prior systems that rely on a single variable for resource prediction, embodiments use a multivariate time series based on parameters such as CPU idle percent, disk I/O, network bandwidth, and memory capacity, among others.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 7-10, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Hasti et al. (Patent No. US 11,789,660), hereinafter Hasti; in view of Sharifi Mehr (Patent No. US 11,842,224), hereinafter Sharifi; and further in view of Chatterjee et al. (Pub. No. US 2024/0069998), hereinafter Chatterjee.
Claim 1. Hasti discloses [a] computer-implemented method comprising:
receiving, at a control plane hosted on a service-provider data center, an action request to perform a computing resource operation at a remote data center (See Col. 13 lines 46-61 and Figs. 1A-1C; distributed control plane 128 may receive a command (action request) from an application hosted within a container of the container orchestration platform 102; the command may correspond to a control plane operation, such as a command to create a backup (cloud backup, see Col. 14 lines 12-19) ... See Col. 14 lines 40-44; Once the distributed control plane 128 has received the command, the distributed control plane 128 may determine whether the command targets an object owned by the first worker node 114 (computing resource) or the second worker node 116 (or a different worker node) (within distributed storage architecture, See Fig. 1A). See Col. 4 lines 5-14; The distributed storage architecture may be hosted separate from and external to the container orchestration platform. This provides the ability to tailor and configure the distributed storage architecture to manage distributed storage in an efficient manner that can be made accessible to any type of computing environment, such as the applications hosted within the container orchestration platform, applications and services hosted on servers or on-prem, applications and services hosted within various types of cloud computing environments, etc. See also Col. 3 lines 61-67 and Col. 4 lines 1-4; Applications may be deployed as containers within the container orchestration platform in a scalable and on-demand manner ... See also Col. 3 lines 41-60. Examiner’s interpretation: Hasti teaches that the distributed storage architecture is external to the container orchestration platform and it is accessed to perform operations such as cloud backup operations.
Therefore, the Examiner interprets the distributed storage architecture, taught by Hasti, to be located at a remote data center), wherein the control plane is used to manage computing resources located at the remote data center (See Col. 4 lines 31-52; The control plane logic acts as an intermediary layer that facilitates, tracks, and manages worker nodes executing control plane operations requested by the applications hosted within the containers in the container orchestration platform. See also Col. 15 lines 15-21);
creating a data object at the service-provider data center to represent a status of the computing resource operation at the remote data center (See Col. 13 lines 62-67, Col. 14 lines 1-11; In some embodiments of receiving the command from the application, a custom resource definition maintained within a distributed database hosted within the container orchestration platform 102 may be created or modified in order to define the command through the custom resource definition. See Col. 17 lines 9-46 and Fig. 4; The custom resource definition may comprise the status field 406 populated by a control plane controller with information from a response received by a worker node that implemented a control plane operation. In this way, the status field 406 may be used by the control plane controller to communicate information to the application regarding execution of the control plane operation. Similarly, the control plane controller can populate an events field 408 with state information of the volume and/or warning information relating the execution of the control plane operation. See also Fig. 3; “distributed database custom resource definition 304”);
sending an instruction, to a resource manager at the remote data center, to initiate the computing resource operation (See Col. 14 lines 60-67 and Col. 15 lines 1-21; If the first worker node 114 is the owner of the object targeted by the command, then the distributed control plane 128 may route the command to the first control plane controller 136; the first control plane controller 136 reformats the command … the first control plane controller 136 transmits the reformatted command, such as through a REST API call, to the API endpoint 150 of the first worker node 114 for the first worker node 114 to implement the control plane operation defined within the reformatted command … See Col. 10 lines 52-64; The first worker node 114 may comprise a data management system (DMS) 152 and a storage management system (SMS) 158. The data management system 152 is a client facing frontend with which clients (e.g., applications within the container orchestration platform 102) interact through the distributed control plane 128, such as where reformatted commands from the first control plane controller 136 are received at the API endpoint 150),
wherein, in response to an event associated with performance of the computing resource operation at the remote data center, the control plane receives an indication of the event and updates the data object located at the service-provider data center to represent the status of the computing resource operation indicated by the event (See Col. 15 lines 15-21; the first control plane controller 136 can track the status of performing the reformatted command by monitoring the job. See Col. 15 lines 51-67 and Col. 16 lines 1-18; a control plane controller that has transmitted a reformatted command to a worker node for implementation of a control plane operation may receive a response from the worker node. The response may comprise information relating to a current status (progress completion) of implementing the control plane operation, a result of completing the implementation of the control plane operation, warning information relating to implementing the control plane operation, state information of the object, etc. (in response to an event associated with performance of the computing resource operation at the remote data center, the control plane receives an indication of the event) … the warning information or state information (event information, See Col. 19 lines 7-11) of the object may be populated within an event field of the custom resource definition … (updates the data object located at the service-provider data center to represent the status of the computing resource operation indicated by the event));
querying the data object maintained at the service-provider data center to obtain the status of the computing resource operation represented by the data object; and providing the status of the computing resource operation represented by the data object maintained at the service-provider data center (See Col. 17; The custom resource definition 402 may comprise a status field 406 (The custom resource definition may be stored within a distributed database 304 within the container orchestration platform 102, See Col. 16 lines 27-29) (data object maintained at the service-provider) ... The status field 406 may be populated by a control plane controller with information from a response received by a worker node... In this way, the status field 406 may be used by the control plane controller to communicate information to the application regarding execution of the control plane operation (providing the status of the computing resource operation represented by the data object maintained at the service-provider). Similarly, the control plane controller can populate an events field 408 with state information of the volume and/or warning information relating the execution of the control plane operation. See Col. 15 lines 51-67 and Col. 16 lines 1-18).
Hasti doesn’t explicitly disclose: monitoring a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource operation at the remote data center, and one or more heuristics for transmission reliability of a network infrastructure that connects the service-provider data center to the remote data center; [and] receiving, at the control plane, a status request for the computing resource operation.
However, Sharifi discloses:
monitoring a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource (computing resource 108) operation at the remote data center, and one or more heuristics for transmission reliability of a network infrastructure that connects the service-provider data center to the remote data center (See Col. 14 lines 57-67, Col. 15 lines 1-17, and Fig. 4B; resource status service 110 (executed in the service provider network 102; See Col. 5 lines 18-27 and Fig. 1) determines whether new resource status data 122 has been received from one or more of the resources 108 (executing in different data centers located in different geographic regions; See Col. 8 lines 2-7). If at operation 420 new resource status data 122 has not been received then, at operation 422, the resource status service 110 determines whether a timeout error has occurred, i.e., whether an excessive amount of time has passed since the last new resource status data 122 … If a timeout error has not occurred the routine 400 returns to operation 420 (i.e., determines whether new resource status data 122 has been received). If a timeout error has occurred the routine 400 proceeds to operation 424 where an error is preferably reported, and/or where corrective or other action may be taken ... See Col. 2 lines 14-42; one or more of the network services may require a relatively long period of time to respond to requests for status data regarding the computing resources that they provide. If, however, the SPN (i.e., service provider network) is not able to obtain and send the requested data within that predetermined time (transmission reliability) then a time-out error may occur. The synchronous function call mode has the advantage of providing a faster response, but has the disadvantage of holding the communication channel open while the requested data is being obtained and sent, or a time-out occurs.
Holding the communication channel open possibly prevents the customer and/or the SPN from using that communication channel for another purpose, such as obtaining other data from the same or a different SPN, or handling another customer. See also Col. 10 lines 10-61 and Fig. 2. Examiner’s interpretation: Applicant discloses in the Specification, in Parag. [0037], that “the service broker 108 relies on heuristics to determine that a status of a computing resource operation represented by the data object 110 may no longer be current. For example, an event indicating that a computing resource operation has completed can be detected by the event manager 122 located at the remote data center 116, and the event manager 122 can send a message indicating completion of the computing resource operation to the event processor 112 located at the service-provider data center 102. However, due to high-latency or failures in the network infrastructure 114, the message may not be received, or may not be received in time, by the event processor 112 to allow the data object 110 to be updated to a status that correctly represents the computing resource operation.” Based on Parag. [0037] of the Specification, the correctness of the status is associated with a time during which to expect receiving the status of the computing resource operation, which is consistent with determining whether an excessive amount of time has passed since the last new resource status data, as taught by Sharifi. Examiner’s note: The timeout occurs due to the service provider network not being able to obtain and send the requested data within that predetermined time. In addition, a timeout error is reasonably interpreted to be caused by poor network reliability, which causes the connection to take too long or break entirely); [and]
receiving a status request for the computing resource operation (See Col. 12 lines 62-67 and Col. 13 lines 1-3; the resource status service 110 receives a request 118 for resource status data 122).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the container orchestration platform, taught by Hasti, to monitor a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource operation, and to receive a status request for the computing resource operation, as taught by Sharifi. This would be advantageous in allowing customers to purchase and utilize various types of computing resources on a permanent or as-needed basis (Sharifi, Col. 1 lines 7-46).
Hasti in view of Sharifi doesn’t explicitly disclose receiving the status request for the computing resource operation at the control plane.
However, Chatterjee discloses receiving, at the control plane, a status request for the computing resource operation (See Parag. [0075]; node health aggregator (control plane, See Parag. [0072]) may be queried to obtain aggregated node health history using an application programming interface (API) … node health aggregator may provide metrics on computing resources in a cluster based on data, event, errors, etc., received from node problem detector and stored in time series database. See Parag. [0068]; a node problem detector (NPD) is a monitoring component that monitors health of a worker node and periodically runs health checks and reports it back to node health aggregator. See also Parag. [0055]; aggregated health (e.g., attributes associated with rate of job success to failures). See also Parag. [0067]; nodes are computing resources).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the control plane, taught by Hasti in view of Sharifi, to receive a status request for the computing resource operation, as taught by Chatterjee. This would be advantageous for improving the selection and scheduling of computing systems for performing jobs (Chatterjee, Parag. [0003]).
Claim 2. Hasti in view of Sharifi and Chatterjee discloses [t]he computer-implemented method of claim 1,
Hasti discloses the computer-implemented method further comprising:
determining to update the status represented by the data object (See Col. 15 lines 15-21; the first control plane controller 136 may create and monitor a job that the first worker node 114 performs in order to implement the control plane operation based upon the reformatted command. In this way, the first control plane controller 136 can track the status of performing the reformatted command by monitoring the job. See Col. 15 lines 51-67 and Col. 16 lines 1-18; The response may comprise information relating to a current status (progress completion) of implementing the control plane operation, a result of completing the implementation of the control plane operation, warning information relating to implementing the control plane operation, state information of the object, etc … See Col. 17 lines 9-46 and Fig. 4; The custom resource definition may comprise the status field 406 populated by a control plane controller with information from a response received by a worker node that implemented a control plane operation);
obtaining a correct status of the computing resource operation from a resource manager located at the remote data center (See Col. 15 lines 51-67 and Col. 16 lines 1-18; a control plane controller that has transmitted a reformatted command to a worker node for implementation of a control plane operation may receive a response from the worker node. The response may comprise information relating to a current status (progress completion) of implementing the control plane operation, a result of completing the implementation of the control plane operation, warning information relating to implementing the control plane operation, state information of the object, etc., Examiner’s interpretation: The Examiner interprets “obtaining a correct status” as monitoring the status of the computing resource operation by performing a continuous status monitoring to obtain an up-to-date status); and
updating the data object to represent the correct status of the computing resource operation (See Col. 15 lines 51-67 and Col. 16 lines 1-18; The response may comprise information relating to a current status (progress completion) of implementing the control plane operation, a result of completing the implementation of the control plane operation, warning information relating to implementing the control plane operation, state information of the object, etc … the warning information or state information of the object may be populated within an event field of the custom resource definition).
Hasti doesn’t explicitly disclose determining to update the status represented by the data object is based on the one or more heuristics for performing the computing resource operation.
However, Sharifi discloses determining, based on the one or more heuristics for performing the computing resource operation, to update the status represented by the data object (See Col. 14 lines 57-67, Col. 15 lines 1-17, and Fig. 4B; resource status service 110 determines whether new resource status data 122 has been received from one or more of the resources 108. If at operation 420 new resource status data 122 has not been received then, at operation 422, the resource status service 110 determines whether a timeout error has occurred, i.e., whether an excessive amount of time has passed since the last new resource status data 122 (one or more heuristics)… If a timeout error has not occurred the routine 400 returns to operation 420 (i.e., determines whether new resource status data 122 has been received). See Col. 7 lines 21-25; As the resource status service 110 receives the resource status data 122 from the network services, the resource status service 110 preferably stores the resource status data in a cache 130 that is accessible to multiple instances of the resource status service 110. Examiner’s interpretation: Sharifi teaches storing (i.e., updating) the resource status data (i.e., new resource status data) in a cache upon receiving the resource status data from the network services, wherein determining if the new resource status data has been received is based on whether an excessive amount of time has passed since the last new resource status data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the container orchestration platform, taught by Hasti, to determine, based on the one or more heuristics for performing the computing resource operation, to update the status represented by the data object, as taught by Sharifi. This would be advantageous in allowing customers to purchase and utilize various types of computing resources on a permanent or as-needed basis (Sharifi, Col. 1 lines 7-46).
Claim 7. Hasti in view of Sharifi and Chatterjee discloses [t]he computer-implemented method of claim 1,
Hasti further discloses the control plane located at the service-provider data center is used to manage the computing resources (See Col. 4 lines 31-52; Control plane logic can be implemented to manage volume operations (e.g., backup and restore operations) that are performed upon the volumes stored by the worker nodes within the distributed storage of the distributed storage architecture … The control plane logic acts as an intermediary layer that facilitates, tracks, and manages worker nodes executing control plane operations requested by the applications hosted within the containers in the container orchestration platform).
Hasti in view of Sharifi doesn’t explicitly disclose wherein computing resources in the remote data center provide an Infrastructure-as-a-Service (IaaS) layer of a hybrid cloud infrastructure, and the control plane located at the service-provider data center is used to manage the computing resources providing the IaaS layer of the hybrid cloud infrastructure.
However, Chatterjee discloses wherein computing resources in the remote data center provide an Infrastructure-as-a-Service (IaaS) layer of a hybrid cloud infrastructure (See Parag. [0192]; services provided by third party network infrastructure system may include Infrastructure as a Service (IaaS) category … See Parag. [0196] various different infrastructure services may be provided by an IaaS platform in a third party network infrastructure system; infrastructure services facilitate management and control of underlying computing resources... See Parag. [0191]; third party network services may also be provided under a hybrid third party network model. See Parag. [0182]; a hybrid cloud. See also Parag. [0067]; nodes are computing resources), and the control plane located at the service-provider data center is used to manage the computing resources providing the IaaS layer of the hybrid cloud infrastructure (See Parag. [0072-0076]; node health aggregator (NHA) may be a control plane component that reacts to any unhealthy node condition flagged by NPD by marking the overall health status of the node as unhealthy; when a node is marked unhealthy by NHA, scheduler may avoid scheduling jobs onto such nodes until this node is marked healthy again … a node health aggregator may report an aggregated node health condition … a node health aggregator may aggregate a current node health based on various node conditions and if any of these conditions become true, then node health aggregator may change a health value of a node based on this changed node condition … See also Parag. [0067]; nodes are computing resources. Examiner’s interpretation: control plane is used to manage the computing resources).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the control plane, taught by Hasti in view of Sharifi, to manage the computing resources providing the IaaS layer of the hybrid cloud infrastructure, as taught by Chatterjee. One of ordinary skill in the art would have been motivated to do so to improve the selection and scheduling of computing systems for performing jobs (Chatterjee, Parag. [0003]).
Claim 8. Hasti in view of Sharifi and Chatterjee discloses [t]he computer-implemented method of claim 1,
Hasti further discloses the control plane located at the service-provider data center is used to manage the computing resources (See Col. 4 lines 31-52; Control plane logic can be implemented to manage volume operations (e.g., backup and restore operations) that are performed upon the volumes stored by the worker nodes within the distributed storage of the distributed storage architecture … The control plane logic acts as an intermediary layer that facilitates, tracks, and manages worker nodes executing control plane operations requested by the applications hosted within the containers in the container orchestration platform).
Hasti in view of Sharifi does not explicitly disclose wherein computing resources in the remote data center provide an IaaS layer of an edge computing infrastructure, or that the control plane is used to manage the computing resources providing the IaaS layer of the edge computing infrastructure.
However, Chatterjee discloses wherein computing resources in the remote data center provide an IaaS layer of an edge computing infrastructure (See Parag. [0192]; services provided by third party network infrastructure system may include Infrastructure as a Service (IaaS) category … See Parag. [0196] various different infrastructure services may be provided by an IaaS platform in a third party network infrastructure system; infrastructure services facilitate management and control of underlying computing resources... See Parag. [0279] and Fig. 23; network operator and third party services may be hosted close to UE access point of attachment to achieve an efficient service delivery through a reduced end-to-end latency and load on a transport network; for edge computing implementations, 5GC may select a UPF 2304 close to UE 2302 and execute traffic steering from UPF 2304 to DN 2306 via N6 interface. Examiner’s interpretation: The third party services (i.e., IaaS) is implemented in an edge computing infrastructure), and the control plane located at the service-provider data center is used to manage the computing resources providing the IaaS layer of the edge computing infrastructure (See Parag. [0072-0076]; node health aggregator (NHA) may be a control plane component that reacts to any unhealthy node condition flagged by NPD by marking the overall health status of the node as unhealthy; when a node is marked unhealthy by NHA, scheduler may avoid scheduling jobs onto such nodes until this node is marked healthy again … a node health aggregator may report an aggregated node health condition … a node health aggregator may aggregate a current node health based on various node conditions and if any of these conditions become true, then node health aggregator may change a health value of a node based on this changed node condition … See also Parag. [0067]; nodes are computing resources. Examiner’s interpretation: control plane is used to manage the computing resources).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the control plane, taught by Hasti in view of Sharifi, to manage the computing resources providing the IaaS layer of the edge computing infrastructure, as taught by Chatterjee. One of ordinary skill in the art would have been motivated to do so to improve the selection and scheduling of computing systems for performing jobs (Chatterjee, Parag. [0003]), as well as to achieve efficient service delivery through reduced end-to-end latency and load on a transport network using the edge computing infrastructure (Chatterjee, Parag. [0279]).
Claim 9. Hasti discloses [a] system comprising:
one or more computer readable storage media storing program instructions and one or more processors which, in response to executing the program instructions (See Col. 28 lines 37-63), are configured to:
receive, at a control plane hosted on a service-provider data center, an action request to perform a computing resource operation at a remote data center (See Col. 13 lines 46-61 and Fig. 1 A-C; distributed control plane 128 may receive a command (action request) from an application hosted within a container of the container orchestration platform 102; the command may correspond to a control plane operation, such as a command to create a backup (cloud backup see Col. 14 lines 12-19) ... See Col. 14 lines 40-44; Once the distributed control plane 128 has received the command, the distributed control plane 128 may determine whether the command targets an object owned by the first worker node 114 (computing resource) or the second worker node 116 (or a different worker node) (within distributed storage architecture, See Fig. 1A). See Col. 4 lines 5-14; The distributed storage architecture may be hosted separate from and external to the container orchestration platform. This provides the ability to tailor and configure the distributed storage architecture to manage distribute storage in an efficient manner that can be made accessible to any type of computing environment, such as the applications hosted within the container orchestration platform, applications and services hosted on servers or on-prem, applications and services hosted within various types of cloud computing environments, etc. See also Col. 3 lines 61-67 and Col. 4 lines 1-4; Applications may be deployed as containers within the container orchestration platform in a scalable and on-demand manner ... See also Col. 3 lines 41-60. Examiner’s interpretation: Hasti teaches that the distributed storage architecture is external to the container orchestration platform and it’s accessed to perform operations such as cloud backup operations. 
Therefore, the Examiner interprets the distributed storage architecture, taught by Hasti to be located at a remote data center), wherein the control plane is used to manage computing resources located at the remote data center (See Col. 4 lines 31-52; The control plane logic acts as an intermediary layer that facilitates, tracks, and manages worker nodes executing control plane operations requested by the applications hosted within the containers in the container orchestration platform. See also Col. 15 lines 15-21);
create a data object at the service-provider data center to represent a status of the computing resource operation at the remote data center (See Col. 13 lines 62-67, Col. 14 lines 1-11; In some embodiments of receiving the command from the application, a custom resource definition maintained within a distributed database hosted within the container orchestration platform 102 may be created or modified in order to define the command through the custom resource definition. See Col. 17 lines 9-46 and Fig. 4; The custom resource definition may comprise the status field 406 populated by a control plane controller with information from a response received by a worker node that implemented a control plane operation. In this way, the status field 406 may be used by the control plane controller to communicate information to the application regarding execution of the control plane operation. Similarly, the control plane controller can populate an events field 408 with state information of the volume and/or warning information relating the execution of the control plane operation. See also Fig. 3; “distributed database custom resource definition 304”);
send an instruction, to a resource manager at the remote data center, to initiate the computing resource operation (See Col. 14 lines 60-67 and Col. 15 lines 1-21; If the first worker node 114 is the owner of the object targeted by the command, then the distributed control plane 128 may route the command to the first control plane controller 136; the first control plane controller 136 reformats the command … the first control plane controller 136 transmits the reformatted command, such as through a REST API call, to the API endpoint 150 of the first worker node 114 for the first worker node 114 to implement the control plane operation defined within the reformatted command … See Col. 10 lines 52-64; The first worker node 114 may comprise a data management system (DMS) 152 and a storage management system (SMS) 158. The data management system 152 is a client facing frontend with which clients (e.g., applications within the container orchestration platform 102) interact through the distributed control plane 128, such as where reformatted commands from the first control plane controller 136 are received at the API endpoint 150),
wherein, in response to an event associated with performance of the computing resource operation at the remote data center, the control plane receives an indication of the event and updates the data object located at the service-provider data center to represent the status of the computing resource operation indicated by the event (See Col. 15 lines 15-21; the first control plane controller 136 can track the status of performing the reformatted command by monitoring the job. See Col. 15 lines 51-67 and Col. 16 lines 1-18; a control plane controller that has transmitted a reformatted command to a worker node for implementation of a control plane operation may receive a response from the worker node. The response may comprise information relating to a current status (progress completion) of implementing the control plane operation, a result of completing the implementation of the control plane operation, warning information relating to implementing the control plane operation, state information of the object, etc (in response to an event associated with performance of the computing resource operation at the remote data center, the control plane receives an indication of the event) … the warning information or state information (event information, See Col. 19 lines 7-11) of the object may be populated within an event field of the custom resource definition … (updates the data object located at the service-provider data center to represent the status of the computing resource operation indicated by the event));
query the data object maintained at the service-provider data center to obtain the status of the computing resource operation represented by the data object; and provide the status of the computing resource operation represented by the data object maintained at the service-provider data center (See Col. 17; The custom resource definition 402 may comprise a status field 406 (The custom resource definition may be stored within a distributed database 304 within the container orchestration platform 102, See Col. 16 lines 27-29) (data object maintained at the service-provider) ... The status field 406 may be populated by a control plane controller with information from a response received by a worker node... In this way, the status field 406 may be used by the control plane controller to communicate information to the application regarding execution of the control plane operation (providing the status of the computing resource operation represented by the data object maintained at the service-provider). Similarly, the control plane controller can populate an events field 408 with state information of the volume and/or warning information relating the execution of the control plane operation. See Col. 15 lines 51-67 and Col. 16 lines 1-18).
Hasti does not explicitly disclose: monitoring a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource operation at the remote data center, and one or more heuristics for transmission reliability of a network infrastructure that connects the service-provider data center to the remote data center; [and] receive, at the control plane, a status request for the computing resource operation.
However, Sharifi discloses:
monitoring a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource (computing resource 108) operation at the remote data center, and one or more heuristics for transmission reliability of a network infrastructure that connects the service-provider data center to the remote data center (See Col. 14 lines 57-67, Col. 15 lines 1-17, and Fig. 4B; resource status service 110 (executed in the service provider network 102; See Col. 5 lines 18-27 and Fig. 1) determines whether new resource status data 122 has been received from one or more of the resources 108 (executing in different data centers located in different geographic regions; See Col. 8 lines 2-7). If at operation 420 new resource status data 122 has not been received then, at operation 422, the resource status service 110 determines whether a timeout error has occurred, i.e., whether an excessive amount of time has passed since the last new resource status data 122 … If a timeout error has not occurred the routine 400 returns to operation 420 (i.e., determines whether new resource status data 122 has been received). If a timeout error has occurred the routine 400 proceeds to operation 424 where an error is preferably reported, and/or where corrective or other action may be taken ... See Col. 2, lines 14-42; one or more of the network services may require a relatively long period of time to respond to requests for status data regarding the computing resources that they provide. If, however, the SPN (i.e., service provider network) is not able to obtain and send the requested data within that predetermined time (transmission reliability) then a time-out error may occur. The synchronous function call mode has the advantage of providing a faster response, but has the disadvantage of holding the communication channel open while the requested data is being obtained and sent, or a time-out occurs. 
Holding the communication channel open possibly prevents the customer and/or the SPN from using that communication channel for another purpose, such as obtaining other data from the same or a different SPN, or handling another customer. See also Col. 10 lines 10-61 and Fig. 2. Examiner’s interpretation: Applicant discloses in the Specification, in Parag. [0037], that “the service broker 108 relies on heuristics to determine that a status of a computing resource operation represented by the data object 110 may no longer be current. For example, an event indicating that a computing resource operation has completed can be detected by the event manager 122 located at the remote data center 116, and the event manager 122 can send a message indicating completion of the computing resource operation to the event processor 112 located at the service-provider data center 102. However, due to high-latency or failures in the network infrastructure 114, the message may not be received, or may not be received in time, by the event processor 112 to allow the data object 110 to be updated to a status that correctly represents the computing resource operation.” Based on Parag. [0037] of the Specification, the correctness of the status is associated with a time during which to expect receiving the status of the computing resource operation, which is consistent with determining whether an excessive amount of time has passed since the last new resource status data, as taught by Sharifi. Examiner’s note: The timeout occurs due to the service provider network not being able to obtain and send the requested data within that predetermined time. In addition, a timeout error is reasonably interpreted to be caused by poor network reliability, which causes the connection to take too long or break entirely); [and]
receive a status request for the computing resource operation (See Col. 12 lines 62-67 and Col. 13 lines 1-3; the resource status service 110 receives a request 118 for resource status data 122).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the container orchestration platform, taught by Hasti, to monitor a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource operation, and to receive a status request for the computing resource operation, as taught by Sharifi. One of ordinary skill in the art would have been motivated to do so to allow customers to purchase and utilize various types of computing resources on a permanent or as-needed basis (Sharifi, Col. 1, lines 7-46).
Hasti in view of Sharifi does not explicitly disclose receiving the status request for the computing resource operation at the control plane.
However, Chatterjee discloses receive, at the control plane, a status request for the computing resource operation (See Parag. [0075]; node health aggregator (control plane, See Parag. [0072]) may be queried to obtain aggregated node health history using an application programming interface (API) … node health aggregator may provide metrics on computing resources in a cluster based on data, event, errors, etc., received from node problem detector and stored in time series database. See Parag. [0068]; a node problem detector (NPD) is a monitoring component that monitors health of a worker node and periodically runs health checks and reports it back to node health aggregator. See also Parag. [0055]; aggregated health (e.g., attributes associated with rate of job success to failures). See also Parag. [0067]; nodes are computing resources).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the control plane, taught by Hasti in view of Sharifi, to receive a status request for the computing resource operation, as taught by Chatterjee. One of ordinary skill in the art would have been motivated to do so to improve the selection and scheduling of computing systems for performing jobs (Chatterjee, Parag. [0003]).
Claim 10. Hasti in view of Sharifi and Chatterjee discloses [t]he system of claim 9,
Hasti further discloses wherein the program instructions are further configured to cause the one or more processors to:
monitor the status of the computing resource operation by way of the data object located at the service-provider data center; determine to update the status represented by the data object (See Col. 15 lines 15-21; the first control plane controller 136 may create and monitor a job that the first worker node 114 performs in order to implement the control plane operation based upon the reformatted command. In this way, the first control plane controller 136 can track the status of performing the reformatted command by monitoring the job. See Col. 15 lines 51-67 and Col. 16 lines 1-18; The response may comprise information relating to a current status (progress completion) of implementing the control plane operation, a result of completing the implementation of the control plane operation, warning information relating to implementing the control plane operation, state information of the object, etc … See Col. 17 lines 9-46 and Fig. 4; The custom resource definition may comprise the status field 406 populated by a control plane controller with information from a response received by a worker node that implemented a control plane operation);
obtain a correct status of the computing resource operation from a resource manager located at the remote data center (See Col. 15 lines 51-67 and Col. 16 lines 1-18; a control plane controller that has transmitted a reformatted command to a worker node for implementation of a control plane operation may receive a response from the worker node. The response may comprise information relating to a current status (progress completion) of implementing the control plane operation, a result of completing the implementation of the control plane operation, warning information relating to implementing the control plane operation, state information of the object, etc. Examiner’s interpretation: The Examiner interprets “obtaining a correct status” as continuously monitoring the status of the computing resource operation to obtain an up-to-date status); and
update the data object to represent the correct status of the computing resource operation (See Col. 15 lines 51-67 and Col. 16 lines 1-18; The response may comprise information relating to a current status (progress completion) of implementing the control plane operation, a result of completing the implementation of the control plane operation, warning information relating to implementing the control plane operation, state information of the object, etc … the warning information or state information of the object may be populated within an event field of the custom resource definition).
Claim 15. Hasti discloses [a] computer program product comprising:
one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions configured to cause one or more processors (See Col. 28 lines 37-63) to:
receive, at a control plane hosted on a service-provider data center, an action request to perform a computing resource operation at a remote data center (See Col. 13 lines 46-61 and Fig. 1 A-C; distributed control plane 128 may receive a command (action request) from an application hosted within a container of the container orchestration platform 102; the command may correspond to a control plane operation, such as a command to create a backup (cloud backup see Col. 14 lines 12-19) ... See Col. 14 lines 40-44; Once the distributed control plane 128 has received the command, the distributed control plane 128 may determine whether the command targets an object owned by the first worker node 114 (computing resource) or the second worker node 116 (or a different worker node) (within distributed storage architecture, See Fig. 1A). See Col. 4 lines 5-14; The distributed storage architecture may be hosted separate from and external to the container orchestration platform. This provides the ability to tailor and configure the distributed storage architecture to manage distribute storage in an efficient manner that can be made accessible to any type of computing environment, such as the applications hosted within the container orchestration platform, applications and services hosted on servers or on-prem, applications and services hosted within various types of cloud computing environments, etc. See also Col. 3 lines 61-67 and Col. 4 lines 1-4; Applications may be deployed as containers within the container orchestration platform in a scalable and on-demand manner ... See also Col. 3 lines 41-60. Examiner’s interpretation: Hasti teaches that the distributed storage architecture is external to the container orchestration platform and it’s accessed to perform operations such as cloud backup operations. 
Therefore, the Examiner interprets the distributed storage architecture, taught by Hasti to be located at a remote data center), wherein the control plane is used to manage computing resources located at the remote data center (See Col. 4 lines 31-52; The control plane logic acts as an intermediary layer that facilitates, tracks, and manages worker nodes executing control plane operations requested by the applications hosted within the containers in the container orchestration platform. See also Col. 15 lines 15-21);
create a data object at the service-provider data center to represent a status of the computing resource operation at the remote data center (See Col. 13 lines 62-67, Col. 14 lines 1-11; In some embodiments of receiving the command from the application, a custom resource definition maintained within a distributed database hosted within the container orchestration platform 102 may be created or modified in order to define the command through the custom resource definition. See Col. 17 lines 9-46 and Fig. 4; The custom resource definition may comprise the status field 406 populated by a control plane controller with information from a response received by a worker node that implemented a control plane operation. In this way, the status field 406 may be used by the control plane controller to communicate information to the application regarding execution of the control plane operation. Similarly, the control plane controller can populate an events field 408 with state information of the volume and/or warning information relating the execution of the control plane operation. See also Fig. 3; “distributed database custom resource definition 304”);
send an instruction, to a resource manager at the remote data center, to initiate the computing resource operation (See Col. 14 lines 60-67 and Col. 15 lines 1-21; If the first worker node 114 is the owner of the object targeted by the command, then the distributed control plane 128 may route the command to the first control plane controller 136; the first control plane controller 136 reformats the command … the first control plane controller 136 transmits the reformatted command, such as through a REST API call, to the API endpoint 150 of the first worker node 114 for the first worker node 114 to implement the control plane operation defined within the reformatted command … See Col. 10 lines 52-64; The first worker node 114 may comprise a data management system (DMS) 152 and a storage management system (SMS) 158. The data management system 152 is a client facing frontend with which clients (e.g., applications within the container orchestration platform 102) interact through the distributed control plane 128, such as where reformatted commands from the first control plane controller 136 are received at the API endpoint 150),
wherein, in response to an event associated with performance of the computing resource operation at the remote data center, the control plane receives an indication of the event and updates the data object located at the service-provider data center to represent the status of the computing resource operation indicated by the event (See Col. 15 lines 15-21; the first control plane controller 136 can track the status of performing the reformatted command by monitoring the job. See Col. 15 lines 51-67 and Col. 16 lines 1-18; a control plane controller that has transmitted a reformatted command to a worker node for implementation of a control plane operation may receive a response from the worker node. The response may comprise information relating to a current status (progress completion) of implementing the control plane operation, a result of completing the implementation of the control plane operation, warning information relating to implementing the control plane operation, state information of the object, etc (in response to an event associated with performance of the computing resource operation at the remote data center, the control plane receives an indication of the event) … the warning information or state information (event information, See Col. 19 lines 7-11) of the object may be populated within an event field of the custom resource definition … (updates the data object located at the service-provider data center to represent the status of the computing resource operation indicated by the event));
query the data object maintained at the service-provider data center to obtain the status of the computing resource operation represented by the data object; and provide the status of the computing resource operation represented by the data object maintained at the service-provider data center (See Col. 17; The custom resource definition 402 may comprise a status field 406 (The custom resource definition may be stored within a distributed database 304 within the container orchestration platform 102, See Col. 16 lines 27-29) (data object maintained at the service-provider) ... The status field 406 may be populated by a control plane controller with information from a response received by a worker node... In this way, the status field 406 may be used by the control plane controller to communicate information to the application regarding execution of the control plane operation (providing the status of the computing resource operation represented by the data object maintained at the service-provider). Similarly, the control plane controller can populate an events field 408 with state information of the volume and/or warning information relating the execution of the control plane operation. See Col. 15 lines 51-67 and Col. 16 lines 1-18).
Hasti does not explicitly disclose: monitoring a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource operation at the remote data center, and one or more heuristics for transmission reliability of a network infrastructure that connects the service-provider data center to the remote data center; [and] receive, at the control plane, a status request for the computing resource operation.
However, Sharifi discloses:
monitoring a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource (computing resource 108) operation at the remote data center, and one or more heuristics for transmission reliability of a network infrastructure that connects the service-provider data center to the remote data center (See Col. 14 lines 57-67, Col. 15 lines 1-17, and Fig. 4B; resource status service 110 (executed in the service provider network 102; See Col. 5 lines 18-27 and Fig. 1) determines whether new resource status data 122 has been received from one or more of the resources 108 (executing in different data centers located in different geographic regions; See Col. 8 lines 2-7). If at operation 420 new resource status data 122 has not been received then, at operation 422, the resource status service 110 determines whether a timeout error has occurred, i.e., whether an excessive amount of time has passed since the last new resource status data 122 … If a timeout error has not occurred the routine 400 returns to operation 420 (i.e., determines whether new resource status data 122 has been received). If a timeout error has occurred the routine 400 proceeds to operation 424 where an error is preferably reported, and/or where corrective or other action may be taken ... See Col. 2, lines 14-42; one or more of the network services may require a relatively long period of time to respond to requests for status data regarding the computing resources that they provide. If, however, the SPN (i.e., service provider network) is not able to obtain and send the requested data within that predetermined time (transmission reliability) then a time-out error may occur. The synchronous function call mode has the advantage of providing a faster response, but has the disadvantage of holding the communication channel open while the requested data is being obtained and sent, or a time-out occurs. 
Holding the communication channel open possibly prevents the customer and/or the SPN from using that communication channel for another purpose, such as obtaining other data from the same or a different SPN, or handling another customer. See also Col. 10 lines 10-61 and Fig. 2. Examiner’s interpretation: Applicant discloses in the Specification, in Parag. [0037], that “the service broker 108 relies on heuristics to determine that a status of a computing resource operation represented by the data object 110 may no longer be current. For example, an event indicating that a computing resource operation has completed can be detected by the event manager 122 located at the remote data center 116, and the event manager 122 can send a message indicating completion of the computing resource operation to the event processor 112 located at the service-provider data center 102. However, due to high-latency or failures in the network infrastructure 114, the message may not be received, or may not be received in time, by the event processor 112 to allow the data object 110 to be updated to a status that correctly represents the computing resource operation.” Based on Parag. [0037] of the Specification, the correctness of the status is associated with a time during which to expect receiving the status of the computing resource operation, which is consistent with determining whether an excessive amount of time has passed since the last new resource status data, as taught by Sharifi. Examiner’s note: The timeout occurs due to the service provider network not being able to obtain and send the requested data within that predetermined time. In addition, a timeout error is reasonably interpreted to be caused by poor network reliability, which causes the connection to take too long or break entirely); [and]
receive a status request for the computing resource operation (See Col. 12 lines 62-67 and Col. 13 lines 1-3; the resource status service 110 receives a request 118 for resource status data 122).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the container orchestration platform, taught by Hasti, to monitor a correctness of the status represented by the data object maintained at the service-provider data center based on one or more heuristics for performing the computing resource operation, and to receive a status request for the computing resource operation, as taught by Sharifi. This would be advantageous in allowing customers to purchase and utilize various types of computing resources on a permanent or as-needed basis (Sharifi, Col. 1 lines 7-46).
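For illustration only, the timeout-based correctness heuristic cited from Sharifi's routine 400 (operations 420-424) can be sketched as follows. All identifiers are hypothetical and this sketch is not part of the cited reference; it merely models the cited logic of polling for new status data and reporting an error when an excessive amount of time has passed since the last update:

```python
import time

def monitor_resource_status(receive_status, timeout_s=30.0, poll_s=0.5):
    """Poll for new resource status data; report a timeout error if an
    excessive amount of time passes since the last update (cf. Sharifi,
    routine 400, operations 420-424). Hypothetical sketch only."""
    last_update = time.monotonic()
    while True:
        status = receive_status()           # operation 420: new status data?
        if status is not None:
            last_update = time.monotonic()  # fresh data: status is current
            yield ("current", status)
        elif time.monotonic() - last_update > timeout_s:
            # operations 422/424: excessive time elapsed -> report an error
            # so that corrective or other action may be taken
            yield ("timeout", None)
            return
        time.sleep(poll_s)
```

Under this sketch, a poor-reliability network that delays or drops status messages manifests exactly as the "timeout" branch, consistent with the interpretation above.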
Hasti in view of Sharifi does not explicitly disclose receiving the status request for the computing resource operation at the control plane.
However, Chatterjee discloses receive, at the control plane, a status request for the computing resource operation (See Parag. [0075]; node health aggregator (control plane, See Parag. [0072]) may be queried to obtain aggregated node health history using an application programming interface (API) … node health aggregator may provide metrics on computing resources in a cluster based on data, event, errors, etc., received from node problem detector and stored in time series database. See Parag. [0068]; a node problem detector (NPD) is a monitoring component that monitors health of a worker node and periodically runs health checks and reports it back to node health aggregator. See also Parag. [0055]; aggregated health (e.g., attributes associated with rate of job success to failures). See also Parag. [0067]; nodes are computing resources).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the control plane, taught by Hasti in view of Sharifi, to receive a status request for the computing resource operation at the control plane, as taught by Chatterjee. This would be advantageous for improving the selection and scheduling of computing systems for performing jobs (Chatterjee, Parag. [0003]).
Claim 16 is taught by Hasti in view of Sharifi and Chatterjee as described for claim 10.
Claims 3-4, 11-12 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Hasti et al. (Patent No. US 11,789,660), hereinafter Hasti; in view of Sharifi Mehr (Patent No. US 11,842,224), hereinafter Sharifi; further in view of Chatterjee et al. (Pub. No. US 2024/0069998), hereinafter Chatterjee; and further in view of Jiang et al. (Pub. No. US 2025/0117260), hereinafter Jiang.
Claim 3. Hasti in view of Sharifi and Chatterjee discloses [t]he computer-implemented method of claim 2,
Hasti further discloses determining to update the status represented by the data object (See Col. 15 lines 15-21 and lines 51-67, Col. 16 lines 1-18, Col. 17 lines 9-46, and Fig. 4).
Sharifi also discloses determining to update the status represented by the data object (See Col. 7 lines 21-25, Col. 14 lines 57-67, Col. 15 lines 1-17, and Fig. 4B).
The combination does not explicitly disclose wherein determining to update the status represented by the data object further comprises: determining that a time to perform the computing resource operation has been exceeded.
However, Jiang discloses determining to update the status represented by the data object (See Parag. [0063]; The health check component (i.e., job scheduling server) monitors the overall health and status of the job executors and infrastructure to ensure availability and reliability. The health check component may trigger alerts, restarts, or failovers if issues are detected. See Parag. [0063]; The computing resource management component (i.e., job scheduling server) tracks availability, load, capacities, and statuses of the registered job executors. See Parag. [0066]; The cloud-based job execution environment 200 may implement a continuous feedback loop between the distributed job execution system 128 and the job scheduling server 126. For example, the job executors 204-208 may regularly send back status updates, including metrics for utilization data, such as busy versus free threads, overall health, loads, and other data. Examiner’s interpretation: The Examiner interprets “determining to update the status represented by the data object” as performing a continuous status monitoring (tracking) to update the status according to errors/issues to ensure the status is up-to-date) further comprises: determining that a time to perform the computing resource operation has been exceeded (See Parag. [0030]; a computing job (computing resource operation) may be classified as resource intensive if the computing job meets one or more resource intensity criteria. The one or more resource intensity criteria include, for example, that the duration of the computing job exceeds a time threshold. See Parag. [0015]; “job executor,” as used herein, refers to a deployed computing resource. See Parag. [0018]; One example of a resource-intensive computing job may be a long-running job. 
For example, a computing job that takes more than 1 hour, 2 hours, 3 hours, or 10 hours (depending on the implementation) to be executed may be classified as a long-running job (a time to perform the computing resource operation). See also Parag. [0015] [0019] [0024] [0045] [0072]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify determining to update the status represented by the data object, taught by the combination, to further comprise determining that a time to perform the computing resource operation has been exceeded, as taught by Jiang. This would be advantageous to ensure availability and reliability (Jiang, Parag. [0061]).
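For illustration only, the two cited determinations (classifying a long-running job by a time threshold per Jiang, and deciding to update a status once the time to perform the operation has been exceeded) can be sketched as follows. All identifiers are hypothetical and not part of the cited references:

```python
import time

def is_long_running(duration_hours, threshold_hours=1.0):
    """Classify a computing job as resource intensive (long-running) when its
    duration exceeds a time threshold; per Jiang the threshold is
    implementation-dependent (e.g., 1, 2, 3, or 10 hours)."""
    return duration_hours > threshold_hours

def should_update_status(started_at_s, expected_duration_s, now_s=None):
    """Determine to update the status represented by a data object once the
    (expected) time to perform the operation has been exceeded."""
    now_s = time.time() if now_s is None else now_s
    return (now_s - started_at_s) > expected_duration_s
```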
Claim 4. Hasti in view of Sharifi, Chatterjee, and Jiang discloses [t]he computer-implemented method of claim 3,
Hasti does not explicitly disclose the method further comprising: identifying the time to perform the computing resource operation based at least in part on a type of the computing resource operation.
However, Jiang discloses identifying the time to perform the computing resource operation based at least in part on a type of the computing resource operation (See Parag. [0017]; A computing job can be classified as resource intensive according to one or more resource intensity criteria. It will be appreciated that resource intensity criteria may vary, depending, for example, on the nature of the computing jobs executed within the environment. See Parag. [0030]; The one or more resource intensity criteria is the duration of the computing job exceeds a time threshold. See also Parag. [0018] [0024]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to identify the time to perform the computing resource operation based at least in part on a type of the computing resource operation, as taught by Jiang. This would be advantageous to ensure availability and reliability (Jiang, Parag. [0061]).
Claim 11 is taught by Hasti in view of Sharifi, Chatterjee, and Jiang as described for claim 3.
Claim 12 is taught by Hasti in view of Sharifi, Chatterjee, and Jiang as described for claim 4.
Claim 17 is taught by Hasti in view of Sharifi, Chatterjee, and Jiang as described for claim 3.
Claim 18 is taught by Hasti in view of Sharifi, Chatterjee, and Jiang as described for claim 4.
Claims 5, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hasti et al. (Patent No. US 11,789,660), hereinafter Hasti; in view of Sharifi Mehr (Patent No. US 11,842,224), hereinafter Sharifi; further in view of Chatterjee et al. (Pub. No. US 2024/0069998), hereinafter Chatterjee; and further in view of Tokunaga (Pub. No. US 2023/0393925).
Claim 5. Hasti in view of Sharifi and Chatterjee discloses [t]he computer-implemented method of claim 1,
Hasti discloses the network infrastructure that connects the service-provider data center to the remote data center (See Col. 13 lines 46-61 and Figs. 1A-1C).
The combination does not explicitly disclose wherein the one or more heuristics for transmission reliability of the network infrastructure selected from a group consisting of: networking hardware, networking software, a distance of the remote data center to the service-provider data center, historical transmission rates for the network infrastructure, and historical error rates for the network infrastructure.
However, Tokunaga discloses wherein the one or more heuristics for transmission reliability of the network infrastructure selected from a group consisting of: networking hardware, networking software, a distance of the remote data center to the service-provider data center, historical transmission rates for the network infrastructure, and historical error rates for the network infrastructure (See Parag. [0066]; As the “status of the data center internal network 12”, there are, for example, “normal” in which the data center internal network 12 is of a normal status, “time out” in which the response could not be received by a prescribed time (response time threshold described later with reference to FIG. 5) due to disconnection or congestion, and “error” in which the response was received but the content thereof was an error. Examiner’s interpretation: The Examiner reasonably interprets the time out due to disconnection or congestion of a network to be associated with both hardware limitations/failures and/or software issues).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the one or more heuristics for transmission reliability of a network infrastructure that connects the service-provider data center to the remote data center, taught by the combination, to be selected from networking hardware and/or networking software, as taught by Tokunaga. This would be advantageous for supporting the measures to be taken by a maintenance worker for handling a failure that occurred in a system (Tokunaga, Parag. [0001]).
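For illustration only, Tokunaga's three-way network status classification cited above ("normal", "time out", "error") can be sketched as follows. All identifiers are hypothetical and not part of the cited reference:

```python
def classify_network_status(response, elapsed_s, response_time_threshold_s):
    """Classify the status of a data-center internal network as 'normal',
    'time out', or 'error' (cf. Tokunaga, Parag. [0066]). Hypothetical
    sketch; `response` is a dict or None when no response arrived."""
    if response is None or elapsed_s > response_time_threshold_s:
        # no response received by the prescribed time
        # (e.g., due to disconnection or congestion)
        return "time out"
    if response.get("error"):
        # a response was received, but its content was an error
        return "error"
    return "normal"
```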
Claim 13 is taught by Hasti in view of Sharifi, Chatterjee, and Tokunaga as described for claim 5.
Claim 19 is taught by Hasti in view of Sharifi, Chatterjee, and Tokunaga as described for claim 5.
Claims 6, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hasti (Patent No. US 11,789,660), hereinafter Hasti; in view of Sharifi Mehr (Patent No. US 11,842,224), hereinafter Sharifi; further in view of Chatterjee et al. (Pub. No. US 2024/0069998), hereinafter Chatterjee; further in view of Jiang et al. (Pub. No. US 2025/0117260), hereinafter Jiang; and further in view of Vishwakarma et al. (Pub. No. US 2021/0258267), hereinafter Vishwakarma.
Claim 6. Hasti in view of Sharifi, Chatterjee, and Jiang discloses [t]he computer-implemented method of claim 3,
The combination does not explicitly disclose the method further comprising: identifying the time to perform the computing resource operation based at least in part on a predicted time to perform the computing resource operation, wherein a machine learning model is trained to generate the predicted time using features of the computing resource operation, features of computing resources associated with performing the computing resource operation, and features of a network infrastructure that connects the service-provider data center to the remote data center.
However, Vishwakarma discloses identifying the time to perform the computing resource operation based at least in part on a predicted time to perform the computing resource operation, wherein a machine learning model is trained to generate the predicted time using features of the computing resource operation, features of computing resources associated with performing the computing resource operation, and features of a network infrastructure that connects the service-provider data center to the remote data center (See Parag. [0050]; estimates the completion time of the desired operation (e.g., cloud data movement, replication, garbage collection, etc.); the prediction of the completion duration of desired operation on the backup storage system can be obtained using random forest regression (machine learning model), or similar process. See Parag. [0062]; a task duration predictor 610 may refer to a computer program that may execute on the underlying hardware of the backup storage system 602. Specifically, the task duration predictor 610 may be designed and configured to predict a duration (or length of time) that may be consumed, by a background service 606, to complete a desired operation. To that extent, the task duration predictor 610 may perform any subset or all of the flowchart steps outlined in FIG. 8 ... Further, the prediction of any background service task duration may entail generating and applying a random forest regression based predictive model using sets of features (i.e., individual, measurable properties or variables significant to the performance and length of time consumed to complete a given background service task). See Parag. [0060]; With reference back to FIG. 3, once the estimated time of completion is performed (304), the process 300 computes an n-step ahead prediction, 306. In this step, the process uses the system information to collect historical data as a time series for each relevant resource (e.g., CPU, Memory, Disk IO and Network). 
For example, in the case of Data Domain, the historical data can comprise a sar report, IOstat, system performance, and the like, to gather the required information and store in a database. This provides a multivariate approach for probabilistic weighted fuzzy time series (PWFTS) that is used for forecasting compute resources. Unlike prior systems that rely on a single variable for resource prediction, embodiments use a multivariate time series based on parameters such as CPU idle percent, disk I/O, network bandwidth, and memory capacity, among others. See also Parag. [0043] [0056] [0071] [0073] and Claim 11).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify identifying the time to perform the computing resource operation, taught by the combination, to be based at least in part on a predicted time to perform the computing resource operation, wherein a machine learning model is trained to generate the predicted time, as taught by Vishwakarma. This would be advantageous for autonomously and dynamically allocating resources (Vishwakarma, Parag. [0002]).
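For illustration only, the predictor interface described above can be sketched as follows. Vishwakarma describes a random forest regression model trained on features of the operation, the computing resources, and the network; as a self-contained stand-in, this hypothetical sketch substitutes a trivial mean-over-history predictor keyed on operation type, while keeping the same feature interface. All identifiers are hypothetical and not part of the cited reference:

```python
class TaskDurationPredictor:
    """Hypothetical sketch of a task-duration predictor. A real
    implementation per Vishwakarma would fit a random forest regression on
    the feature vectors; this stand-in averages observed durations for the
    same operation type."""

    def __init__(self):
        self.history = {}  # op_type -> list of observed durations (seconds)

    def observe(self, op_type, resource_load, net_bandwidth, duration_s):
        # Features of the operation (op_type), the computing resources
        # (resource_load), and the network infrastructure (net_bandwidth)
        # would all feed a real model; the stand-in keys on op_type only.
        self.history.setdefault(op_type, []).append(duration_s)

    def predict(self, op_type, resource_load, net_bandwidth):
        durations = self.history.get(op_type)
        if not durations:
            return None  # no history for this operation type yet
        return sum(durations) / len(durations)
```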
Claim 14 is taught by Hasti in view of Sharifi, Chatterjee, Jiang, and Vishwakarma as described for claim 6.
Claim 20 is taught by Hasti in view of Sharifi, Chatterjee, Jiang, and Vishwakarma as described for claim 6.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Chen et al. (Pub. No. US 2018/0267833) – Related art in the area of management systems and related methods for managing cloud resources of a plurality of virtual machines in a hybrid cloud system, (Abstract; A management method of cloud resources is provided for use in a hybrid cloud system with first and second cloud systems, wherein the first cloud system includes first servers operating first virtual machines (VMs) and the second cloud system includes second servers operating second VMs, the method including the step of: collecting, by a resource monitor, performance monitoring data of the first VMs within the first servers; analyzing, by an analysis and determination device, the performance monitoring data collected to automatically send a trigger signal in response to determining that a predetermined trigger condition is met, wherein the trigger signal indicates a deployment target and a deployment type; and automatically performing, by a resource deployment device, an operation corresponding to the deployment type on the deployment target in the second cloud system in response to the trigger signal).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELBASST TALIOUA whose telephone number is (571)272-4061. The examiner can normally be reached on Monday-Thursday 7:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Oscar Louie can be reached on 571-270-1684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Abdelbasst Talioua/Examiner, Art Unit 2445