Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-4 and 6-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 6-8, 10-14, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sabath (US 10,897,497) in view of Nabi (US 2019/0034240) and further in view of Gao (US 11,886,905).
Regarding claim 1, Sabath teaches: A method comprising:
identifying a plurality of compute nodes scheduled to undergo an upgrade process (col. 11:33-35, “At block 702 of FIG. 7, a set of one or more nodes, such as worker nodes 410 of FIG. 4, in a cluster of nodes is selected for an update”);
identifying an application executed by one or more of the plurality of compute nodes (col. 11:47-49, “the planner micro-service looks at the number of resources (e.g., containers or pods) on each node and the memory utilization on each node” and col. 3:32-37, “Rather than run an entire complex application inside a single container, the application can be split into micro-services. As used herein, the term “pod” refers to a group of one or more containers, and the term “workload” refers to one or more pods or one or more containers that are currently executing on a node in a cluster” and col. 3:1-3, “the scheduler packs new jobs and workloads into a selected group of nodes in a cluster, thus steering new workloads to a subset of the nodes in the cluster”) including identifying a plurality of applications (col. 9:45-49, “The example application shown in FIG. 5 includes three tiers: Tier 1 514 which is a front-end presentation tier; Tier 2 516 that is a web application logic tier; and Tier 3 518 that is a data tier for providing persistent data”);
determining a minimum node availability budget for each of the plurality of applications (col. 4:54-57, “a minimum number of workloads or pods of particular types may be kept running at all times in order to avoid response time degradation during the updates;” col. 11:58-61, “the planner micro-service can also ensure that a certain percentage of resources in the cluster are running at any point during the cluster upgrade;” and col. 10:24-29, “the disruption budget parameter for web server containers 501 and application logic containers 502 may be set to fifty percent, meaning that at any given time at least fifty percent of all of the containers of these types must be active and healthy (e.g., operating without errors)”); and
generating a batch upgrade scheme for the plurality of compute nodes (col. 12:30-35, “the selecting of a set of nodes for upgrade by the planner micro-service is based at least in part on three considerations: meeting a disruption budget parameter, or cluster availability requirement (e.g., fifty percent of containers should be running at any point during the cluster upgrade)”);
wherein the batch upgrade scheme upgrades a maximum quantity of the plurality of compute nodes in parallel while complying with the minimum node availability budget for each of the plurality of applications (col. 4:49-57, “selecting a next set of nodes to update based on criteria such as, but not limited to the elapsed time it takes to perform the upgrade, the cost in terms of a number of times that each pod or workload is moved, and resilience in terms of the amount of service disruption that is caused by performing the update. For example, a minimum number of workloads or pods of particular types may be kept running at all times in order to avoid response time degradation during the updates”).
Sabath does not expressly disclose; however, Nabi discloses: a batch upgrade scheme (¶ 27, “With rolling upgrades, the system is upgraded in a number of batches. The system is upgraded iteratively one batch at a time in a rolling fashion. In each iteration, a batch size number of hosts are taken out of service for upgrade and subsequently added back to the system”).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have applied the known technique of a batch upgrade scheme, as taught by Nabi, in the same way to the upgrade method, as taught by Sabath. Both inventions are in the field of upgrading nodes, and combining them would have predictably resulted in a “rolling upgrade in a cloud computing environment,” as indicated by Nabi (¶ 1).
Sabath and Nabi do not expressly disclose; however, Gao discloses: wherein the minimum node availability budget for the plurality of applications requires that at least one compute node for each of the plurality of applications be running and live at all times (col. 9:56-67 and col. 10:1-7, “a maximum shutdown quantity corresponding to a service group is a maximum quantity of virtual machines that are allowed to be shut down in the service group when the service of the service group keeps running” and “It is ensured that the total quantity of virtual machines that are deployed on the at least one target host and that are in each target service group is less than or equal to the maximum shutdown quantity corresponding to the corresponding target service group, so that it can be ensured that when the at least one target host is shut down to be upgraded, a virtual machine outside the at least one target host may still support running of a service to an extent, and shutting down the virtual machines on the at least one target host does not affect a service of the one or more target service groups”).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have applied the known technique of wherein the minimum node availability budget for the plurality of applications requires that at least one compute node for each of the plurality of applications be running and live at all times, as taught by Gao, in the same way to the node availability budget, as taught by Sabath and Nabi. Both inventions are in the field of batch upgrading of compute nodes, and combining them would have predictably resulted in an upgrade method such that “it is ensured that a service of the virtual machines deployed on the at least one target host is not interrupted,” as indicated by Gao (col. 2:31-32).
Regarding claim 2, Sabath teaches: The method of claim 1, wherein the upgrade process will render the plurality of compute nodes unavailable for a time period (col. 12:56-60, “At block 704, the executor micro-service locks the node(s) in the set to prevent future workloads from being scheduled on them by, for example, a master node in the cluster such as master node 402 of FIG. 4”).
Regarding claim 6, Sabath teaches: The method of claim 1, wherein generating the batch upgrade scheme to comply with the minimum node availability budget for the application comprises ensuring that fewer than all compute nodes running the application are upgraded simultaneously such that at least one compute node running the application is live at all times (col. 10:24-29, “the disruption budget parameter for web server containers 501 and application logic containers 502 may be set to fifty percent, meaning that at any given time at least fifty percent of all of the containers of these types must be active and healthy (e.g., operating without errors).”).
Regarding claim 7, Nabi teaches: The method of claim 1, wherein generating the batch upgrade scheme further comprises optimizing the batch upgrade scheme to ensure that sufficient resources are available to continue operations during the upgrade process (¶ 29, “the upgrade process starts when the system has enough capacity for scaling, failure handling and upgrade. The upgrade continues as long as the capacity exists, while maximizing the number of resources upgraded in each iteration”).
Regarding claim 8, Sabath teaches: The method of claim 7, wherein the sufficient resources comprises: sufficient CPU (central processing unit) resources are available to continue operations; sufficient GPU (graphics processing unit) resources are available to continue operations; sufficient RAM (random access memory) resources are available to continue operations; and sufficient disk storage resources are available to continue operations (col. 8:43-48, “processing system 300 includes processing capability in the form of processors 21, storage capability including system memory (e.g., RAM 24), and mass storage 34, input means such as keyboard 29 and mouse 30, and output capability including speaker 31 and display 35”).
Regarding claim 10, Sabath teaches: The method of claim 1, wherein the method is implemented in a containerized workload management platform (col. 3:60-65, “Kubernetes®, available from The Linux Foundation®, is one example of a commercially available product that can be utilized by one or more exemplary embodiments of the present invention to provide a framework, or cluster infrastructure code, for clustering and managing groups of nodes that are executing workloads that include containers”).
Regarding claim 11, Sabath teaches: The method of claim 10, wherein the containerized workload management platform comprises a Kubernetes® construct (col. 13:48-51, “In this example, the cluster is a Kubernetes cluster that includes ten worker nodes and various workloads running on the ten worker nodes”).
Regarding claim 12, Nabi teaches: The method of claim 1, wherein the application comprises a plurality of applications, and wherein generating the batch upgrade scheme comprises ensuring that none of the plurality of applications become unavailable during the upgrade process (¶ 27, “To maintain the system availability during upgrades, cloud providers may perform rolling upgrades. With rolling upgrades, the system is upgraded in a number of batches”).
Regarding claim 13, Nabi teaches: The method of claim 1, wherein the batch upgrade scheme comprises a plurality of upgrade groups, wherein each of the plurality of upgrade groups comprises one or more compute nodes to be upgraded in parallel, and wherein the plurality of upgrade groups are upgraded serially (¶ 27, “The system is upgraded iteratively one batch at a time in a rolling fashion. In each iteration, a batch size number of hosts are taken out of service for upgrade and subsequently added back to the system.”).
Regarding claim 14, Sabath teaches: The method of claim 1, wherein each of the plurality of compute nodes is associated with a cluster within a containerized workload management system (col. 3:25-27, “containers are used to isolate an application and its dependencies into a self-contained unit that can be executed on any node in the cluster”).
Regarding claim 19, Nabi teaches: The method of claim 1, wherein the application is a cloud-based application and wherein the plurality of compute nodes are implemented within a cloud-native network platform (¶ 29, “A system and method for rolling upgrade of a cloud system is provided herein”).
Regarding claim 20, Sabath teaches: The method of claim 1, wherein each of the plurality of compute nodes comprises one or more pods (col. 11:65-67, “the measured upgrade phase selects a first set or sets of one or more nodes for upgrading that will minimize the startup of pods or containers on non-updated nodes”), and wherein generating the batch upgrade scheme comprises upgrading a plurality of pods in parallel (col. 2:56-57, “the optimal set may include a single node or a plurality of nodes for concurrent update”).
Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Sabath, Nabi, and Gao, as applied above, and further in view of Bhat (US 2008/0295088).
Regarding claim 3, Sabath, Nabi, and Gao do not teach; however, Bhat discloses: at least a portion of the plurality of applications is executed with redundancy such that the portion of the plurality of applications is executed by two or more compute nodes simultaneously (¶ 5, “For a high-reliability system, hardware and software redundancy may be in place, wherein two substantially identical sets of hardware and software operate simultaneously”).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have applied the known technique of at least a portion of the plurality of applications is executed with redundancy such that the portion of the plurality of applications is executed by two or more compute nodes simultaneously, as taught by Bhat, in the same way to the plurality of applications, as taught by Sabath, Nabi, and Gao. Both inventions are in the field of updating systems’ software, and combining them would have predictably resulted in a method such that “software can be upgraded without any interruption in service,” as indicated by Bhat (¶ 5).
Regarding claim 4, Sabath teaches: The method of claim 3, wherein one compute node of the plurality of compute nodes is configured to execute two or more of the plurality of applications (col. 14:9-12, “Consider a cluster containing a number of worker nodes, or host machines, where each worker node hosts a number of workloads, or tasks, each utilizing some resources on the worker node”).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Sabath, Nabi, and Gao, as applied above, and further in view of Allen (US 11,853,738).
Regarding claim 9, Sabath, Nabi, and Gao do not teach; however, Allen discloses: optimizing the batch upgrade scheme to ensure that sufficient data is available to execute the application during the upgrade process (col. 5:56-59, “With the NDU, the data storage system can continue to provide uninterrupted service and access to the data while the software is being upgraded to the next version across all appliances of the system”).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have applied the known technique of optimizing the batch upgrade scheme to ensure that sufficient data is available to execute the application during the upgrade process, as taught by Allen, in the same way to the batch upgrade scheme, as taught by Sabath, Nabi, and Gao. Both inventions are in the field of updating systems’ software, and combining them would have predictably resulted in “uninterrupted service and access to the data while the software is being upgraded,” as indicated by Allen (col. 5:56-59).
Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sabath, Nabi, and Gao, as applied above, and further in view of Beard (US 2021/0311763).
Regarding claim 15, Sabath, Nabi, and Gao do not teach; however, Beard discloses: a plurality of clusters, and wherein each of the plurality of clusters comprises: a control plane node comprising an API (application program interface) server in communication with all compute nodes mapped to the applicable cluster (¶ 57, “orchestration control plane 115 is extended to support orchestration of native VMs, VM images, and guest clusters”).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have applied the known technique of a plurality of clusters, and wherein each of the plurality of clusters comprises: a control plane node comprising an API (application program interface) server in communication with all compute nodes mapped to the applicable cluster, as taught by Beard, in the same way to the containerized workload management system, as taught by Sabath, Nabi, and Gao. Both inventions are in the field of updating systems’ software, and combining them would have predictably resulted in “an orchestration control plane managing the guest cluster,” as indicated by Beard (¶ 4).
Regarding claim 16, Beard discloses: generating the batch upgrade scheme comprises first upgrading the control plane node of each of the plurality of clusters prior to upgrading the plurality of compute nodes (¶ 89, “Supervisor cluster patch 1202 includes an upgrade of orchestration control plane 115, including orchestration control plane core 1002 and GCIS 405”).
Claims 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sabath, Nabi, and Gao, as applied above, and further in view of Gupta (US 2017/0192771).
Regarding claim 17, Sabath, Nabi, and Gao do not teach; however, Gupta discloses: generating the batch upgrade scheme further comprises selecting an optimal date and time to execute the batch upgrade scheme (¶ 21, “The patching schedule generated in accordance with the present invention is characterized by maximization, subject to one or more constraints, of an objective function” and ¶ 22, “FIG. 1 illustrates scheduling of patching virtual machines of redundancy groups into time windows, in accordance with embodiments of the present invention”).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have applied the known technique of generating the batch upgrade scheme further comprises selecting an optimal date and time to execute the batch upgrade scheme, as taught by Gupta, in the same way to the method of claim 1, as taught by Sabath, Nabi, and Gao. Both inventions are in the field of updating systems’ software, and combining them would have predictably resulted in “scheduling patches and in particular, to scheduling patches, in sequential time windows, applicable to virtual machines within redundancy groups,” as indicated by Gupta (¶ 1).
Regarding claim 18, Gupta discloses: The method of claim 17, wherein selecting the optimal date and time to execute the batch upgrade scheme comprises selecting based on time-based usage history for the application (¶ 76, “Data obtained from a historical database 36 (e.g., VM failure probability, request arrival rates to components, etc.) is provided to the Patch Optimization Problem Generator 37”).
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB D DASCOMB whose telephone number is (571)272-9993. The examiner can normally be reached M-F 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Vital, can be reached at (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACOB D DASCOMB/Primary Examiner, Art Unit 2198